We Must Recognize Just How Brittle And Unpredictable Today's Correlative Deep Learning AI Is

AI is everywhere today. It has become the modern-day fairy dust sprinkled over any and every project, no matter how little relevance it holds to the task at hand. Robust hand-coded rulesets and stable existing machine learning solutions are being ripped out and replaced with deep learning at an increasing pace. Yet few companies fully understand just how brittle and unpredictable today's correlative deep learning AI is, or how its moments of astonishingly human-like accuracy are matched by catastrophic failure at the most unexpected moments. For AI to succeed, companies must learn to distinguish hype from reality and understand the ways in which AI's brittleness and unpredictability may adversely affect their businesses.

While the deep learning revolution might evoke imagery of silicon incarnations of biological brains, with digital neurons firing together to form complex learning and decision-making machines, the reality is far less glamorous.

Today’s deep learning revolution is powered by simplistic statistical engines designed to extract large numbers of subtle correlations from data and encode them in software.

In many ways they can be seen as merely supercharged Pearson correlations with a few extra twists, but with all of the same limitations.

Like their mundane correlative brethren, deep learning algorithms merely search the data in front of them for relatedness of inputs and outputs.

They no more “learn” or “reason” about the world than a Pearson correlation “learns” or “reasons” about a spreadsheet. They simply encode the data before them.
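The analogy is easy to make concrete. A Pearson correlation faithfully reports whatever linear relatedness exists in the numbers it is handed, while remaining completely blind to structure it was never built to see. A toy sketch (using NumPy purely for illustration; the specific functions here are arbitrary choices, not drawn from any real system):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 1001)

# A noisy linear relationship: Pearson r dutifully encodes it.
linear_y = 2.0 * x + 0.1 * np.sin(37.0 * x)  # small deterministic "noise"
r_linear = np.corrcoef(x, linear_y)[0, 1]

# A perfect *nonlinear* relationship: y is fully determined by x, yet
# the correlation sees almost nothing, because it only measures linear
# relatedness -- it never "reasons" about where the data came from.
quadratic_y = x ** 2
r_quadratic = np.corrcoef(x, quadratic_y)[0, 1]

print(f"linear relation:    r = {r_linear:+.3f}")
print(f"quadratic relation: r = {r_quadratic:+.3f}")
```

The statistic is near 1 in the first case and near 0 in the second, even though the second relationship is the stronger one: y is an exact function of x. The correlation simply encodes the data in front of it, nothing more.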

This means that the entire worldview of a deep learning algorithm is limited to the data it is trained upon, with its strengths and weaknesses defined by the developer's skill in selecting a properly diverse set of inclusive and exclusive examples and in adjusting its various algorithmic dials.

The black boxes that result from these training processes are extraordinarily brittle, with the contours of their encoded patterns entirely unknown.

Feed a deep learning algorithm data that is within the sweet spot of its learned correlations and it will yield uncannily human-like accuracy. Venture too close to the unknown edge of its encoded patterns and it will fail spectacularly.
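This sweet-spot behavior is not unique to neural networks; any model fit purely to data exhibits it. A minimal sketch, using a polynomial least-squares fit as a stand-in for a learned model (the function and evaluation points are arbitrary illustrations):

```python
import numpy as np

# Toy stand-in for a trained model: a degree-9 polynomial fit by least
# squares to sin(2*pi*x), sampled only on the interval [0, 1].
x_train = np.linspace(0.0, 1.0, 200)
y_train = np.sin(2.0 * np.pi * x_train)
model = np.polynomial.Polynomial.fit(x_train, y_train, deg=9)

def true_fn(x):
    return np.sin(2.0 * np.pi * x)

# Inside the training distribution: uncannily accurate.
inside = 0.37
print(f"x={inside}: model={model(inside):+.4f}  truth={true_fn(inside):+.4f}")

# Outside it, the very same encoded pattern fails spectacularly:
# the fitted polynomial diverges wildly from the true function.
outside = 2.0
print(f"x={outside}: model={model(outside):+.1f}  truth={true_fn(outside):+.4f}")
```

Inside the training interval the fit agrees with the true function to several decimal places; a short distance outside it, the prediction is off by orders of magnitude, with no warning from the model itself.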

Deep learning systems are vastly less stable than the hand-coded or classical systems they replace. A statistical machine translation system typically produces reasonable-quality content with consistent accuracy that varies little. A neural translation system, on the other hand, produces little medium-quality content, instead oscillating wildly between the two extremes of human fluency and near-gibberish.

Whereas consumers of statistical translation could adjust their workflows to accommodate its idiosyncrasies, neural translation is entirely unpredictable, with wild swings in accuracy and little ability to predict where it will perform well or poorly. Changing even a single mundane word in a sentence can cause a neural translator to tumble into chaos, while statistical systems exhibit only a minor change in their output.

Putting this all together, companies that accept and fully understand the brittleness and unpredictability of their deep learning systems can leverage them to great effect.

Indeed, driverless cars often combine deep learning systems with hand-coded manual rulesets in the places where unpredictability would introduce too much risk. So too can companies work around the limitations of deep learning, building the necessary sandboxes and operational isolation to allow them to shine their brightest and catch them when they fail.
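One way to picture this sandbox pattern: only trust the learned model inside an envelope estimated from its training data, and route everything else to a conservative hand-coded rule. The sketch below is purely hypothetical; the class, the per-feature envelope heuristic, and the toy model and fallback are all illustrative assumptions, not a real driverless-car API:

```python
import numpy as np

class GuardedPredictor:
    """Serve a learned model only inside its estimated training envelope;
    fall back to a hand-coded rule everywhere else. (Illustrative sketch.)"""

    def __init__(self, model, fallback, train_inputs, k=3.0):
        self.model = model
        self.fallback = fallback
        # Crude envelope: per-feature mean +/- k standard deviations
        # of the training inputs.
        self.mu = train_inputs.mean(axis=0)
        self.sigma = train_inputs.std(axis=0)
        self.k = k

    def in_envelope(self, x):
        return bool(np.all(np.abs(x - self.mu) <= self.k * self.sigma))

    def predict(self, x):
        if self.in_envelope(x):
            return self.model(x), "model"
        return self.fallback(x), "fallback"

# Toy usage: a trivial "model" and a conservative rule-based fallback.
rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 3))
guarded = GuardedPredictor(model=lambda x: float(x.sum()),
                           fallback=lambda x: 0.0,
                           train_inputs=train)

print(guarded.predict(np.zeros(3)))       # in envelope: the model answers
print(guarded.predict(np.full(3, 50.0)))  # far outside: rules take over
```

Real systems use far more sophisticated out-of-distribution detection than a per-feature standard-deviation check, but the architectural idea is the same: the deep learning component is allowed to shine only where its behavior is known, and something predictable catches it when it fails.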

Unfortunately, few companies today have the deep learning expertise to understand these limitations. Instead, they rush ahead with deployments of barely trained algorithms with entirely unknown failure conditions, introducing extraordinary business risk without a clue as to just how dangerous their new deployments really are.

In the end, as the hype and hyperbole fade, companies should spend a bit more time understanding the reality of the shiny new AI systems they are introducing into their businesses and betting their futures upon, before that shine turns to darkness at the worst possible moment.

Based in Washington, DC, I founded my first internet startup the year after the Mosaic web browser debuted, while still in eighth grade, and have spent the last 20 years...