Making Choices With Data And AI

Post written by

Mark Robinson

Mark Robinson is the Co-Founder and Chief Marketing Officer of Kimble Applications.

This is a story about how something I learned on a routine plane journey changed my understanding of how we should use artificial intelligence (AI) in the workplace -- and why we should use it to support human decision making. Humans need help to make good choices. We respond to context and to our own unconscious biases as much as to data. Presenting people with facts often isn't enough; they need what behavioral scientists call "choice architecture" to nudge them toward the logical next step.

Here is an example from my own experience of choice depending on context.

I fly a lot. I’ve got a standard routine. Before I leave home or the office to catch a flight, I print off all the documents I have been asked to comment on. During the 30 minutes or so after the seat belt sign goes on and the laptop has to go away, I read the printed copy. As soon as the light goes off, the laptop comes out, I email comments and clear down the sort of tasks you keep putting off but that give you a sense of satisfaction when you complete them. Then a meal arrives, the laptop goes away and I switch on the in-flight entertainment system to watch while I’m eating.

A few months ago, I reached a tipping point in my routine. After a frantic search for a suitable film or show to watch, I realized that, thanks to my accumulation of flights, I had managed to watch everything I wanted to watch as well as several things I wished I hadn't. In desperation, I tried what is called the learning channel. I was confronted by two options: an Oxford Union lecture featuring actor Paul Giamatti and an interview with Michael Lewis about his latest book on economics. I plumped for Paul Giamatti on the basis that Sideways, a movie about a wine-tasting tour of California wine country, is one of my favorite films. Honestly, it was a bit dull, and 30 minutes later, as I started on a chocolate dessert thingummy, I realized I was going to have to watch the economics lecture after all. It was a full 90 minutes -- a fully captivating 90 minutes.

As it turns out, I was so captivated that I watched it again (and even wrote notes!). On the flight home, I watched it again. I even bought the book Michael Lewis plugged during the lecture -- The Undoing Project. I had stumbled into choosing something that was going to change the way I understand how AI can be used to support better decision making.

So, what was it that so piqued my interest? Well, first let me tell you about Michael Lewis, with whom I am ashamed to say I was unfamiliar until I watched this. Lewis is an economics journalist with a knack for making apparently boring subjects interesting, funny and understandable to people like me who wouldn't go near an economics book. Many of his books have been made into successful films, including The Big Short, Moneyball and The Blind Side. His latest book tells the story of the lives of Amos Tversky and Daniel Kahneman, two Israeli cognitive psychology experts, one of whom won a Nobel prize for economics (strangely enough, I also learned that there is no Nobel prize for psychology). Anyway, it turns out that through decades of research, they pretty much wrote all the definitive works on understanding why we make decisions.

The essence of their research boils down to what is called System 1 and System 2 thinking -- the two ways the brain makes decisions. System 1 is fast, gut-feel judgment; System 2 is the slower, more calculated mode in which you consider all the variables. What they discovered over two decades of research is that while we think we make most decisions using the System 2 approach, in reality, 95% of the time we simply rely on gut-feel.

Applying psychology to economics, they were able to show that people are really not as rational as they think they are -- or as economists used to assume they were. When choices were framed as stories, most people's gut-feel overrode logic, and they fell into the "trap" of irrational thinking.

Another finding was that the same dilemma posed in different ways could elicit a risk-avoidance or a risk-taking response. This work exposed human weaknesses that economists had simply not noticed before.

I then started thinking about what this means for using AI in the workplace. We hear a lot about workplace technology that can provide data, and we have tended to assume that decision making should be left to humans. But if we recognize that people are prone to making snap decisions regardless of the data provided to them, then we can use AI to point out the logic in the situation -- perhaps to create a kind of AI-based Mr. Spock, the logical Vulcan character in Star Trek who advised Captain Kirk.

What the Nobel prize-winning science tells us is that we need help in making decisions and in interpreting data; we don't need more data, we need more help to see the forest for the trees. We need to be protected from our in-built biases, which can unwittingly lead us to the wrong decisions. And we need the confidence that if we delegate decision making to less experienced colleagues, there is help to guide them in making the right decision at the right time.

AI can help. This is not about AI making decisions for us -- it's about AI guiding us to make the right decisions, consistently and through a disciplined process. We can use AI to filter out the background noise so we can see a choice for what it is.

One day when I get on a plane, maybe there will be an AI-enabled bot that will suggest what I might enjoy watching, and perhaps it will also recommend that I avoid the dessert. Like Captain Kirk himself, however, I am not promising to go along with all the decisions it may suggest!
