Explainable AI Is The Next Big Thing In Accounting And Finance

By Aaron Harris

Aaron Harris is CTO of Sage, leading the Emerging Technology and Collective Intelligence teams to make Sage a great SaaS company.

Cloud computing has ushered us into an era of virtually unlimited processing power, and that abundance has enabled breakthroughs like AI, which is driving the next business revolution. AI promises human-like thinking at computer-like speed. The coming revolution is all about AI freeing people from low-value, repetitive activities so they can focus on high-value, strategic work.

Much of what we see in AI today works to reproduce the way natural intelligence operates, in the hope of reaching human-level decisions that are faster and more circumspect. In the world of accounting, for example, one application of AI is identifying spending variance (i.e., transactions that break from the normal practices of a company or industry).

A human being can identify spending trends and variance in a group of a few hundred transactions, most of which turn out to be false positives after further investigation. AI can identify spending trends and variance across billions of transactions and perform that further investigation to eliminate false flags within milliseconds. The concern in this scenario is that the “further investigation” is some nebulous black box that we are expected to trust. This element of trust brings us to explainable AI.
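As a toy illustration of the kind of variance flagging described above (a minimal sketch, not any vendor's actual method), an outlier test might use the median absolute deviation rather than the mean, so a single very large transaction cannot mask itself by inflating the average:

```python
# Hypothetical sketch: flag variance transactions with a robust z-score.
# The data, the 0.6745 constant and the 3.5 threshold are illustrative
# assumptions, not a description of any specific product.
from statistics import median

def flag_variances(amounts, threshold=3.5):
    """Return indices of transactions far from the median, measured in
    median-absolute-deviation (MAD) units."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)
    if mad == 0:          # all amounts identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

amounts = [120, 115, 130, 118, 122, 5000, 125, 119]
print(flag_variances(amounts))  # → [5], the 5000 outlier
```

The 0.6745 constant rescales the MAD to be comparable to a standard deviation under a normal distribution, and 3.5 is a common rule-of-thumb cutoff; a naive mean/standard-deviation test on the same data would let the outlier hide inside the variance it creates.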

Explainable AI has the added requirement of being able to explain to a human being how it arrived at a conclusion. We have already seen this as a mandate in medical diagnostic software, where the control of virtually all decision parameters has to reside with the physician. This allows the physician to not only control the process but ultimately be able to explain and trust the outcome. In our accounting example above, the AI would need to explain, in human terms, how it selected variance transactions and how it chose which ones to eliminate as false flags. And it would need to return this explanation with the results.
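Continuing the toy example, “returning the explanation with the results” can be as simple as pairing every flagged transaction with a plain-language reason instead of a bare verdict. Everything here (function name, threshold, wording of the reason) is an illustrative assumption:

```python
# Hypothetical sketch: attach a human-readable reason to each flagged
# transaction, so the result carries its own explanation.
from statistics import median

def flag_with_reasons(amounts, threshold=3.5):
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts) or 1.0
    results = []
    for i, a in enumerate(amounts):
        z = 0.6745 * abs(a - med) / mad
        if z > threshold:
            results.append({
                "index": i,
                "amount": a,
                "reason": (f"{a} deviates {z:.1f} robust standard "
                           f"deviations from the median of {med}"),
            })
    return results
```

The point is structural: the explanation is computed from the same quantities that drove the decision, so a reviewer can check the reasoning rather than take the flag on faith.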

Putting this demand on AI isn’t any different from the demand we have always placed on expert decision-makers. When we take our cars to a mechanic, we expect an explanation of the diagnosis and the services to be performed. We do the same with our dentists, lawyers and stockbrokers. Part of what makes a service professional trusted is the ability to explain the decision-making process. We are now putting that responsibility on AI. We are essentially saying to AI, “Before I buy into your prognosis of the situation, tell me how you arrived at it.”

Answering this demand will put a burden on technologists, who need to find the delineation point between what AI can do and readily explain and what needs to be left to the strategic thinking and creativity of people. In accounting and finance, this will likely mean something as simple as separating big data tasks, which suit early AI applications, from qualitative concerns, which often carry complexities and nuances. It may also mean something as complex as codifying GAAP for the AI to use and then reference in its explanations. GAAP is very much a human-interpreted set of principles, so codifying it may also require building in levers to be adjusted according to the judgment of accounting and finance professionals.

An area of investment for my company is continuously evaluating business activity to identify changes in performance trends. A change in trend could signal a coming opportunity or risk. This is an excellent example of explainable AI. With explainable AI, the product will spot the change but then list and rank the factors driving the change. If, for example, AI spots a renewal risk on a large contract, a finance leader can adjust plans by shifting more resources to address the specific contract.
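A minimal sketch of what “list and rank the factors driving the change” could look like, with factor names and figures invented purely for illustration:

```python
# Hypothetical sketch: rank the factors behind a revenue change by the
# size of their contribution. All names and numbers are made up.
prior   = {"renewals": 400, "new business": 250, "services": 150}
current = {"renewals": 310, "new business": 270, "services": 145}

# Contribution of each factor to the overall change
deltas = {k: current[k] - prior[k] for k in prior}

# Rank by magnitude, so the dominant driver surfaces first
ranked = sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)

for factor, change in ranked:
    print(f"{factor}: {change:+d}")
# renewals' -90 drop ranks first, pointing at the renewal risk
```

Ranking by magnitude is what turns a bare anomaly alert into something actionable: a finance leader sees immediately that renewals, not new business, deserve the shifted resources.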

Among the companies pushing the envelope on explainable AI is Chatterbox Labs, a technology company with offices in London and New York. They’re applying several methodologies to bring the next generation of AI to various data-driven business functions, with a focus on explainable AI. They see major applications for explainable AI in areas that are both complex and highly regulated, namely healthcare, financial services and pharmaceuticals.

As we ask finance and accounting professionals to weigh in on codifying the complex principles and controlling the levers that guide the AI, we will also need to raise the bar on their understanding of data and how their input might affect results. This doesn’t mean that we make every accounting expert a data scientist, but we will have to engage business educators and educational institutions in preparing professionals for this new paradigm.

Though we may never separate finance experts from their spreadsheets, we will need them to expand their view of how data interacts and what explainable AI is really telling them. This might become a new role or expertise in the world of accounting and finance, one that involves building translation models from codified AI processes to human-readable explanations.

One of the overarching keys to AI success is the level of trust between humans and the AI systems supplementing human activity. AI solutions need to not only achieve high levels of accuracy, but they must also set reasonable expectations. On the human side, it helps to have a baseline understanding of how the technology works. I do believe the future’s most successful accountants will have a firm grip on statistics and computer algorithms.

As AI and machine learning advance, requiring explainable AI and creating verifications of those explanations will become a check against malicious AI or AI that has simply gone off the rails. Explainable AI then becomes a security mechanism for stakeholders to ensure that AI is doing what it is intended to do. The future of AI seems limitless and beneficial as long as we approach it with the same kind of rational thought and personal responsibility we use when interacting with the human experts we rely on to help us make decisions in our lives.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?

Aaron Harris is Senior Vice President and Head of Engineering and Technology for Sage Intacct.
