According to Infosys, 90% of C-level executives report measurable benefits from implementing AI within their organization. However, as examples continue to emerge of bias in Machine Learning models, there is also a justified concern about the unintended consequences of a technology that is still not well understood. And that concern is mounting.
Independent AI expert Teresa Escrig (a research scientist, former AI global lead on Cognitive Computing and Computer Vision at Accenture, and author of the new book, Safe A.I.) spoke with Business Talent Group about the dilemma. In our latest Expert Q&A, Escrig talks about implementing AI—and how executives can harness the technology’s benefits while avoiding pitfalls and delivering both transparency and ROI to stakeholders.
New AI applications and use cases seem to come out every day. How should executives start implementing AI and bringing those technologies into their operations?
I always say that the best way to start the AI journey is by “owning” the organization’s data. That means establishing an automatic way to pull data from the different silos within an organization into a Knowledge Graph or Ontology. It is very empowering for C-level executives to have immediate access to their organization’s most relevant data and most valuable insights.
Creating a Knowledge Graph also means that executives don’t have to wait until they have large amounts of clean, unbiased data, which they’d need before they can apply Machine Learning models. Instead, they can generate valuable business insights using the data they have. Machine Learning models might come later in the AI journey if there is a business value that justifies it. At that point, the insights that the Knowledge Graph provides can be used as input into the learning process to ensure that the decisions made by the AI engine are transparent.
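The idea of pulling siloed data into one queryable graph can be illustrated with a minimal sketch. The silo names, field names, and records below are hypothetical, and a real Knowledge Graph would use a dedicated store; this just shows how facts scattered across systems can be unified as subject–predicate–object triples around a single entity.

```python
from collections import defaultdict

# Hypothetical records from two organizational silos (illustrative names only).
crm_records = [
    {"customer": "Acme Corp", "segment": "Manufacturing", "account_manager": "J. Lee"},
]
erp_records = [
    {"customer": "Acme Corp", "open_orders": 12, "region": "EMEA"},
]

def build_knowledge_graph(*silos):
    """Merge siloed records into (predicate, object) facts keyed by subject."""
    graph = defaultdict(set)
    for silo in silos:
        for record in silo:
            subject = record["customer"]
            for predicate, obj in record.items():
                if predicate != "customer":
                    graph[subject].add((predicate, obj))
    return graph

graph = build_knowledge_graph(crm_records, erp_records)

# One unified view of the entity, regardless of which silo held each fact.
print(sorted(graph["Acme Corp"]))
```

Once entities from different silos are linked this way, executives can query across the organization's data without waiting for a large, cleaned training set.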
What other methods do you recommend for surfacing the best opportunities?
I’ve developed a methodology that encourages executives to think about how they would like to see their organization transformed in the next one, three, and five years. What challenges would they like to have solved? What would their products and services look like? Then, I ask them to assign to these ideas both an approximate cost of development and a business value. Finally, we organize the ideas in a two-dimensional chart, yielding a prioritized list of projects that will bring the highest ROI at the lowest cost.
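The prioritization step can be sketched in a few lines. The project names and the cost and value figures below are invented for illustration; the point is simply that ranking ideas by their value-to-cost ratio reproduces the two-dimensional chart as an ordered list.

```python
# Hypothetical project ideas with estimated development cost and business
# value in arbitrary units; names and numbers are illustrative only.
ideas = [
    {"name": "Churn early-warning insights", "cost": 3, "value": 9},
    {"name": "Automated invoice matching", "cost": 2, "value": 4},
    {"name": "New subscription business model", "cost": 8, "value": 10},
]

def prioritize(ideas):
    """Rank ideas so the highest value at the lowest cost comes first."""
    return sorted(ideas, key=lambda i: i["value"] / i["cost"], reverse=True)

for idea in prioritize(ideas):
    print(f'{idea["name"]}: value {idea["value"]}, cost {idea["cost"]}')
```

A simple ratio is one reasonable scoring choice; an organization might instead weight strategic fit or risk, but the ranking mechanism stays the same.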
How do you calculate the ROI of implementing AI when there are so many unknowns?
There are mechanisms that help assign an approximate ROI to each project. For example, how much market share would an organization gain if it were the first to solve a particular challenge in its industry? Solving current problems and even transforming organizations will bring value from AI, but only to a point. The greatest value will come from the new business models that organizations discover while solving the current challenges in their industry. To capture that benefit, they must invest in reskilling their employees, who already know the industry. We have seen many examples of this phenomenon in big tech companies like Google, Apple, and Amazon, which have branched into many business areas beyond the ones they started with.
What are some of the most common misconceptions about implementing AI in the business world?
The biggest misconception about AI in the industry is that it’s the same thing as Machine Learning. In fact, Machine Learning is only a subset of AI technologies, among many others that the industry has not yet exploited. This has caused a number of recent setbacks in the AI field.
To name a few: Machine Learning requires large amounts of data to generate meaningful results, and most organizations don't have full control over their data, which remains in silos. Machine Learning mainly does classification and prediction, so data scientists have been looking for challenges in their organizations that fit that type of technology. The data used have not always been debiased, and we have seen harmful outcomes from some Machine Learning models. The technology is opaque: nobody can fully explain why a model produces a given result. And when it fails, as it does, whole industries, like the autonomous vehicle industry, face mandatory halts to code development and on-road testing.
But if we consider all AI technologies as tools in the toolkit—not just Machine Learning—we will be able to focus on the challenges that each industry is facing and stop focusing on the technology. The AI winners and disrupters will be organizations that understand the power of AI in this broader perspective and create a robust strategy that focuses on solving challenges, improving products and services, and discovering new business models along the way.
In your book, Safe A.I., you write that executives have a responsibility to understand what they are funding—and you try to explain AI concepts in a way that everyone can understand.
Although most organizations have already begun to adopt AI in their business, only 17% of companies have an AI strategy in place and know how to source the data that will allow AI to work. Many organizations still lack the foundational practices they need to create value from AI at scale. Data scientists are often left to choose the projects they will work on. The ROI is not always clear, nor is the impact on the brand if a model's behavior deviates from what is intended. The decisions made by an AI engine are ultimately business executives' responsibility. Yet most of the time, business executives are not involved in the process.
That’s why I wrote Safe A.I. The book is designed to empower non-technical executives by educating them about what responsible and safe AI is and what it can do for them—so they can forget about the technology and focus on the challenges that they want to see solved. I explain, with personal stories, three things: (1) why Machine Learning and Deep Learning algorithms are considered black boxes, and why it is dangerous to have only this type of AI in your organization; (2) a simple-to-understand overview of other AI technologies that provide transparency to boost trust for stakeholders; and (3) a step-by-step process to define a Responsible AI Journey for organizations that will allow them to harness the benefits of AI and avoid the pitfalls.
I am also a keynote speaker at corporate conferences and meetings, where I introduce the concept of Safe AI in a very practical way. I conduct corporate workshops that help executives define their Safe AI strategies. In a weekend, they come up with a Responsible AI strategy that they can present to their board and use in hiring to solve the problems they have defined, with the highest ROI and the fewest pitfalls.
One example from your work at Accenture was the development of a module that provided transparency into the decisions made by autonomous vehicles.
Autonomous Vehicles contain many black-box Machine Learning models. These models provide no context for the decisions the car makes. When an accident happens, the engineers don't know where to look in the code to understand what happened. Quality control of Autonomous Vehicle algorithms is a nightmare.
We built a module that was inserted between the Machine Learning algorithms to associate the decisions made by the car with the car's context in real time. The context might include how many lanes there were, whether there were cars around the autonomous vehicle, and, if so, where they were and what they were doing. The result was a report of the autonomous vehicle's behavior and the reasons behind it. When something went wrong, the engineers knew where to fix it. This reduced the cost of algorithm development by up to 30%, and, most importantly, by increasing transparency it increased trust among all stakeholders.
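A rough sketch of that idea is pairing each black-box decision with the qualitative context perceived at that moment, so a reviewable report exists after the fact. All class, field, and value names below are hypothetical illustrations, not Accenture's actual module or API.

```python
from dataclasses import dataclass
import json

@dataclass
class DrivingContext:
    """Hypothetical snapshot of what the vehicle perceives."""
    lanes: int
    nearby_vehicles: list  # e.g. [{"position": "front", "action": "braking"}]

def annotate_decision(timestamp, decision, context: DrivingContext):
    """Attach real-time context to a model's decision for later review."""
    return {
        "timestamp": timestamp,
        "decision": decision,
        "context": {
            "lanes": context.lanes,
            "nearby_vehicles": context.nearby_vehicles,
        },
    }

# One annotated decision: an engineer reviewing an incident can now see
# not just what the car did, but the situation it believed it was in.
report = annotate_decision(
    "2023-04-01T10:15:02Z",
    "slow_down",
    DrivingContext(lanes=2, nearby_vehicles=[{"position": "front", "action": "braking"}]),
)
print(json.dumps(report, indent=2))
```

Accumulating these records over a drive yields the kind of behavior-plus-reasons report the interview describes, which is what lets engineers trace a failure back to a specific situation.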
About the Author: Leah Hoffmann