Total Business Magazine

AI Ethics: End Users Need Trust, Not Explainability

Ved Sen, Digital Evangelist at Tata Consultancy Services, discusses how users can’t, and don’t need to, understand AI, since we already use many products we don’t understand. The debate about explainable AI therefore needs to be held within businesses. For users, it’s a question of trust, not explainability.

 

Many years ago, when my father went shopping for fish in Calcutta, he would follow a simple rule. Either you know your fish, or you know your fishmonger. And therein lies the distinction between trust and explainability that is at the heart of the AI debate.

AI and algorithms can drive amazing results – drawing on scans and images, or even our sounds and microbial information, to tell us things that are impossible for humans to spot or deduce. This allows us to fight diseases like cancer and extend life spans. Beyond healthcare, AI can give us better outcomes in areas as diverse as seismology and Google searches, IT infrastructure and fraud prevention.

Businesses use AI to automate decisions. Consider, for example, a home loan application or a medical insurance claim. On one side is a consumer or patient for whom the outcome has significant personal and emotional implications and costs. The business, on the other hand, is trying to take emotion out of the equation. This is typically an area where an AI tool might make a decision based on a learning algorithm. That poses an immediate problem: the black-box model computes an outcome in a way that is not easily interpreted, so the person being rejected for a loan can’t understand why. Further, a learning algorithm may keep evolving, so the computation used to reject one candidate and approve the next may actually be different. And although it may be ‘better’, it hardly seems fair.
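To make that concrete, here is a minimal, hypothetical sketch of the issue – not a description of any real lender’s system. The features, data and model choice are invented for illustration; the point is simply that a model which keeps learning between decisions can answer two near-identical applicants differently.

```python
# A hypothetical sketch: an online learning model that keeps updating between
# decisions, so two near-identical loan applicants can receive different
# answers. Data, features and thresholds are invented for illustration only.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Toy historical applications: [income in £k, debt-to-income ratio]
X_hist = rng.normal(loc=[50, 0.40], scale=[10, 0.10], size=(200, 2))
y_hist = (X_hist[:, 0] - 60 * X_hist[:, 1] > 20).astype(int)  # 1 = approve

model = SGDClassifier(random_state=0)
model.partial_fit(X_hist, y_hist, classes=[0, 1])

applicant = np.array([[48.0, 0.41]])  # a borderline application
print("Decision today:", model.predict(applicant)[0])

# Overnight, the model keeps learning from a fresh batch of outcomes...
X_new = rng.normal(loc=[55, 0.35], scale=[10, 0.10], size=(200, 2))
y_new = (X_new[:, 0] - 60 * X_new[:, 1] > 20).astype(int)
model.partial_fit(X_new, y_new)

# ...so an almost identical applicant may now get a different answer,
# computed by what is effectively a different model.
print("Decision tomorrow:", model.predict(applicant)[0])
```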

As we allow more and more decisions to be made algorithmically, the need for explainable AI has grown. We recently partnered with Ditto, who look to bring expert knowledge into systems via an explainable AI approach.

However, in our everyday lives we interact with complex products and services – from hip surgery to fund management, from combi-boilers to hybrid cars, and from airplanes to bridges. In doing so, we implicitly trust their capability without any real understanding of how they work, even when our very lives are at stake.

 

So why do we seek explainability in AI, when we don’t actually have it in many areas of our lives? 

Going back to my shopping analogy: when buying fresh vegetables or fish, you might check them for freshness. When we have trust, we don’t seek an explanation. A brand is nothing but a way of bypassing user testing for every product we use. You don’t need to test-run every pair of Nike shoes you buy, nor do you perform UAT before buying MS Word. We are used to trusting brands in implicit ways.

Trust is a multi-faceted entity. It implies competence, but also intent. When, as passengers, we trust a plane with our lives, we are counting on both the competence and the intent of the airline. When we apply for a loan or submit an insurance claim, by contrast, we don’t usually question the organisation’s competence but rather its intent. This is where transparency starts to play a role.

The need for transparency often arises from a lack of trust in the provider. If we could trust a provider, then we might trust their AI as well. For example, a whole lot of technology (some of it intelligent) goes into cars, but we don’t queue up for an explanation of how it works before we drive one. When we don’t trust a provider, explainable AI becomes a real need: we want to know why we are being excluded from an option or being given an answer that feels like an opaque judgement.

 

Which brings us to the second question: how can we build trust? 

Trust is built over time and through consistency. Trust is often brittle, but also strangely resilient. People may feel differently about you as a brand, but they may continue to interact with you and trust your services if you work to address problems transparently. This might be driven by a lack of alternatives, or simply by the fact that one mishap does not break a long relationship.

However, truly building the type of trust that would preclude explainability takes years and, ironically, transparency. In the light of the recent crashes, Boeing is no doubt aware that it might be a long time before people are willing to implicitly trust its AI tools. Facebook is similarly addressing the lack of faith in its algorithms in the light of recent data misuse. And the financial crash of 2008 still casts a shadow over people’s trust in banks.

 

So, what does that mean for explainable AI?

If you were flying onboard an autonomous plane, you would absolutely need to trust the AI, but you probably wouldn’t need, or want, to understand exactly how it works. From the provider’s perspective, however, a high level of explainability would be essential. You would want replicability of decisions, simulation of a vast number of situations, and a near guarantee that the AI will behave in both predictable and safe ways.

As algorithms evolve and learning systems become more sophisticated, businesses must be able to track their progress, provenance and decision flows. We also need to guard against the spectre of rogue AI, however vague it seems. We already know the perils of bias in data and training. And there are examples of AI outfoxing experts – notably in chess, where programs have made counter-intuitive moves that grandmasters questioned, only to realise that the AI was being smarter than they thought.
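What might tracking provenance and decision flows look like in practice? Below is a hypothetical sketch of the kind of record a business could keep for every automated decision. The field names and structure are illustrative assumptions, not an established standard or any particular vendor’s schema.

```python
# A hypothetical decision record that an internal review team could query
# after an adverse event or a questionable outcome. Field names and the
# log format are assumptions made for illustration.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_name: str       # which model produced the decision
    model_version: str    # exact version / training snapshot (provenance)
    inputs: dict          # the features the model actually saw
    output: str           # the decision returned to the customer
    top_factors: dict     # e.g. feature attributions from an explainer tool
    decided_at: str       # timestamp, for tracking decision flows over time

record = DecisionRecord(
    model_name="loan-approval",
    model_version="2024-05-12-retrain-07",
    inputs={"income": 48_000, "debt_ratio": 0.41},
    output="declined",
    top_factors={"debt_ratio": -0.62, "income": 0.18},
    decided_at=datetime.now(timezone.utc).isoformat(),
)

# Append-only log: each decision becomes one line of JSON that experts can
# later replay to decipher the rationale behind an outcome.
with open("decision_log.jsonl", "a") as log:
    log.write(json.dumps(asdict(record)) + "\n")
```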

Explainable AI is therefore not a consumer phenomenon. It’s a capability that organisations need to build in so that, in the case of an adverse event or a questionable outcome, their experts can decipher the rationale and conclude whether the AI has ‘gone rogue’, made a mistake, or simply jumped ahead of the game. What consumers need is trustable AI, and that means AI that comes from trustable organisations.
