AI and machine learning algorithms have become ubiquitous. But with the accelerating pace of innovation, the need to explain how an AI makes decisions – known as explainability or “XAI” – has become one of the most critical issues facing tech businesses.
At the end of last month, I attended the Open Data Science Conference in London. The focus of the conference is machine learning and AI in practice, bringing together more than 2,000 practitioners and leading experts from across the world.
As a professional working at the nexus of data science and strategic communications, I can see clearly that we need a new language that keeps pace with the innovation, so we can properly explain the evolving role of AI. This language must bridge the gap between technical innovation and societal expectations.
The technical challenge of XAI is not new. The data science community does not lack technical prowess or ingenuity, and relatively new techniques can now elucidate what used to be dismissed as a black box.
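One such model-agnostic technique is permutation importance, which measures how much a model's predictive performance drops when each input feature is shuffled. The sketch below is purely illustrative (the dataset and model are assumptions, not drawn from this article) and uses scikit-learn:

```python
# Illustrative sketch: permutation importance as a simple XAI technique.
# Dataset and model choices here are assumptions for demonstration only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in test accuracy:
# the larger the drop, the more the model relies on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five features the model leans on most heavily.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, mean_drop in ranked[:5]:
    print(f"{name}: {mean_drop:.3f}")
```

An output like this gives non-technical stakeholders a concrete, defensible answer to "what is the model actually using?", which is precisely the kind of explanation the rest of this piece argues leaders must be ready to give.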
However, as businesses overcome the technical challenges of harnessing AI and machine learning algorithms, new reputational challenges emerge: the question is not only how an AI technically makes decisions, but also how business leaders justify those decisions. This is particularly true in regulated industries such as financial services and healthcare.
When leaders take strategic business decisions they are expected to explain those decisions to their stakeholders. When companies implement AI as a core part of their business model, they are now increasingly being asked to explain the decisions of their algorithm as well. The decisions and development process of AI can have a significant impact on corporate reputations. Most leaders are prepared to defend their decisions, but very few are ready to defend those of their algorithms.
Last month Apple faced criticism when it was reported that the algorithm behind the Apple Card discriminated against women by offering them smaller lines of credit than men. The issuer of the credit card, Goldman Sachs, rejected the claims of gender bias, saying that the algorithm does not take gender as an input. Wall Street regulators have now announced an investigation.
Although banks can easily infer your gender from what you put in your shopping basket, the mechanics of the algorithm are not the point at issue. The criticism is directed at the ethics and fairness of the outcome rather than at the algorithm’s decision-making process.
But scrutiny can also be aimed at the development process. Take, for example, the controversy surrounding the collaboration between the NHS and DeepMind Health. The collaboration has undoubtedly led to innovations that will benefit NHS patients, including an AI that detects eye disease from retina scans more accurately than humans. Nevertheless, fresh concerns were raised when DeepMind Health was absorbed into Google Health in September. Corporate partnerships with the NHS will always be contentious, but the example illustrates that ethics and transparency in the process are equally important – regardless of the outcomes.
Companies that implement AI must quickly accept that their commercial success relies just as much on effective communications and stakeholder engagement as it does on technical expertise. Just this May, the Brookings Institution published a report on AI bias detection and mitigation. The report highlights the critical role of cross-functional teams in the development process, including communications professionals as well as technical and legal experts.
To manage reputations in the age of AI, we must understand what AI does – technically as well as contextually. Communications professionals will need a fundamental understanding of how AI technically works. Data scientists face the same challenge in reverse: they must understand how legal, ethical and fairness considerations shape AI in practice and, in turn, corporate reputations.
This will require a new, shared language to explain the details of technical innovations and to negotiate the societal and stakeholder expectations of these outputs.