Explainability in AI: The Key to Trustworthy AI Decisions

AI: The Next Transformation

In an age where artificial intelligence (AI) is transforming industries across the globe, the concept of explainable AI (XAI) has emerged as a critical factor in building trust and transparency in automated decision-making. Unlike traditional AI models that often operate as “black boxes,” XAI aims to make the decision-making process understandable and interpretable to human users.

Trusted Insights for What's Ahead™

  • As AI systems become more complex and integral to various industries, the demand for XAI will grow, making it a strategic priority for CEOs across sectors. The evolution of XAI ensures accountability in AI decision-making, signaling a shift toward responsible and ethical AI use.
  • The integration of XAI across industries offers significant opportunities for enhancing trust and compliance, making it essential for organizations to invest in XAI to gain a competitive edge.
  • Successful implementation of XAI requires a strategic approach that includes skilled personnel and change management, making it a key consideration for future organizational planning across diverse fields.
  • The global expansion of XAI technology, estimated to be worth $21 billion by 2030, highlights the bridging of the gap between explainability and trust, leading to more effective and transparent decision-making across industries.

Strategic Priority Across Sectors

XAI is a subfield of AI that focuses on creating AI models that can explain their decision-making processes in a way that humans can understand. It involves the use of specific statistical tools such as feature importance, partial dependence plots, and counterfactual explanations, all of which can provide insights into why an AI model made a particular decision. XAI demystifies AI decisions, making them understandable and fostering trust, which is crucial in sectors where AI decisions can significantly affect businesses and individuals. These statistical tools, collectively referred to as an “explainability layer,” are integrated into existing trained models to gain insights into why and how precisely a particular recommendation from the AI algorithm is the one that minimizes the loss function.
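One of the tools named above, feature importance, can be illustrated with a permutation test: shuffle one input column at a time and measure how much the model's accuracy drops. The sketch below is illustrative only; it uses synthetic data and a hand-set linear scorer standing in for a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on x0, weakly on x1, not at all on x2.
X = rng.normal(size=(500, 3))
y = (2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500) > 0).astype(int)

# A trivial "model": a fixed linear scorer standing in for a trained classifier.
weights = np.array([2.0, 0.5, 0.0])

def predict(X):
    return (X @ weights > 0).astype(int)

def accuracy(X, y):
    return float(np.mean(predict(X) == y))

# Permutation importance: shuffle one column at a time and measure the
# drop in accuracy -- a bigger drop means the model relies on that feature.
baseline = accuracy(X, y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(baseline - accuracy(Xp, y))

print({f"x{j}": round(imp, 3) for j, imp in enumerate(importance)})
```

The output shows a large accuracy drop for the feature the scorer actually relies on and no drop for the irrelevant one, which is exactly the insight an explainability layer surfaces for stakeholders.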

The application of XAI spans sectors including finance, health care, manufacturing, and transportation.

The growing importance of XAI reflects a broader societal demand for accountability and ethics in AI. As AI systems become more prevalent in critical decision-making processes, the need for clear explanations of how and why decisions are made becomes paramount. This understanding not only fosters trust among users and stakeholders but also facilitates regulatory compliance and ethical alignment.

Enhancing Trust and Compliance

Integrating AI into various industries comes with unique challenges and opportunities. The need for transparency in AI decision-making and compliance with regulations is universal, and XAI offers a solution to these challenges.

People use what they understand and trust. This is especially true of AI. The businesses that make it easy to show how their AI insights and recommendations are derived will come out ahead, not only with their organization’s AI users, but also with regulators and consumers—and in terms of their bottom lines. - Liz Grennan, associate partner at McKinsey1

Opportunities

Risk management: XAI’s ability to transform risk assessment is evident across sectors. Traditional AI models, often seen as “black boxes,” are now being augmented with explainability layers. This transparency allows for more accurate risk evaluation, from credit scoring in banking to failure prediction in machine manufacturing, leading to more informed and accountable decision-making. In banking, for instance, an AI model might predict that a borrower is likely to default based on patterns in the borrower’s loan history, but without XAI it is unclear why the model made this prediction. With XAI, the model can explain that, for example, the borrower has a history of late payments, a common indicator of default risk. This explanation makes the prediction not only more understandable but also more trustworthy.2
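The late-payments example can be sketched as a per-feature contribution breakdown, a common form of explainability layer for linear models. Everything below is hypothetical: the feature names, weights, and borrower values are invented for illustration, not drawn from any real credit model.

```python
import numpy as np

# Hypothetical trained logistic-regression weights for a default-risk model.
features = ["late_payments", "debt_to_income", "years_employed"]
weights = np.array([0.9, 1.2, -0.4])   # illustrative values, not a real model
bias = -1.0

def default_probability(x):
    # Standard logistic link over the linear score.
    return 1.0 / (1.0 + np.exp(-(x @ weights + bias)))

# A borrower with several late payments and high debt-to-income.
borrower = np.array([3.0, 1.5, 2.0])

# Explainability layer: each feature's contribution to the raw score.
contributions = dict(zip(features, weights * borrower))

print(f"P(default) = {default_probability(borrower):.2f}")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>16}: {c:+.2f}")
```

The ranked contributions tell the loan officer not just that the model predicts default, but that the late-payment history is the dominant driver of that score.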

Regulatory compliance: XAI facilitates compliance with regulations by making AI decisions understandable and transparent. In health care, it ensures adherence to patient privacy laws; in finance, it aligns with anti-money-laundering regulations;3 in manufacturing, it helps meet quality standards. By aligning AI models with legal requirements, XAI fosters trust between institutions and their stakeholders, making it an essential tool for governance and accountability.

Causality: Uncovering cause-effect relationships in data is vital for deeper insights and better decision-making. In pharmaceuticals, it can help professionals understand drug interactions; in retail, it can reveal purchasing patterns. XAI’s ability to elucidate these relationships leads to more targeted strategies and interventions across industries, enhancing efficiency and effectiveness.

Transferability: XAI’s transferability is a valuable asset. Insights gained from one application can be applied to others, increasing return on investment. For example, an XAI model developed for fraud detection in one sector might be adapted for another, leveraging existing knowledge and reducing development costs. This cross-application potential makes XAI a versatile and valuable tool across domains.

Fairness: Ensuring that AI models are not producing unfair or unethical results is paramount. XAI provides insights into potential biases in data or algorithms, allowing for corrections and alignment with ethical standards. Whether in hiring practices or loan approvals, fairness in AI enhances reputation and reduces legal and regulatory risks across sectors.
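A minimal way to surface the kind of bias described above is a demographic-parity check: compare selection rates across groups and flag large gaps for review. The sketch below runs on simulated hiring decisions; the group labels, selection rates, and the 0.10 review threshold are all assumptions for illustration, not a regulatory standard.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical hiring-model decisions (True = recommended) with a group label.
group = rng.integers(0, 2, size=10_000)          # 0 / 1 protected attribute
decisions = rng.random(10_000) < np.where(group == 0, 0.60, 0.45)

# Demographic parity difference: gap in selection rate between groups.
rate0 = decisions[group == 0].mean()
rate1 = decisions[group == 1].mean()
gap = abs(rate0 - rate1)
print(f"selection rates: group0={rate0:.2f}, group1={rate1:.2f}, gap={gap:.2f}")

# An illustrative screening rule: flag the model for bias review if the
# gap in selection rates exceeds 10 percentage points.
flagged = gap > 0.10
print("flag for bias review:", flagged)
```

Checks like this do not by themselves prove unfairness, but they give compliance and HR teams a concrete, auditable signal to investigate.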

Accessibility and interactivity: XAI’s ability to improve user acceptance and satisfaction is vital in industries where customer trust is paramount. Involving end users in the AI modeling process and increasing their interaction with XAI models enhances understanding and confidence. Whether in personalized marketing or patient-centered health care, this accessibility and interactivity foster a more engaged and trusting relationship between AI systems and their users.

Challenges

Implementing XAI involves developing models, interacting with various stakeholders and outside vendors, and following governance procedures. These implementation challenges may vary from sector to sector with, for example, transportation focusing on route optimization and traffic management, health care on diagnostics and patient care, and manufacturing on process optimization and quality control. However, with the support of an experienced software development team, these challenges can be overcome, leading to a quality AI-powered solution that is transparent, explainable, and aligned with ethical and legal standards.

Strategic Implementation of XAI in Your Organization

Implementing XAI in your organization is not just a technical task; it’s a strategic initiative that can drive significant business results. Here’s a road map for successful implementation:

  1. Model development: Develop AI and machine learning models that provide not only accurate predictions but also clear explanations for their decisions. These models will enhance decision-making and reduce risks.
  2. Stakeholder engagement: Engage with various stakeholders, including data scientists, business users, and regulators, to understand their needs and expectations. This engagement will ensure that your XAI models are aligned with your business objectives and regulatory requirements.
  3. Governance procedures: Establish clear governance procedures for using XAI: define who is responsible for the models, how they are used, and how their performance is monitored. Clear governance will ensure accountability and control, reducing potential risks.
  4. Vendor involvement: Consider partnering with vendors who specialize in XAI. They can provide valuable expertise and resources, helping you implement XAI more effectively and efficiently.

Successfully Bridging the Explainability-Trust Gap

Finance sector

An example of the application of XAI in finance comes from a recent study of green supply chain finance, which builds sustainable environmental processes into conventional supply chains, from manufacturing to operations.4 The researchers developed a platform that combined blockchain technology with a Wide & Deep AI algorithm, a recommendation model that originated with Google Play, to calculate the green rating of enterprises. This model took into account factors such as environmental investment resources, environmental penalty data, green patent acquisition, and green development data.

The Wide & Deep algorithm, pioneered by Google, is a sophisticated tool used for making recommendations. Imagine it as a team where one member (the "wide" part) is excellent at spotting specific patterns based on past experiences, while the other member (the "deep" part) is great at making connections and predicting future trends. Together, they provide recommendations that are not only accurate but also tailored to individual preferences. This technology is particularly useful in finance, where it can help provide personalized financial advice or investment recommendations.
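The two-member "team" described above can be sketched as a toy forward pass: a linear "wide" scorer that memorizes feature co-occurrences, and a small "deep" network that generalizes, with the two logits summed before a final sigmoid. The layer sizes and random weights below are placeholders, not the trained green-rating model from the study.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy input: a small dense vector standing in for the cross-product and
# embedding features a production recommender would use.
x = rng.normal(size=8)

# Wide part: a plain linear model (memorization).
w_wide = rng.normal(size=8)

# Deep part: a two-layer feed-forward network (generalization).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def wide_and_deep(x):
    wide_logit = x @ w_wide
    deep_logit = (relu(x @ W1) @ W2).item()
    # Joint scoring: both logits are combined before the final sigmoid.
    return sigmoid(wide_logit + deep_logit)

print(f"score: {wide_and_deep(x):.3f}")
```

Because the wide part is linear, its contribution to any individual score can be decomposed feature by feature, which is one reason the architecture lends itself to the explainability claims made in the study.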

The Wide & Deep model was used to predict green default risk; that is, the likelihood of default on a “green” loan or investment, which is a type of financing specifically designed to support projects and initiatives that are environmentally friendly or sustainable. The model achieved an impressive accuracy rate of 92.88 percent. This application of XAI provided a clear understanding of how the model made its predictions, which is crucial for building trust and facilitating decision-making in finance.

This case study demonstrates how XAI can be used to address complex issues in finance. In this instance, it helped solve problems such as the lack of unified green standards, ineffective information disclosure and sharing mechanisms, and financing difficulties faced by small and medium-sized institutions in the green supply chain.

The combination of AI and blockchain technology in this context not only improved the efficiency and effectiveness of green supply chain finance but also provided a promising direction for the future development of supply chain finance. This real-world application of XAI in finance underscores the transformative potential of this technology in the sector.

Talent management sector

Background: Beamery, a leader in Talent Lifecycle Management, recognized the need to ensure that its AI-driven recruiting tools were free from bias and aligned with global compliance regulations.5 With the newly enacted Automated Employment Decision Tools (AEDT) law in New York City and other global regulatory frameworks, the company took proactive steps to conduct a third-party audit for bias in its AI capabilities.

The challenge: The challenge lay in creating AI models that not only facilitated unbiased hiring but were also transparent, ethical, and compliant with upcoming regulatory standards. The goal was to provide equal access to work and reduce unconscious bias in the hiring process.

The solution: Beamery partnered with Parity, a company focused on ending algorithmic inequality, to conduct a rigorous audit of its AI models. The audit was designed to assess bias and ensure compliance with global data privacy regulations. In parallel, Beamery released its AI explainability statement to openly share its AI processes and principles with customers, candidates, and employees.

The outcome: The AI audit successfully demonstrated Beamery’s commitment to creating transparent and ethical AI models. By articulating the mix and weight of skills, seniority, proficiency, and industry relevance in its recommendations, Beamery’s AI provided clear explanations for its predictions. This transparency allowed users to understand what skills affected the recommendations and ensured that the AI models were explainable and compliant with legal standards.

The impact: Beamery’s efforts set a benchmark in the HR industry for AI that is explainable, transparent, ethical, and compliant. The company’s AI explainability statement and third-party audit information serve as a guide for other organizations on how to approach the ethical, legal, technological, and human considerations of using AI in HR.

Conclusion

Looking ahead, despite some limitations, XAI technology is seeing exponential growth across various industries, including fintech, health care, and manufacturing. A report by Next Move Strategy Consulting6 estimates that the worldwide XAI market will be worth $21 billion in 2030, with a compound annual growth rate of 18.4 percent from 2022 to 2030.

AI contributes a massive £3.7 billion to the UK economy, so its responsible growth is a big focus area and important to keep the country as a global hub of innovation. - Joy Dasgupta, CEO at GyanAI7

As more organizations embrace AI, XAI is bridging the gap between explainability and trust. In the fintech sector, companies can now comprehend every form of data-driven decision-making, leading to more effective and transparent financial risk management. In health care, XAI is enabling more accurate diagnostics, personalized treatment plans, and transparent patient care. In transportation, XAI is contributing to route optimization, traffic management, and predictive maintenance, ensuring efficiency and reliability.

But it’s not just about the present; the future across various sectors is being shaped by XAI. As AI systems continue to evolve, the role of XAI is set to expand. We’re likely to see XAI being used in more innovative ways, from personalized financial advice to advanced fraud detection in finance, enhanced patient outcomes in health care, and intelligent automation in manufacturing.


 

[1] Liz Grennan et al., Why Businesses Need Explainable AI and How to Deliver It, QuantumBlack, AI by McKinsey, September 29, 2022.

[2] Anastasiia Molodoria, Explainable AI in Financial Risk Management, Fintech News, August 22, 2022.

[3] Patrick Weber et al., Applications of Explainable Artificial Intelligence in Finance—A Systematic Review of Finance, Information Systems, and Computer Science Literature, Management Review Quarterly (2023).

[4] Hengyi Zhou, Green Supply Chain Financial Governance and Technology Application Based on Block Chain and AI Technology, BCP Business & Management 38 (2023).

[5] Beamery, Beamery Completes AI Audit for Bias, November 1, 2022.

[6] Next Move Strategy Consulting, Explainable AI (XAI) Market – Global Opportunity Analysis and Industry Forecast, 2022-2030, February 2022.

[7] Joy Dasgupta, The Potential Impact of Explainable AI in Finance, Finance Derivative, June 19, 2023.
