AI’s Explainability Problem Explained

By now, you’ve likely tried out some form of generative AI. Maybe it’s become a part of your daily routine. But can you explain how it works? 

That’s the question many firms are grappling with, banks among them, since their regulated financial models may increasingly deploy generative AI. AI has evolved from a mere concept into a potentially transformative force driving operational efficiency and innovation. But as banks embrace increasingly advanced AI technologies, a critical governance challenge emerges. 

A new RMA Journal article by Liming Brotcke, senior director at Ally, explores the evolving landscape of AI in banking, focusing on the challenges surrounding explainability in the governance of AI models. 

Brotcke divides the history of AI in banking into three phases: 

Proto AI Period (1970s to mid-2010s): During this phase, AI applications were primarily rule-based expert systems and simple regression models, which were relatively easy to explain due to their human-programmed rules and straightforward statistical properties. 

Surge of ML Algorithms (Mid-2010s to November 2022): This period saw the rise of more complex machine learning algorithms, such as deep learning models. While these models offered superior predictive capabilities, their non-linearity and complexity made them much harder to explain; the sketch after this list illustrates the contrast with the earlier, more transparent models. 

Dawn of Generative AI (Post-November 2022): The emergence of generative AI, whose considerable powers became apparent to all with the introduction of ChatGPT, presented unprecedented challenges in explainability. These models, trained on vast amounts of textual data, operate differently from traditional numerical models, making them inherently difficult to understand and explain. 
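To make the explainability contrast between the first two phases concrete, here is a minimal sketch that is not from Brotcke’s article: the data, feature names, and scikit-learn workflow below are invented for illustration. A logistic regression exposes one coefficient per feature that a validator can read directly, while a boosted tree ensemble of the ML era offers no such table and would need post-hoc tools (such as SHAP) to explain an individual decision.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Hypothetical credit-style features: income, debt-to-income, delinquencies.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 2 * X[:, 1] - X[:, 2] + rng.normal(size=500) > 0).astype(int)

# Proto-AI-era model: each coefficient is a direct, auditable statement
# about how a one-unit change in a feature moves the predicted log-odds.
logit = LogisticRegression().fit(X, y)
for name, coef in zip(["income", "dti", "delinquencies"], logit.coef_[0]):
    print(f"{name}: {coef:+.2f} log-odds per unit")

# ML-era model: typically more accurate, but there is no coefficient table
# to read; explaining a single decision requires post-hoc tooling.
gbm = GradientBoostingClassifier().fit(X, y)
n_nodes = sum(tree[0].tree_.node_count for tree in gbm.estimators_)
print(f"The boosted model spreads its logic across {n_nodes} tree nodes.")
```

The point of the sketch is not that simple models are better, but that their explanations come for free, while the more predictive model’s reasoning is distributed across thousands of tree nodes and must be reconstructed after the fact.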

Brotcke highlights the key governance implications of AI explainability across these phases, emphasizing the importance of balancing the need for explainability with other risks such as data privacy, cybersecurity, fairness, and compliance. She suggests that while explaining AI models remains crucial, it must be integrated into a comprehensive risk assessment framework that accommodates the increasing sophistication of AI technologies. 

The article underscores the necessity for banks to develop robust AI governance frameworks that address the challenges of explainability while fostering innovation and maintaining compliance with regulatory standards. 

To put it in Spider-Man terms: “With great sophistication come great explainability requirements.”