Generative AI: Balancing Potential and Pitfalls

Artificial intelligence is evolving fast, and one particular aspect has captured the imagination of industry leaders and innovators alike: generative AI. This groundbreaking technology, made popular by platforms such as ChatGPT, promises to revolutionize knowledge management, information retrieval, and decision-making processes across a spectrum of industries—with banking near the top of the list. 

But as organizations venture into this uncharted territory, they must grapple with a host of challenges while embracing the game-changing potential that generative AI offers. 

“Generative AI is here to stay—it is transformative,” Vikas Agarwal, partner and financial services risk and regulatory leader at PwC, said during a recent RMA New York panel discussion. 

Agarwal believes the technology holds the key to unlocking unprecedented efficiencies and insights. “This technology is very easy for people to use. That’s very different than when we think about [other forms of] AI,” Agarwal said, adding that generative AI can offer substantial time savings, especially in back-office functions. 

But Agarwal recommends companies tread carefully and deliberately, suggesting a “crawl, walk, run” adoption of generative AI that allows for thoughtful navigation of the technology’s complexities. Small pilots can help organizations start using generative AI without large commitments, he said. 

Amid the excitement, legal and privacy concerns loom large. The challenge, Agarwal said, is “being able to understand what data you own, and potentially what data you don’t.” 

Jie Chen, head of decision science and artificial intelligence model validation at Wells Fargo, agreed, saying “generative AI has huge potential to bring benefits to a company,” but risks must be carefully managed. The complexity of generative AI applications requires robust controls to mitigate risks associated with data ownership, contractual obligations, and unintended model uses, she said. 

For Chen, the key lies in developing comprehensive controls that address the diverse applications of generative AI. Controls should address users, specific applications, and output monitoring, she explained. “This is very important from the model use perspective,” she said, adding that model risk management is critical given the variability in generative AI applications. 

Despite the inherent risks, Chen remains optimistic about the potential of generative AI—when approached with caution. Experimentation while maintaining risk principles is important, she said. 

This sentiment was echoed by Ryan Carrier, founder of ForHumanity, a nonprofit dedicated to examining the downside risks associated with the advance of AI and automation.  

Carrier emphasized the importance of transparency and accountability in AI adoption, advocating for the establishment of an audit framework to standardize oversight of AI systems, particularly in high-risk areas such as ethics, bias, privacy, trust, and cybersecurity. There is an ethical imperative, he stressed, to ensure fairness, transparency, and trustworthiness in AI applications. Early risk disclosure could balance fast adoption pressures with residual risk awareness, he suggested. 

Carrier proposes annual independent audits for high-risk AI systems and certification schemes to harmonize compliance across jurisdictions globally. “I get asked by policymakers all the time, ‘What’s the one thing ForHumanity wants in policy?’ We say annual audits for all ‘triple A’ systems—that is, AI, algorithmic, and autonomous systems—that impact humans in a meaningful way,” Carrier said, emphasizing the need for rigorous oversight. 

Beth Dugan, deputy comptroller for large bank supervision at the Office of the Comptroller of the Currency (OCC), said her regulatory body is “technology agnostic,” but expects organizations to “apply appropriate risk management” and “use this technology in a safe and sound manner.” 

Dugan emphasized the importance of a financial institution’s governance in ensuring compliance with existing regulations while navigating the complexities of generative AI. She noted that rules already on the books, while not designed specifically with AI in mind, address privacy, discrimination, fair access, and safe operations—and AI intersects with all of them. 

Generative AI represents a paradigm shift in the way organizations manage data, make decisions, and drive innovation. As organizations navigate this new frontier, they must strike a delicate balance between harnessing its transformative potential and mitigating its inherent risks. 

PwC’s Agarwal acknowledged a lot of trepidation about the “unknown unknowns” of generative AI, especially when it comes to its usage in the banking industry. He emphasized the need for a balanced approach to generative AI adoption, acknowledging the obstacles while charting a course for responsible innovation. 

“Continue to experiment, continue to look around corners, continue to try small things that can start to get your feet wet,” he said. “It’s a really different muscle to start to use generative AI—and people are going to need to get used to it.”