Scott Zoldi, Chief Analytics Officer at FICO, explains why there should be no AI alone in decision-making processes

Many AI models are black boxes, developed without proper consideration for interpretability, ethics, or the safety of their outputs. To establish trust, organisations should leverage Responsible AI, which defines standards for robust AI, explainable AI, ethical AI, and auditable AI. Under Responsible AI, developers define the conditions under which some transactions receive less human oversight and others more. But can we take people out of the decision-making loop entirely? To answer that question, let's look at some developments in Responsible AI.

Trust in Developing AI Models

One best practice that organisations can adopt is maintaining a corporate AI model development standard. This dictates appropriate AI algorithms and processes, and defines roles that keep people in the loop. It will often include the use of interpretable AI, allowing humans to review and understand what the AI has learned and assess it for palatability, bias, ethical use and safety. Auditable AI then codifies the human-in-the-loop decisions and monitoring guidelines for operational use of the AI.

Responsible AI codifies all the essential human decisions that guide how AI will be built, used and progressed. This includes approving or declining the use of data, removing unethical relationships in data (i.e., illegal or unethical data proxies), and ensuring governance and regulation standards are met. Responsible AI leverages an immutable blockchain that dictates how to monitor the AI in operation and defines the decision authority of human operators, which can include conditions under which AI decisions are overruled and operations move to a 'humble AI' model. AI practitioners are keenly aware that even the highest-performing AI models generate a large number of false positives. So, every output needs to be treated with care, with strategies defined to validate, counter and support the AI.
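To make this concrete, here is a minimal sketch in Python of how such codified decision authority might look in practice. The score thresholds, action names and review band are hypothetical illustrations, not FICO's actual rules.

```python
# Illustrative only: a per-transaction decision strategy that treats every AI
# score with care, routing ambiguous cases to human review rather than
# trusting the model outright. All thresholds and labels are hypothetical.
from dataclasses import dataclass

@dataclass
class DecisionPolicy:
    auto_decline_above: float = 0.95   # very high scores are declined automatically
    human_review_above: float = 0.70   # mid-range scores go to an analyst
    # everything below human_review_above is approved under standard monitoring

    def route(self, fraud_score: float) -> str:
        """Map a model score to an action, keeping humans in the loop for ambiguous cases."""
        if fraud_score >= self.auto_decline_above:
            return "decline"            # still logged and auditable
        if fraud_score >= self.human_review_above:
            return "human_review"       # analyst validates, counters or supports the AI
        return "approve"

policy = DecisionPolicy()
print(policy.route(0.82))  # -> "human_review"
```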

A Responsible AI framework

There should be a well-defined process to overrule or reverse AI-driven decisions. If built in a Responsible AI framework, these decisions are codified into a crystal-clear set of operating AI blockchain frameworks well before the AI is in production. When there is a crisis, you need clear, preset guidance, not panicked decision-making. This blockchain defines when humans can overrule the AI through alternate models, supporting data, or investigative processes. The AI operating framework is defined in coordination with the model developers, who understand the strengths and weaknesses of the AI and when it may be operating in ways it wasn't designed to, ensuring there is no gap between development and operation. When auditable AI is employed, there are no nail-biting decisions in times of crisis; you can rely on a framework that pre-defines the steps for making these human-driven decisions.

Companies that utilise Responsible AI frameworks enforce adherence through auditable AI, which serves as the operating manual and monitoring system. Embracing Responsible AI standards can help business units attain huge value while appropriately defining the criteria by which the business balances risk and regulation. Domain experts and analysts are given a defined span of control over how to apply their domain knowledge, and the auditable AI monitors the system to alert on and circumvent the AI as appropriate.

Drawback prevention begins with transparency

To prevent a major pull-back in AI today, we must go beyond aspirational and boastful claims to honest discussions of the risks of this technology. We must define how involved humans need to be. Companies need to empower their data science leadership to define what constitutes high-risk AI, and how prepared they are, or are not, to meet responsible, trustworthy AI requirements. This comes back to governance and AI regulation. Companies must focus on developing a Responsible AI programme and boost practices that may have atrophied during the GenAI hype cycle.

They should start with a review of how AI regulation is developing, and whether they have the tools to appropriately address and pressure-test their AI applications. If they're not prepared, they need to understand the business impact of potentially having AI pulled from their repository of tools, and get prepared by defining corporate standards for AI development and operation.

Companies should then determine and classify the business problems best suited to traditional AI versus generative AI. Traditional AI can be constructed and constrained to meet regulation, using the right algorithms for the business objectives. Finally, companies will want to adopt a humble AI approach: keep hot backups for their AI deployments and tier down to safer technology when auditable AI indicates that AI decisioning is not trustworthy.
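As a rough sketch of the tier-down idea, the Python below falls back from a primary model to a simpler hot backup when a monitoring signal crosses a preset limit. The stand-in models, the drift metric and the limit are all assumptions made for illustration, not a specific product's behaviour.

```python
# Illustrative sketch of humble-AI tiering: when auditable-AI monitoring reports
# that the primary model's decisioning is no longer trustworthy (e.g. input drift),
# decisions tier down to a simpler, well-understood backup model.
# Both models and the drift limit below are hypothetical stand-ins.

class PrimaryModel:
    def score(self, txn: dict) -> float:
        return 0.9 * txn["risk_signal"]      # stand-in for a complex ML model

class BackupModel:
    def score(self, txn: dict) -> float:
        return 0.5 * txn["risk_signal"]      # simpler, more conservative fallback

DRIFT_LIMIT = 0.2  # preset in the operating framework, before production

def score_transaction(txn: dict, drift_score: float,
                      primary=PrimaryModel(), backup=BackupModel()) -> float:
    """Use the primary model only while monitoring says it is trustworthy."""
    return backup.score(txn) if drift_score > DRIFT_LIMIT else primary.score(txn)

print(score_transaction({"risk_signal": 0.6}, drift_score=0.35))  # tiers down -> 0.3
```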

The vital role of the Data Scientist

Too many organisations are driving AI strategy through business owners or software engineers who often have limited to no knowledge of the specifics of AI algorithms' mathematics and risks. Stringing together AI is easy. Building AI that is responsible, safe and properly operationalised with controls is a much harder exercise, requiring standards, maturity and commitment to Responsible AI. Data scientists can help businesses find the right paths to adopt the right types of AI for different business applications, regulatory compliance and optimal consumer outcomes. In a nutshell: AI + human is the strongest solution. There should be no AI alone in decision-making.

FICO’s use of Blockchain for AI model governance wins Tech of the Future: Blockchain and Tokenisation award

Global analytics software leader FICO has won the Tech of the Future – Blockchain and Tokenisation award. The Banking Tech Awards in London recognised FICO for its innovative work using blockchain technology for AI model governance. FICO's use of blockchain to advance responsible AI marks the first time blockchain has been used to track the end-to-end provenance of a machine learning model. This approach can help meet responsible AI and regulatory requirements.

More information: https://www.fico.com/blogs/how-use-blockchain-build-responsible-ai-award-winning-approach-0

FICO: Blockchain Innovation

FICO's AI Innovation and Development team has developed and patented an immutable blockchain ledger that tracks the end-to-end provenance of the development, operationalisation and monitoring of machine learning models. The technology enforces organisations' use of a corporate-wide responsible AI model development standard and demonstrates adherence to that standard, recording specific requirements, people, results, testing, approvals and revisions. In addition to the Banking Tech award, Global Finance recognised FICO's blockchain for AI technology with The Innovators award last year.
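The hash-chaining idea behind such a ledger can be illustrated with a short, generic sketch. The Python below is a toy append-only governance log, not FICO's patented implementation; the event names and fields are invented for illustration.

```python
# Toy sketch of an append-only, hash-chained model-governance ledger.
# Each entry records a development or approval event plus the hash of the
# previous entry, so tampering with history is detectable. This illustrates
# the general idea only; all event names and fields are hypothetical.
import hashlib, json, time

def add_entry(ledger: list, event: str, actor: str, details: dict) -> dict:
    prev_hash = ledger[-1]["hash"] if ledger else "GENESIS"
    body = {"event": event, "actor": actor, "details": details,
            "timestamp": time.time(), "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    ledger.append(body)
    return body

ledger: list = []
add_entry(ledger, "requirements_approved", "model_governance_board",
          {"model": "fraud_model_v2", "requirement": "no prohibited data proxies"})
add_entry(ledger, "bias_test_passed", "validation_team",
          {"model": "fraud_model_v2", "test": "disparate impact check"})
```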

Responsible AI

“The rapid growth of AI use has made Responsible AI an imperative,” commented Dr. Scott Zoldi, chief analytics officer at FICO. “FICO is focused on technologies that ensure AI is used in an ethical way, and governance is absolutely critical. We are proud to receive another award for our groundbreaking work in this area.”

FICO is well-known as a leader in AI for financial services. Its FICO® Falcon® Fraud Manager solution, launched in 1992, was the first fraud solution to use neural networks. Today it manages some four billion payment cards worldwide. FICO has built advanced analytics capabilities into FICO® Platform, an applied intelligence platform for building decision management solutions.

See the full list of Banking Tech Award winners for 2024.

Scott Zoldi, Chief Analytics Officer at FICO considers whether the current AI bubble is set to burst, the potential repercussions of such an event, and how businesses can prepare

Since artificial intelligence emerged more than fifty years ago, it has experienced cycles of peaks and troughs: periods of hype, quickly followed by unmet expectations that lead to bleak periods of AI winter as users and investment pull back. We are currently in the biggest period of hype yet. Does that mean we are setting ourselves up for the biggest, most catastrophic fall to date?

AI drawback

There is a significant chance of such a drawback occurring in the near future. So, the growing number of businesses relying on AI must take steps to prepare and mitigate the impact a drawback or complete collapse could have. Research from Lloyds recently found adoption has doubled in the last year, with 63% of firms now investing in AI, compared to 32% in 2023. In addition, the same study found 81% of financial institutions now view it as a business opportunity, up from 56% in 2023.

This hype has led organisations to explore AI use for the first time, often with little understanding of the algorithms' core limitations. According to Gartner, in 2023 less than 10% of organisations were capable of operationalising AI to enable meaningful execution. This could be leading to the 'unmet expectations' stage of the damaging hype/drawback cycle. The all-encompassing FOMO narrative of AI's incredible value does not align with organisations' ability to scale it, manage its huge risks, or derive real, sustained business value.

Regulatory pressures for AI

There has been a lack of trust in AI among consumers and businesses alike. It has resulted in new AI regulations specifying strong responsibility and transparency requirements for applications, and the vast majority of organisations are unable to meet these for traditional AI, let alone newer GenAI applications. Large language models (LLMs) were prematurely released to the public, and the resulting succession of failures fuelled substantial pressure on companies to pull back from using such solutions for anything other than internal applications. It has been reported that 60% of banking businesses are actively limiting AI usage, which shows that the drawback has already begun. Organisations that have gone all-in on GenAI, especially the early adopters, will be the ones to pull back the most, and the fastest.

In financial services, where AI use has matured over decades, analytic technologies exist today that can withstand regulatory scrutiny. Forward-looking companies are ensuring they are prepared: they are moving to interpretable AI and keeping backup traditional analytics on hand while they explore newer technologies with appropriate caution. This is in line with proper business accountability, versus the 'build fast, break it' mentality of the hype spinners.

Customer trust with AI

Customer trust has been violated by repeated failures in AI and by a lack of businesses taking customer safety seriously. A pull-back would assuage the inherent mistrust of companies' use of artificial intelligence in customer-facing applications that repeated harmful outcomes have created.

Businesses that want their AI usage to survive the impending winter need to establish corporate standards for building safe, transparent, trustworthy Responsible AI models that focus on the tenets of robust, interpretable, ethical and auditable AI. Concurrently, these practices will demonstrate that regulations are being adhered to and that their customers can trust AI. Organisations will move from the constant broadcast of a dizzying array of possible applications to a few well-structured, accountable and meaningful applications, built responsibly, that provide value to consumers. Regulation will be the catalyst.

Preparing for the worst

Too many organisations are driving AI strategy through business owners or software engineers who often have limited to no knowledge of the specifics of algorithm mathematics and the very significant risk in using the technology.

Stringing together AI is easy. Building AI that is responsible and safe is a much harder and more exhausting exercise, requiring corporate standards for model development and deployment. Businesses need to start now to define standards for adopting the right types of AI for appropriate business applications, meeting regulatory compliance and achieving optimal consumer outcomes.

Companies need to show true data science leadership by developing a Responsible AI programme, or by boosting practices that have atrophied during the GenAI hype cycle, which for many threw standards to the wind. They should start with a review of how regulation is developing, and whether they have the standards, data science staff and algorithm experience to appropriately address and pressure-test their applications and establish trust in AI usage. If they're not prepared, they need to understand the business impact of potentially having artificial intelligence pulled from their repository of tools.

Next, these companies must determine where to use traditional AI and where to use GenAI, and ensure the choice is driven not by marketing narrative but by safely meeting both regulation and real business objectives. Finally, companies will want to adopt a humble approach to back up their deployments, tiering down to safer tech when the model indicates its decisioning is not trustworthy.

Now is the time to go beyond aspirational and boastful claims, to have honest discussions around the risks of this technology, and to define what mature and immature AI look like. This will help prevent a major drawback.
