Many AI models are black boxes, developed without proper consideration for the interpretability, ethics or safety of their outputs. To establish trust, organisations should adopt Responsible AI, which defines standards for robust, explainable, ethical and auditable AI. Under Responsible AI, developers define the conditions under which some transactions receive less human oversight and others more. But can we take people out of the decision-making loop entirely? To answer that question, let’s look at some developments in Responsible AI.
Trust in Developing AI Models
One best practice that organisations can adopt is maintaining a corporate AI model development standard. This dictates which AI algorithms and processes are appropriate and establishes roles that keep people in the loop. It will often include the use of interpretable AI, which lets humans review and understand what the AI has learned and assess it for palatability, bias, ethical use and safety. Auditable AI then codifies the human-in-the-loop decisions and the monitoring guidelines for operational use of the AI.
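As an illustration only, the sketch below shows one simple review step a human-in-the-loop might run during development: flagging candidate features whose correlation with a protected attribute suggests they may act as proxies. The feature names, threshold and data are hypothetical, and real interpretability reviews go much further (for example, examining the learned relationships themselves).

```python
import numpy as np

# Hypothetical development data: rows are applicants, columns are candidate features.
rng = np.random.default_rng(0)
features = {
    "income": rng.normal(50_000, 15_000, 1_000),
    "postcode_density": rng.normal(0.5, 0.2, 1_000),
    "tenure_months": rng.integers(1, 120, 1_000).astype(float),
}
protected_attribute = rng.integers(0, 2, 1_000).astype(float)  # e.g. a protected class flag

PROXY_THRESHOLD = 0.3  # illustrative cut-off set by the review board

def flag_possible_proxies(features, protected, threshold):
    """Return features whose absolute correlation with the protected attribute
    exceeds the threshold, for human review before the model is trained."""
    flagged = {}
    for name, values in features.items():
        corr = np.corrcoef(values, protected)[0, 1]
        if abs(corr) > threshold:
            flagged[name] = round(float(corr), 3)
    return flagged

print(flag_possible_proxies(features, protected_attribute, PROXY_THRESHOLD))
```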
Responsible AI codifies all the essential human decisions that guide how AI will be built, used and progressed. This includes approving or declining the use of data, removing unethical relationships in the data (i.e., illegal or unethical data proxies), and ensuring governance and regulation standards are met. Responsible AI records these decisions on an immutable blockchain that dictates how to monitor the AI in operation and defines the decision authority of human operators, including the conditions under which AI decisions are overruled and operations move to a ‘humble AI’ model. AI practitioners are keenly aware that even the highest-performing AI models generate a large number of false positives, so every output needs to be treated with care, with strategies defined to validate, counter and support the AI.
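As a minimal sketch of the immutable-record idea (a hash-chained, append-only log rather than a real distributed ledger, with entirely hypothetical field names), each human decision could be linked to the previous one so that later tampering is detectable:

```python
import hashlib
import json
import time

class DecisionLedger:
    """Append-only, hash-chained log of governance decisions (illustrative only)."""

    def __init__(self):
        self.entries = []

    def record(self, decision: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = {"decision": decision, "timestamp": time.time(), "prev_hash": prev_hash}
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(payload)
        return payload

    def verify(self) -> bool:
        """Recompute every hash; any edit to an earlier entry breaks the chain."""
        prev_hash = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

# Hypothetical decisions made during model development.
ledger = DecisionLedger()
ledger.record({"action": "approve_data_source", "source": "bureau_feed_v2", "approver": "model_governance"})
ledger.record({"action": "remove_feature", "feature": "postcode_density", "reason": "potential proxy"})
print(ledger.verify())  # True unless an earlier entry has been altered
```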
A Responsible AI framework
There should be a well-defined process to overrule or reverse AI-driven decisions. In a Responsible AI framework, these decisions are codified into a crystal-clear operating AI blockchain framework well before the AI is in production: when there is a crisis you need clear, preset guidance, not panicked decision-making. This blockchain defines when humans can overrule the AI through alternate models, supporting data or investigative processes. The operating framework is defined in coordination with the model developers, who understand the strengths and weaknesses of the AI and when it may be operating in ways it wasn’t designed for, ensuring there is no gap between development and operation. When auditable AI is employed, there are no nail-biting decisions in times of crisis; you can rely on a framework that pre-defines the steps for making these human-driven decisions.
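Purely as a sketch of what such preset guidance might look like in code (the rule names, thresholds and fallback actions here are assumptions for illustration, not taken from any particular framework), the overrule conditions can be written down, reviewed and versioned long before a crisis:

```python
from dataclasses import dataclass

@dataclass
class DecisionContext:
    model_score: float      # output of the primary AI model
    drift_alert: bool       # raised by the auditable-AI monitoring layer
    data_quality_ok: bool   # upstream data checks passed
    high_value_case: bool   # e.g. transaction above a preset amount

# Preset, reviewable overrule rules: evaluated in order, first match wins.
OVERRULE_RULES = [
    ("monitoring drift alert",      lambda c: c.drift_alert,                                    "fallback_model"),
    ("upstream data failed checks", lambda c: not c.data_quality_ok,                            "manual_review"),
    ("high value, low confidence",  lambda c: c.high_value_case and 0.4 < c.model_score < 0.6,  "manual_review"),
]

def route_decision(context: DecisionContext) -> str:
    """Return 'ai_decision' unless a preset overrule rule fires."""
    for name, condition, action in OVERRULE_RULES:
        if condition(context):
            print(f"Overrule rule fired: {name} -> {action}")
            return action
    return "ai_decision"

print(route_decision(DecisionContext(model_score=0.55, drift_alert=False,
                                     data_quality_ok=True, high_value_case=True)))
```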
Companies that utilise Responsible AI frameworks enforce adherence through auditable AI, which serves as the operating manual and monitoring system. Embracing Responsible AI standards can help business units attain huge value, while appropriately defining the criteria by which the business balances risk and regulation. Domain experts and analysts are given a defined span of control over how they apply their domain knowledge, and the auditable AI monitors the system to alert on, and circumvent, the AI as appropriate.
Pull-back prevention begins with transparency
To prevent a major pull-back in AI today, we must go beyond aspirational and boastful claims to honest discussions of the risks of this technology. We must define how involved humans need to be. Companies need to empower their data science leadership to define what constitutes high-risk AI, and to assess how prepared they are, or are not, to meet responsible and trustworthy AI standards. This comes back to governance and AI regulation. Companies must focus on developing a Responsible AI programme and boost practices that may have atrophied during the GenAI hype cycle.
They should start with a review of how AI regulation is developing, and of whether they have the tools to appropriately address and pressure-test their AI applications. If they’re not prepared, they need to understand the business impact of potentially having AI pulled from their repository of tools, and get prepared by defining corporate standards for AI development and operation.
Companies should then determine and classify which business problems are best suited to traditional AI versus generative AI. Traditional AI can be constructed and constrained to meet regulation, using the right algorithms for the business objectives. Finally, companies will want to adopt a humble AI approach: maintaining hot backups for their AI deployments and tiering down to safer technology when auditable AI indicates that AI decisioning is not trustworthy.
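As one hedged illustration of the humble AI hot-backup idea (the drift measure, threshold and stand-in models below are assumptions for the sketch, not a prescribed method), a monitoring check such as a population stability index can decide when scoring should tier down to the simpler backup model:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Simple PSI between the development score distribution and live scores."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    e_pct = np.clip(np.histogram(expected, bins=edges)[0] / len(expected), 1e-6, None)
    a_pct = np.clip(np.histogram(actual, bins=edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

PSI_THRESHOLD = 0.25  # common rule of thumb; the real cut-off is a governance decision

def score_with_humble_fallback(x, primary_model, backup_model, dev_scores, live_scores):
    """Use the primary model unless monitored drift says it can no longer be trusted."""
    if population_stability_index(dev_scores, live_scores) > PSI_THRESHOLD:
        return backup_model(x)   # tier down to the simpler, safer model
    return primary_model(x)

# Hypothetical stand-ins for the two deployed models and their score histories.
rng = np.random.default_rng(1)
dev_scores = rng.beta(2, 5, 10_000)    # scores seen at development time
live_scores = rng.beta(5, 2, 10_000)   # shifted live distribution -> drift detected
primary = lambda x: 0.9                # placeholder for the complex primary model
backup = lambda x: 0.5                 # placeholder for the interpretable backup

print(score_with_humble_fallback({"amount": 120.0}, primary, backup, dev_scores, live_scores))
```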
The vital role of the Data Scientist
Too many organisations are driving AI strategy through business owners or software engineers who often have limited or no knowledge of the specifics of AI algorithms’ mathematics and risks. Stringing together AI is easy; building AI that is responsible, safe and properly operationalised with controls is a much harder exercise, requiring standards, maturity and commitment to Responsible AI. Data scientists can help businesses find the right paths to adopt the right types of AI for different business applications, regulatory compliance and optimal consumer outcomes. In a nutshell: AI plus human is the strongest solution. There should be no AI alone in decision-making.