Alexandra Mousavizadeh, Co-Founder & CEO at Evident, on the rise of Agentic AI in financial services

Agentic AI is no longer a distant prospect. Agents are already here, embedded in the day-to-day operations of businesses. As well as answering questions and crunching numbers, they’re making decisions, taking action, and learning on the fly. They can handle customer queries, tap into APIs, and even rewrite their own instructions.

It’s a big shift from traditional AI, which stayed firmly in the realm of prediction and recommendation. Agentic systems, by contrast, are dynamic: they act rather than merely advise, which fundamentally changes the risk landscape.

For banks looking to capitalise on agentic AI, the implications are especially consequential. This is a highly sensitive sector where trust, compliance and control are existential issues. That is why Responsible AI (RAI) has quickly moved from being a nice-to-have to a critical foundation. It can balance the need for controls with the promise of innovation.

In our latest Responsible AI in Banking report at Evident, we found a clear upweighting of RAI priorities. More banks are appointing RAI leads. More are publishing principles. And more are thinking hard about how to scale those capabilities across the business.

But Agentic AI is a different challenge. It pushes past the limits of old governance models and forces a rethink of how we manage risk, maintain oversight, and build trust. 

Here’s why a rethink is needed…

Static Governance Doesn’t Work for Dynamic Systems

Most current AI oversight models are built for systems that behave predictably. They assume models will be trained, validated, deployed, and then monitored using relatively fixed parameters. This is no longer the case.

Agentic AI systems learn and act independently. They are decision-making agents as well as tools. That makes governance more complicated.

Banks need oversight models that can keep pace in real time. That includes enterprise-wide assurance platforms that can help to spot unexpected behaviour, adjust on the fly, and give leaders a clear view of what’s happening across the organisation.

Building the right tooling in this way is essential. What’s harder is laying out an agentic AI strategy and ensuring it’s being applied across teams, with clear direction on where agents will be used and the governance guiding decisions.

With these failsafes in place, banks can keep innovating without taking on an unacceptable level of risk.

We’re Seeing a Regulatory Shift – from Theory to Evidence

AI regulation is gradually moving from high-level principles to concrete requirements that must be backed by evidence. The EU AI Act, NIST frameworks and ISO standards all suggest that financial institutions will need to demonstrate not just model performance, but responsible use.

This creates new compliance needs. Banks will need to show how decisions are made, how risks are mitigated, and how safeguards perform under pressure. As one senior executive told us during our research, “AI risk is no longer model risk. It’s also architectural.”

All of this means that keeping reliable documentation and maintaining end-to-end system visibility is becoming a baseline expectation. Banks will need explainability mechanisms that can keep up with increasingly complex AI systems. Pressure for more transparency on agentic AI use, and on human-in-the-loop oversight, is likely to follow too.

Responsible AI is a Strategic Capability

Responsible AI has often been framed as a brake on progress – important for safety and reputation, but ultimately slowing things down. In practice, we’ve seen the opposite. The banks leading the charge on effective AI adoption know that RAI is a strategic enabler. That means that in addition to developing more use cases, scaling faster across business lines and hiring more talent, they are also ahead of the curve when it comes to RAI.

They also earn more trust, whether from customers, regulators or from their own leadership. That trust will grow more important as agentic systems begin to underpin services ranging from credit assessment to wealth management.

In this environment, responsibility is not a constraint. It is a foundation that allows banks to push further with AI, including finding new applications for agentic tools, while keeping risk in check.

____________________

The banking industry has made huge strides on the road towards AI adoption, and the arrival of Agentic AI – while creating new compliance and safety challenges – is nevertheless an opportunity that the leading AI-first banks will be keen to embrace.

Banks have already made significant investments in AI governance. Agentic AI raises the bar, requiring them to demonstrate a deeper institutional understanding of autonomy, intent, and accountability – in essence, what the AI agent is doing and why.

The decisions being made today about AI governance will shape the next generation of financial services. Forward-thinking institutions are already preparing for that future. JPMorgan, Citigroup, Wells Fargo, UBS and Capital One have quietly assembled specialist teams focused on agentic AI. Others are hoping their existing frameworks will stretch far enough.

Opting for the latter approach is a big risk to take. Agentic AI is arriving faster than many expect. The challenges are real and so is the opportunity, but only for those who have already laid the groundwork via an RAI structure that lets them reap the benefits while maintaining trust, transparency and control.
