Katja Hakoneva, Product Manager at Tuxera, on delivering tomorrow’s data storage security today

Smart meters are no longer just data endpoints. They’re intelligent, connected nodes embedded into the national infrastructure. As energy networks undergo rapid digital transformation, the focus has largely been on secure communications and real-time data transmission. But beneath the surface lies the local data storage, which often becomes a critical blind spot.

Smart meters store large volumes of sensitive data from energy usage profiles to firmware logs and grid event histories on embedded memory. If this information is accessed, altered, or deleted, it can trigger billing inaccuracies, regulatory breaches, and customer mistrust. With meters expected to operate in the field for up to 20 years, data-at-rest security is a critical requirement.

Storage Vulnerabilities: The Silent Cyber Threat

These embedded systems face multifaceted risks. Attackers may gain access to stored data by physically tampering with a meter or exploiting software vulnerabilities that bypass weak authentication. Malicious actors could manipulate logs to alter billing records, mislead consumption analytics, or mask larger cyberattacks on grid infrastructure.

In many cases, such intrusions go undetected until tangible damage, such as lost revenue or reputational fallout. With increasing dependence on smart infrastructure, utilities can no longer afford to treat embedded storage as a passive component.

Counting the Real Costs of Cybersecurity

Securing smart meters comes with technical requirements, as well as, operational and resourcing demands. For many UK manufacturers and utilities, managing cybersecurity internally means building and retaining specialist teams, often requiring three to five full-time professionals to handle vulnerability monitoring, patch management, and threat response throughout the year.

Aligning with regulatory frameworks frequently demands hardware upgrades to handle stronger encryption and secure configurations, impacting Bill of Materials (BOM) costs and development timelines. Many existing software stacks require optimisation to support modern security protocols within resource-constrained devices. These efforts are necessary, with a single undetected cyberattack costing companies an average of $8,851 (≈£6,900) per minute, and the consequences extending beyond financial loss to potential regulatory fines and service disruptions.

The CRA and the new Era of Cyber Regulation

The Cyber Resilience Act (CRA), set to come into force across the EU by 2027, will reshape how connected devices are designed, developed, and supported. For UK-based vendors serving the European market, or collaborating with EU counterparts, compliance with CRA is becoming a strategic imperative.

Key CRA requirements include:

  • Security by design: Devices must be secure from the outset, not retrofitted post-deployment.
  • No known vulnerabilities at market launch: Products must undergo security validation prior to release.
  • Default secure configurations: Devices should avoid insecure settings out of the box.
  • Lifecycle management: Vendors must support patching and vulnerability resolution throughout the device’s operational lifespan.

For smart meters, which often run in the field for two decades or more, the CRA introduces accountability that extends well beyond product launch. Compliance with the CRA will become part of the CE marking process, meaning global manufacturers must align if they wish to sell into the EU energy market.

Engineering Security: Confidentiality, Integrity, and Authenticity

Designing resilient smart meters starts with three pillars:

  • Confidentiality protects sensitive user data from unauthorised access. This includes encrypting both data and encryption keys, restricting user access levels, and securing communication channels.
  • Integrity ensures stored data remains unaltered and trustworthy. Power failures, for instance, can corrupt memory. Using flash-optimised file systems and secure boot processes can prevent such vulnerabilities.
  • Authenticity confirms that firmware and data updates come from trusted sources. Techniques like digital signatures and update validation prevent attackers from injecting malicious code into meters.

Together, these pillars enable smart meters to meet regulatory expectations while protecting both users and grid operations.

Future-proofing Data Storage

Cybersecurity for smart meters is not just a feature; it requires organisational readiness. Frameworks like the CRA, NIST, and IEC 62443 emphasise secure processes, documentation, and people alongside secure products.

For companies looking to prepare, it is smart to start with common pillars such as maintaining up-to-date Software Bills of Materials (SBOMs), conducting regular supply chain and risk assessments, keeping detailed test reports, and establishing clear incident response plans. Internally, training staff on cybersecurity best practices, setting clear data retention policies, and defining access controls and responsibilities are critical steps to ensure cybersecurity is embedded within the culture of the organisation. This approach ensures security is not a one-off compliance task but a sustainable practice that protects smart infrastructure long-term.

Smart meters deployed today could still be operating in the 2040s. This timeline intersects with the anticipated emergence of quantum computing, which may break today’s encryption standards. Though post-quantum cryptography is still evolving, vendors must prepare now to ensure systems remain secure in a post-quantum world. Smart meter software should be designed with cryptographic agility to allow it to adapt and upgrade algorithms as threats evolve.

Lessons from Long-Term Deployment

Smart meters are designed for longevity, but memory wear remains a primary failure point. Meters that lack flash-aware storage systems face early data loss, increasing the cost of maintenance, replacements, and warranty claims.

Utilities and OEMs that embed file systems capable of wear levelling, garbage collection, and secure boot processes have extended meter lifespans by more than 50%, even in challenging conditions. One example showed meters surviving over 15,000 power interruptions without any data loss.

Integrating secure storage delivers operational and commercial benefits. It ensures compliance with CRA and other evolving global frameworks, reduces maintenance and warranty costs, minimises carbon impact through fewer replacements, enhances brand credibility and trust with procurement teams, strengthens the business case for longer-term contracts and partnerships. As the smart energy market matures, these benefits are becoming differentiators, especially as digital infrastructure grows in complexity.

Delivering Tomorrow’s Data Storage Security Today

The next generation of smart infrastructure will be fast and connected, as well as, secure, resilient, and regulation-ready. For vendors and utilities alike, embedding data protection deep into the meter architecture is a business-critical move.

By preparing for the CRA today, smart meter manufacturers will position themselves as forward-thinking, trustworthy partners in tomorrow’s energy ecosystem, delivering technology that’s not only built to last but built to protect today and tomorrow.

Learn more at tuxera.com

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy

AI can transform businesses, but is it also opening the door to cyber risks? Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.

But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.

The AI Boom

AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.

Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.

However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks

So, it’s no surprise that ANS research found that data privacy is the top concern for businesses when adopting AI. As these threats evolve, businesses must treat AI not just as an enabler, but also as a potential vector for attack.

The Governance Gap

While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges. 

Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.

The Need for Responsible AI Adoption

To build resilience while embracing AI, businesses need a dual approach: 

1. Prioritise AI-specific training across the workforce

Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.

But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.

A well-trained workforce is the first and most crucial line of defence.

2. Adopt open-source AI responsibly

Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.

The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.

To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.

Securing the Future of AI

AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.

Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.

By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.  

AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.

Learn more at ans.co.uk

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Joe Logan, CIO at iManage, on the need to avoid the hype, manage cybersecurity, focus on ROI and balance change management to get the best results with AI

Across the enterprise, AI promises transformational power – however, it’s not as simple as just plugging it into the organisation and instantly reaping the benefits. What are some of the top things CIOs need to focus on to avoid any pitfalls, unlock its value, and best position themselves for success with AI? 

1) Separate the Hype from Reality

Here’s what hype looks like: using AI to “radically transform the way you do business” or to “accelerate comprehensive digital transformation” or – heaven forbid – to “completely change our industry.” These are big statements – and absolutely dripping with hype.

Getting real with AI requires identifying specific use cases within the organisation where a particular type of AI can be deployed to achieve a specific goal. For example, maybe you want to reduce customer churn by 20% and have identified an opportunity to use chatbots powered by large language models to provide more effective customer service. That’s what reality looks like.

In separating the hype from reality, organisations gain the added benefit of clearing up any misconceptions – at any level of the organisation – about what AI can and can’t do, thus performing an important “level set” around expectations.

2) Understand the Implications for Cybersecurity

On one side, any AI tool you’re using has access to data, and that means that access needs to be controlled like any other system within your tech stack. The data needs to be secured and governed, and issues around privacy, sovereignty, and any other regulatory requirements need to be thoroughly addressed.

As part of this effort, organisations also need to be aware of the security measures required to protect the AI model itself from bad actors trying to manipulate that model. For example: prompt injection – inputs that prompt the model to perform unintended actions – can affect the model and its responses if not carefully guarded against.

Securing your AI system is one side of the coin; the other side is understanding how to apply AI to cybersecurity. There are a growing number of use cases here where AI can help identify risks or vulnerabilities by analysing large amounts of data, helping organisations to prioritise the areas they need to focus on for risk mitigation. 

In summary? While any usage of AI will require you to “play defence” on the security front, it will also enable you to “play offence” more effectively. In that sense, AI has multiple implications for cybersecurity.

3) Focus on the Right Kind of ROI

When it comes to ROI for any AI investments, don’t narrowly focus on absolute numbers when it comes to metrics like time savings or cost savings. While well-suited to industrial workplaces that are churning out widgets every day, absolute numbers can be an awkward fit when applied to a knowledge work setting.

The advice here for any knowledge-centric enterprise is: Don’t get hung up on the idea of actual dollars and cents or a specific number – instead, look for a relative improvement from a baseline. So, rather than saying “We’re going to reduce our customer acquisition costs by $100,000 this year”, it’d be more appropriate to focus on reducing existing customer acquisition costs by 10%. Likewise, don’t focus on each junior associate in the organisation completing five more due diligence projects per calendar year; look to complete due diligence projects in 30% less time.

4) Give Change Management its due

Change management has always mattered when it comes to introducing new technology into the enterprise. AI is no different: Successful adoption requires a focus on people, process, and technology – with a particular emphasis on those first two items.

A major challenge is educating the workforce with an eye towards improving their AI literacy – essentially, enabling them to understand what’s possible and how they can apply AI to their daily workflows. 

Know that a centralised model of control that dictates “this is how you can experiment with AI” is probably going to be ineffective. It will be too stifling for innovative individuals in the organisation. Far better to provide centres of excellence or educational resources to those people who are most inclined to take the initiative and move forward with AI experiments in their team or department. 

One caveat here: It’s essential to have guardrails in place as teams and individuals experiment with AI, to prevent misuse of the technology. That’s the tightrope that CIOs need to walk when introducing AI into the organisation. Striking the right balance between “total control” and “freedom to explore, but with appropriate oversight and guardrails”. 

The Future of AI Depends on what CIOs do next

The promise of AI is massive, but only if CIOs adopting the technology focus on the right areas. And that means filtering out the hype, keeping security implications top of mind, redefining ROI, and guiding change with a steady hand. By paying attention to these areas, CIOs can safely navigate a path forward with AI. And ensure that it isn’t just a technology with promise and potential, but one that delivers actual enterprise-wide impact.

Learn more at iManage

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Ben Francis, Insurance Lead at Risk Ledger, on navigating cyber threats by reinforcing security from the inside out

Cyber insurance has evolved from a straightforward risk transfer mechanism into an integral component of enterprise risk strategy. As a result, the conversation has shifted beyond simply securing coverage to embracing three foundational elements: transparency in risk exposure, accountability for security measures, and active collaboration throughout the digital ecosystem.

Rather than asking ‘are you covered?’, the more pertinent question has become ‘can you demonstrate measurable risk reduction?’. Insurers and insureds alike are recognising that what matters now is how well an organisation understands and manages its digital exposure, especially across its extended supply chain. Recent data reveals that 46% of organisations experienced at least two separate supply chain-related cyber incidents in the past year, a clear sign that exposure often lies beyond direct control. 

From Risk Transfer to Risk Visibility 

In recent years, the cyber insurance market has matured significantly. Once viewed as a reactive safety net to cushion the financial impact of attacks, it is now becoming a proactive tool for managing and mitigating risk. This shift is partly driven by insurers, who increasingly expect and work with organisations to demonstrate strong security practices and a nuanced understanding of their threat landscape, including risks deep within their digital supply chains; an area where many businesses still fall short.

At the same time, the industry faces a growing challenge from systemic cyber risk within their portfolios, as many businesses rely on the same cloud providers, payment systems and digital platforms, increasing the chance of a single point of failure. Insurers must gain visibility into how policyholders are connected, not only to suppliers but to each other. Tools and frameworks that map and monitor these interconnections will be essential to avoid underestimating the wider impact of seemingly isolated cyber events.

Mapping Beyond Third Parties

It is no secret that cyber attackers often target the weakest link in a supply chain. These are not always direct suppliers, but fourth, fifth or even sixth-tier vendors that have indirect but critical access to systems and data. Unfortunately, many organisations lack visibility beyond their first tier, creating blind spots that attackers can easily exploit. From an insurance perspective, this presents a clear challenge. If an organisation cannot account for who it is connected to, it cannot adequately quantify its risk and neither can its insurer. Mapping these extended connections is more than just a technical exercise; it means actively practiced risk governance and responsibility. Insurers increasingly want to know how their policyholders are identifying and managing indirect dependencies, particularly in sectors like financial services and retail where disruption can ripple across entire markets.

Collaboration as a Risk Strategy 

One of the more underappreciated aspects of cyber resilience is the role of peer collaboration. Unlike physical incidents, cyber threats rarely exist in isolation. A single compromised vendor can impact multiple organisations simultaneously, a fact that has been highlighted by high-profile supply chain attacks such as SolarWinds and MOVEit

As a result, businesses need to think beyond their own perimeters and adopt a more collective mindset. This includes building relationships with industry peers, sharing threat intelligence and participating in sector-wide initiatives aimed at improving visibility and preparedness. 

In highly regulated sectors, such as insurance, this collaboration is increasingly being encouraged by oversight bodies. Frameworks like the Digital Operational Resilience Act (DORA) in the EU and initiatives from the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA) in the UK are pushing for more transparency around third-party risk. In this context, openness is no longer optional; it will be a regulatory expectation. 

For insurance providers, greater collaboration between policyholders also means better data on emerging threats and more accurate portfolio management. For businesses, it offers a chance to anticipate vulnerabilities that may not yet have hit their own networks but are affecting others in their industry. 

Proactive Transparency Builds Trust 

Organisations that take a proactive, transparent approach to cyber risk management are more likely to secure cover and potentially favourable terms, not just in terms of premiums, but also in access to additional services such as forensic support, incident response sources and legal counsel. 

Demonstrating a mature cyber posture is not about claiming perfection. No organisation is immune to breaches. What insurers are looking for is evidence of a structured approach: the existence of incident response plans, robust governance, effective supply chain risk management, and above all, an honest view of risk. 

A Shift in Mindset 

Ultimately, our understanding of cyber insurance must keep evolving. It should not be treated as a simple checkbox exercise, but as a collaborative relationship between insurers and the organisations they support – one built on shared insight, clear communication, and a drive for continuous improvement.

The organisations best equipped to navigate today’s threats will be those that prioritise transparency. Not only does it lead to stronger protection, but it also builds a culture of accountability that reinforces security from the inside out.

Learn more at riskledger.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy
  • Fintech & Insurtech
  • InsurTech

Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy

AI can transform businesses, but is it also opening the door to cybersecurity risks?

Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.

But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.

The AI Boom

AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.

Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.

However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks

So, it’s no surprise that our research found that data privacy is the top concern for businesses when adopting AI. As these threats evolve, businesses must treat AI not just as an enabler, but also as a potential vector for attack.

The Governance Gap

While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. 

As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges. 

Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.

The Need for Responsible AI Adoption with Cybersecurity

To build resilience while embracing AI, businesses need a dual approach: 

1. Prioritise AI-specific training across the workforce

Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.

But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.

A well-trained workforce is the first and most crucial line of defence.

2. Adopt open-source AI responsibly

Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.

The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.

To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.

Securing the Future of AI

AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.

Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.

By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.  

AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.

  • Cybersecurity
  • Data & AI

Anna Collard, SVP Content Strategy & Evangelist KnowBe4 – Africa, on leveraging AI-driven cybersecurity systems to fight cybercrime

Artificial Intelligence is no longer just a tool. It is a game-changer in our lives, our work as well as in both cybersecurity and cybercrime. While businesses leverage AI to enhance defences, cybercriminals are weaponising AI to make these attacks more scalable and convincing​.  

In 2025, research shows AI agents, or autonomous AI-driven systems capable of performing complex tasks with minimal human input, are revolutionising both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants. They function as self-learning digital operatives that plan, execute, and adapt in real time. These advancements don’t just enhance cybercriminal tactics, they may fundamentally change the cybersecurity battlefield. 

How Cybercriminals Are Weaponising AI: The New Threat Landscape 

AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) highlights how AI has democratised cyber threats. Thus enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware​. Similarly, the Orange Cyberdefense Security Navigator 2025 warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. And the 2025 State of Malware Report by Malwarebytes notes, while GenAI has enhanced cybercrime efficiency, it hasn’t yet introduced entirely new attack methods. Attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents. Autonomous AI systems are capable of planning, acting, and executing complex tasks—posing major implications for the future of cybercrime. 

Here is a list of common (ab)use cases of AI by cybercriminals:  

AI-Generated Phishing & Social Engineering 

Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages. Without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target’s online activity. AI-powered Business Email Compromise (BEC) scams are increasing. Attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility​. AI also automates the creation of fake phishing websites, watering hole attacks and chatbot scams. These are sold as AI-powered ‘crimeware as a service’ offerings, further lowering the barrier to entry for cybercrime​. 

Deepfake-Enhanced Fraud & Impersonation 

Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident was UK based engineering firm Arup that lost $25 million after one of their Hong Kong based employees was tricked by deepfake executives in a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions.  

Cognitive Attacks  

Online manipulation—as defined by Susser et al. (2018)—is “at its core, hidden influence, the covert subversion of another person’s decision-making power”. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation. By everaging digital platforms, state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content. They are subtly shaping public perception while evading detection. These tactics are deployed to influence elections, spread disinformation and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don’t just compromise systems—they manipulate minds, subtly steering behaviours and beliefs over time without the target’s awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter.  

The Security Risks of LLM Adoption 

Beyond misuse by threat actors, business adoption of AI-chatbots and LLMs introduces significant security risks. Especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries. This enables new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs.  

Moreover, many modern LLMs now function as Retrieval-Augmented Generation (RAG) systems. Dynamically pulling in real-time data from external sources to enhance their responses. While this improves accuracy and relevance, it also introduces additional risks, such as data poisoning, misinformation propagation, and increased exposure to external attack surfaces. A compromised or manipulated source can directly influence AI-generated outputs. Potentially leading to incorrect, biased, or even harmful recommendations in business-critical applications. 

Additionally, bias within LLMs poses another challenge. These models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgements, potentially exacerbating vulnerabilities rather than mitigating them. As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment, especially in RAG-powered models, are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making. 

When AI Goes Rogue: The Dangers of Autonomous Agents 

With AI systems now capable of self-replication, as demonstrated in a recent study, the risk of uncontrolled AI propagation or rogue AI – AI systems that act against the interests of their creators, users, or humanity at large – is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously. Particularly when autonomous AI agents are granted access to data, APIs, and external integrations. The broader an AI’s reach through integrations and automation, the greater the potential threat of it going rogue. This means robust oversight, security measures, and ethical AI governance essential in mitigating these risks. 

The Future of AI Agents for Automation in Cybercrime 

A more disruptive shift in cybercrime can and will come from AI Agents. These transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing Agentic AI for business use. However, in the hands of cybercriminals, its implications are alarming. These AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also allow attackers to scrape massive amounts of personal data from social media platforms. They can automatically compose and send fake executive requests to employees. And, for example, analyse divorce records across multiple countries to identify individuals for AI-driven romance scams, orchestrated by an AI agent. These AI-driven fraud tactics don’t just scale attacks, they make them more personalised and harder to detect. Unlike current GenAI threats, Agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk​. 

How Defenders Can Use AI & AI Agents 

Organisations cannot afford to remain passive in the face of AI-driven threats. Security professionals need to remain abreast of the latest developments. Here are some of the  opportunities in using AI to defend against AI:  

AI-Powered Threat Detection and Response

Security teams can deploy AI and AI-agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns. These might otherwise go unnoticed. AI can create dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection​. For example, as outlined by researchers of Orange Cyber Defense, AI-assisted threat detection is crucial as attackers increasingly use “Living off the Land” (LOL) techniques that mimic normal user behaviour. Making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses. 

However, despite the potential of AI-agents, human analysts still remain critical. Their intuition and adaptability are essential for recognising nuanced attack patterns. They can leverage real incident and organisational insights to prioritise resources effectively. 

Automated Phishing and Fraud Prevention

AI-powered email security solutions can analyse linguistic patterns, and metadata to identify AI-generated phishing attempts before they reach employees, by analysing writing patterns and behavioural anomalies. AI can also flag unusual sender behaviour and improve detection of BEC attacks​. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams. AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. However, real-time deepfake detection remains a challenge, as technology continues to evolve. 

User Education & AI-Powered Security Awareness Training

AI-powered platforms deliver personalised security awareness training. They can simulate AI-generated attacks to educate users on evolving threats, helping train employees to recognise deceptive AI-generated content​. And strengthen their individual susceptibility factors and vulnerabilities.  

Adversarial AI Countermeasures

Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques. For example, deploying deception technologies – such as AI-generated honeypots – to mislead and track attackers. As well as continuously training defensive AI models to recognise and counteract evolving attack patterns. 

Using AI to Fight AI-Driven Misinformation and Scams

AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts​. Counter-attacks, like those shown by research project Countercloud or O2 Telecoms AI agent “Daisy” show how AI based bots and deepfake real-time voice chatbots can be used to counter disinformation campaigns as well as scammers by engaging them in endless conversations to waste their time and reducing their ability to target real victims​. 

In a future where both attackers and defenders use AI, defenders need to be aware of how adversarial AI operates. And how AI can be used to defend against their attacks. In this fast-paced environment, organisations need to guard against their greatest enemy: their own complacency. While at the same time considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the next shiny AI security tool, decision makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity.  

To stay ahead in this AI-powered digital arms race, organisations should:  

  • Monitor both the threat and AI landscape to stay abreast of latest developments on both sides. 
  • Train employees frequently on latest AI-driven threats, including deepfakes and AI-generated phishing. 
  • Deploy AI for proactive cyber defense, including threat intelligence and incident response. 
  • Continuously test your own AI models against adversarial attacks to ensure resilience. 
  • Cybersecurity
  • Data & AI

Mike Puglia, General Manager, Kaseya Cybersecurity Labs, on how the need for regulatory support to better support industries when tackling cybercrime

Cyberattacks keep coming hard and fast, but things are beginning to change. In the past few months, law enforcement has announced arrests of three people in the Marks & Spencer breach, seven members of the hacking group NoName057, five affiliates of Scattered Spider and also disrupted the infrastructure of gangs such as Flax Typhoon, Star Blizzard and others.  

Earlier this year, the UK retail industry felt the pressure. Brands, including Marks & Spencer, Harrods and Co-op – and by proxy, their customers – became victims of the hacking group, Scatter Spider. Other businesses are now on high alert as this wave of security breaches is expected to continue. For as long as bad actors can reap rewards and the risk of consequences remains small, they will keep attacking. Ransomware-as-a-service lowers the bar to entry further, allowing even those without specialised skills to launch successful ransomware campaigns.

Along with the threats, regulatory pressure on businesses is growing. Organisations must be able to prove they have strong security defences in place or risk paying hefty fines for non-compliance. However, this means we are essentially punishing the victim, not the perpetrator. By putting the onus on the victims to protect themselves, we are missing an important truth… Because there is no bullet-proof defence, even the best security strategies will not end cybercrime for good.

It’s Time to Treat Cybercrime as Crime

What the industry needs instead is a change in how we approach cybercrime. Rather than blaming the victims, we must start treating it as the serious criminal activity it is. It is high time we addressed cybercrime’s fundamental drivers. Opportunity, motive and the widespread perception that criminals can still get away without punishment. As is the case with physical crime, it takes a two-pronged approach to curb cybercrime: Prevention – and an effective response.

Those who attempt physical theft, for example, face trials and potentially prison. While we have seen a growing number of cybercriminals arrested in recent months, the truth we are only scratching the surface. In the digital world, everything is accessible from everywhere, all the time. This creates an inherent vulnerability that makes perfect protection impossible. In many cases, it also makes it much harder to track down the offenders and hold them accountable.

The Problem with Cryptocurrency and Jurisdiction

The cybercrime landscape has also undergone a significant transformation. While in the past, hackers were mostly focused on stealing financial data, there has been a dramatic shift towards ransomware. It’s far easier to encrypt an organisation’s data and demand a ransom than finding buyers for stolen credit card info.

This transformation has further accelerated because cryptocurrency allows cyber attackers to be paid in anonymous currency. Anywhere in the world, at any time. Previously, criminals had to physically collect payments or transfer money to traceable bank accounts. Now, they can operate with anonymity whilst easily converting their loot into real euros, pounds and dollars. This means ‘following the money’ is no longer a useful way for law enforcement to track nefarious activity. If we made it impossible for criminals to anonymously convert cryptocurrency into real currency, we could change the risk-reward calculation.

The second key issue with fighting cybercrime is the question of jurisdiction. Many cybercriminals are based in countries where western governments have no recourse. When hackers operate from non-cooperative jurisdictions, it may be impossible to extradite them. And they may find their activities tolerated by their local government or even supported.  As we have seen with the recent arrests – the threat actors were outside of Russia and China – where many attacks come from.

These two factors – anonymous payment systems and safe havens – create an environment where cybercrime can and will continue to flourish. While organisations can do their best to make it harder for criminals to attack, it is foolish to believe individual businesses will be able to solve the cybercrime problem on their own.

Stop Blaming the Victim

So, what needs to happen? First, the victim-blaming approach must change. We simply cannot regulate every business to become an impenetrable fortress. When a person is physically robbed, police respond to investigate the crime and help recover stolen property. With cybercrime, victims face reputational damage, fines and higher insurance premiums. Incidents often raise questions about where the business’ cybersecurity strategy failed, rather than a recognition that a crime has been committed against them.

A first step forward towards solving the cybercrime problem would require governmental and societal recognition that cyberattacks represent crimes against businesses and individuals, not merely failures of those organisations to adequately defend themselves. While many countries have ramped up policing efforts against cybercrime, these are generally underfunded considering the scale of the problem.

Secondly, we need to urgently address the anonymous payment systems that keep fuelling cybercrime. This is not an easy problem to solve, but governments must find better ways to trace and regulate how cryptocurrency is converted into real money.

It is also time we introduced real and severe consequences for cybercriminals. The number one deterrent to any type of crime is fear of being caught and punished. The internet has essentially eliminated this, enabling hackers to operate from nations that turn a blind eye. To address this will require more political pressure on ‘safe harbour’ countries to charge, punish and extradite cybercriminals. Where nations refuse to cooperate, potential sanctions such as restrictions on internet connectivity might force governments to reconsider their tolerance for criminal activities.

Finally, we need to acknowledge that regulations such as GDPR, PCI and NIS have their limits. Despite increasingly complex compliance requirements, cybercrime has continued to grow. While regulations can provide critical and much-needed guidance to businesses, they must be combined with properly funded law enforcement – empowered with tools to bring criminals to justice across jurisdictions.

To truly disrupt the criminal ecosystem, systemic changes are needed. We are starting to see governments give law enforcement the tools they need, but it is very early in that process. Because ultimately, we will not solve the cybercrime problem with defence measures alone.

About Kaseya

At Kaseya, our mission is to empower you to simplify and transform IT and cybersecurity management with innovative platform solutions.

Our Mission:

Since 2000, Kaseya has delivered the technology that IT departments and managed service providers need to reach new heights of success. More than 500,000 IT professionals globally use Kaseya products to manage and secure 300 million devices.

Kaseya’s commitment to our customers goes beyond listening to your needs and puts words into action to deliver innovative solutions that empower your business. But we don’t stop there. Kaseya’s first-of-its-kind Partner First Pledge program shares the risk our partners experience because we know a true partner is with you through the ups and downs of life.

  • Cybersecurity
  • Digital Strategy

Andy Swift, Cyber Security Assurance Technical Director at Six Degrees on

According to AV-TEST, the independent IT security institute, every day sees at least 450,000 new malware variants added to its database. In June this year, for example, cybercriminals are thought to have used malware to steal over 16 billion login credentials across various major platforms in what is thought to have been the largest breach of its kind in history. For security teams, this represents a relentless challenge that demands constant attention and consumes significant resources.

Malware-Free Attacks

As if that wasn’t enough, malware-free attacks are increasingly favoured by cybercriminals as a way to circumvent organisational security. Typically using legitimate programs and tools, these stealth attacks are particularly complex to detect. And they are invisible to most automated security protection options that are available to buy.

With no obvious malware signatures to detect, automated defences are often powerless to respond. And without robust security foundations, even advanced detection tools offer limited protection once an attacker gains a foothold. When that happens, the consequences can be significant.

At the heart of the matter are the limitations of many traditional security tools, which are simply not designed to stop what they cannot see. Malware-free attacks do not rely on external payloads or binaries with known malicious signatures. This renders many automated detection systems, including standard antivirus solutions, effectively useless. As a result, the burden falls elsewhere.

For most organisations, that means having the right expertise in place to recognise unusual behaviour, supported by technologies that can identify behavioural anomalies quickly. Endpoint detection and response (EDR) platforms offer some of these capabilities. But even the most advanced solutions rely on proper configuration and human oversight to be effective. In an ideal world, every business would have round-the-clock monitoring in place, but in reality, very few do.

Challenging Assumptions Around Risk

So, how can organisations fill the gap? When assessing how to protect against malware-free attacks, many organisations begin with the assumption that they will need to buy new tools or licenses. This can form part of a rounded solution. However, leading with this mindset often overlooks a more fundamental and cost-effective question: What can be improved with the tools already in place?

Reviewing existing capabilities should be the first step. For example, most environments already have some level of EDR, behavioural monitoring or identity protection deployed. Yet these are often underutilised or misconfigured. This can result from a lack of understanding around tool capabilities (and limitations), paying for the wrong level of license coverage, and failing to ensure configurations support behavioural analysis rather than just malware scanning. In many cases, even minor adjustments can significantly increase effectiveness without any additional spend.

Cost vs Risk

Organisations should also reconsider how they approach the question of investment. The cost vs risk conversation needs to shift from what they should buy to what they should fix. Even the most expensive detection tools can be rendered ineffective if attackers can exploit basic oversights such as poor configuration, excessive access rights or the absence of multi-factor authentication. In contrast, identifying and addressing these gaps in existing systems is not only more cost-effective but also more impactful in stopping attacks before they gain momentum.

This kind of review process is also an opportunity to identify gaps and prioritise actions that reduce risk without escalating costs. For example, many organisations find that network segmentation, strict privilege controls and enforcing least-access policies can help prevent lateral movement and minimise credential misuse – two of the most common techniques used in malware-free attacks. Putting these capabilities in place are security fundamentals that often determine whether an attack is stopped early or is able to spread.

In this context, a best practice approach matters more than ever. Not as a one-off initiative, but as a continuous effort to close the windows of opportunity that attackers rely on. This includes reducing privilege levels, adopting MFA by default, limiting binary access and educating users on social engineering techniques. All of which are good examples of cost-effective steps that can limit the opportunity for malware-free attacks to take hold. These are not headline-grabbing technologies, but they remain the strongest defence against attacks that thrive on poor hygiene and overlooked gaps.

So, rather than investing in yet another layer of detection, organisations should focus on strengthening what they already have. This approach not only helps avoid unnecessary expense but also delivers a stronger, more sustainable defence posture in an environment where threat actors continue to be extremely effective.

  • Cybersecurity
  • Cybersecurity in FinTech
  • Infrastructure & Cloud

TechEX Europe – Powering the Future of
Enterprise Technology at Amsterdam’s RAI Arena September 24-25

TechEx Europe unites five leading enterprise technology events — AI & Big DataCyber SecurityData CentresDigital Transformation and IoT — into one powerful experience designed for organisations driving change. Five events, two days, one ticket – register for your pass here.

From scaling infrastructure to unlocking new efficiencies, this is where decision-makers and their teams come to connect, explore real-world use cases, and discover the technologies that will shape their next phase of growth.

AI & Big Data Expo

The AI & Big Data Expo is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP

Speakers include:

Cybersecurity & Cloud Expo

The Cyber Security & Cloud Expo, is the premier event showcasing the latest in Application and Cloud Security, Hybrid Cloud, Data Protection, Identity and Access Management, Network and Infrastructure Defence, Risk and Compliance, Threat Intelligence,  DevSecOps Integration, and more. Join industry leaders to explore strategies, tools, and innovations shaping the future of secure, connected enterprises.

Speakers include:

IOT Tech Expo

IoT Tech Expo is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.

Speakers include:

Digital Transformation

The Digital Transformation Expo is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.

Speakers include:

Data Center Expo

The Data Centre Expo and conference is the premier event tackling key challenges in data centre innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centre. 

Speakers include:

Book your place at TechEx Europe 2025 now!

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Event Newsroom
  • Events
  • Infrastructure & Cloud

Join thousands of data centre industry leaders and innovators at London’s Business Design Centre for three co-located events – DCD>Connect, DCD>Compute and DCD>Investment September 16-17

Data Center Dynamics (DCD) is connecting the data center ecosystem. Secure your pass for three-colocated events covering the entire digital infrastructure ecosystem across two days at London’s Business Design Centre – DCD>Connect, DCD>Compute and DCD>Investment.

DCD Connect

Connecting the data center ecosystem to design, build & operate sustainable data centers for the AI age

Bringing together more than 4,000 senior leaders working on Europe’s largest data center projects. DCD>Connect | London will drive industry collaboration, help you forge new partnerships and identify innovative solutions to your core challenges.

“First class event that presented a wide variety of perspectives and technologies in an engaging and informative forum” – Data Center Project Architect, AWS

DCD Compute

Uniting enterprise and hyperscale leaders driving scalable AI Infrastructure from silicon to software…

New workloads are fundamentally reshaping IT infrastructure, as accelerated hardware innovation is enabling more new workloads. How can you keep up in this rapid cycle of new AI models, new hardware, new software, and the race to be first to market?

The Compute event series, run in partnership with SDxCentral, empowers leaders to make sharp decisions on IT infrastructure and AI deployment. Join 400+ peers from enterprise, hyperscale, and top IT infrastructure and architecture innovators to shape the future of compute—on-prem or in the cloud.

  • 400+ Decision-Makers for IT Infrastructure, Architecture, AI, HPC and Quantum Computing
  • 60+ industry-leading speakers at the forefront of innovation across cloud and on-prem compute
  • Hosted in partnership with SDxCentral

DCD Investment

Connecting senior dealmakers driving the economic evolution of digital infrastructure…

The world depends on digital infrastructure, and there’s never been more pressure on the industry to scale at speed. The Data Center Dynamics Investment series helps the leading dealmakers behind this growth to make informed decisions faster, through top-tier content, tailored networking, and best-practice sharing.

  • Dynamic Programme: A brand new format including leadership roundtable discussions allows for 2025 attendees craft their own agenda at the Forum.
  • 50 Speakers: The C-suite operators, leading investors, and advisors in data centers are converging to strategize on the industry’s evolving landscape.
  • Exclusive Networking Opportunities: The Investment Forum is separated from the main DCD Connect programme and show floor, offering private networking and dealmaking opportunities to take place in an optimal setting.

Secure your pass for three-colocated events September 16-17 – DCD>Connect, DCD>Compute and DCD>Investment.

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Event Newsroom
  • Events
  • Fintech & Insurtech

This month’s cover star, Dr. Noxolo Kubheka-Dlamini – Chief Digital and Information Officer at Telkom Consumer & Small Business, speaks to the process of leading an ongoing digital transformation

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Telkom: More Than a Telco

Our cover star talks us through the process of leading an ongoing digital transformation that is pragmatic, strategic and embedded in business goals at South Africa’s largest telecommunications platform provider. “By the time we entered the mobile space in 2010, the market was already saturated,” explains Dr. Noxolo Kubheka-Dlamini, Chief Digital & Information Officer at Telkom Consumer & Small Business. “Our ambitions were constrained by limited capital, inherited legacy systems, regulatory shackles, and the sheer inertia of being a former state-run monopoly.” However, Telkom’s “willpower and commitment never faded” resulting in “notable and consistent performance against all odds”. Today, Telkom is playing a pivotal role in ensuring access to meaningful connectivity, driven by the company’s vision to become South Africa’s digital backbone: bridging the digital divide and enabling inclusive participation in its digital economy.

Kynegos: Shining a Spotlight on Transformation, Innovation and Sustainability

Kynegos, a spin-off from Capital Energy, is a business built on strategy. It exists to develop technological solutions for strategic industries. Capital Energy needed an independent platform that could scale digital solutions beyond the energy sector, and foster collaboration with startups and technology centres. Kynegos has filled this gap, and is being leveraged to create co-innovation ecosystems. This allows Capital Energy to develop digital tools that address current and future industrial challenges, keeping the company’s finger on the pulse. We spoke to CEO Victor Gimeno Granda, about its backstory, its values, and the road ahead. “Not only do we develop digital assets for the renewable sector, but for green data centres as well. My perspective is that sustainability is going to be more relevant than ever in the next 18 months.”

York County: The Human Side of AI

York County’s IT team has spent the past decade redefining what local government tech can and should be. From pioneering community cybersecurity workshops to forging statewide collaboration through ValGITE, the county has systematically brought innovation into its operations. This broad portfolio of initiatives has strengthened infrastructure and elevated service delivery. And also earned York County the number one spot in the Digital Counties Survey for jurisdictions under 150,000 population.

“Since I became deputy director eight years ago, this has been one of my goals,” reflects Tim Wyatt, director of information technology at York County. “And over the last eight years, we’ve been in the top 10, but we finally landed that number one place. I think it’s a great reflection for my team, the county, and all the dedication to try to do what’s right by the citizens. It’s just something I’m incredibly proud of. I think it accurately reflects the hard work of my team.”

Wade Trim: Bridging the Cybersecurity Skills Gap

Wade Trim provides consulting engineering, planning, surveying, landscape architecture and environmental science services to meet the infrastructure needs of government and private corporations. With a cybersecurity skills gap leaving vacancies unfilled, Wade Trim’s Senior Manager of Information Security, Eric Miller, spoke with Interface about how stepping away from education-focused rigidity could unlock swathes of latent talent. “Our industry puts emphasis on certifications. However, being passed over for jobs because you don’t have a particular certification or degree in favour of someone fresh out of college has shown me that the best candidates are those that can tell me their story. What brings them to this point in their career? Tell me what qualifies you for this role. That’s how I interview.”

York Catholic District School Board: York Catholic District School Board: Community and Communication at the Heart of IT Strategy

The challenges facing an IT leader in 2025 call for a new kind of approach. One that favours partnerships over transactions, collaboration over competition, and centres people rather than technology for technology’s sake. These perspectives ring especially true in an organisation like the York Catholic District School Board (YCDSB). It emphasises values like “service, community, collaboration, and fait rather than academic excellence alone,” explains Scott Morrow, YCDSB’s Chief Information Officer (CIO). “It’s not actually about the technology; it’s about enablement.”

We spoke with Morrow to learn more about his approach to IT leadership. From building and maintaining a team amid the IT talent crisis, to driving digital transformation initiatives across the organisation. And broader strategic objectives across a changing technology landscape increasingly defined by cybersecurity and the rise of AI.   

Click here to read the latest edition!

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • People & Culture

Security, AI, and Digital Resilience: A look inside Visions CIO + CISO 

The cybersecurity landscape has never been so fast-moving or complex. The stakes have never been higher. A worsening geopolitical reality and increasingly sophisticated cyber threats mean that the role of security leaders is more pivotal than ever as devastating cyber breaches become a matter of “when,” not “if.” It’s a time for information and skill sharing, networking, and collective action in an industry facing a more challenging future than ever. 

Visions CIO + CISO Summit brings together executive security and technology leaders and experts from the largest organisations in multiple industries to network and learn from the people driving innovation in the IT and cyber spaces. This year’s event took place between April 28-30, and featured 8 tentpole sessions, over 30 presentations from key industry figures, and more than 30 speakers across the various panels, fire-side chats and peer-to-peer round tables that comprise the rest of the event. Speakers and solutions providers at this year’s event included Illumio, Threatlocker, LastPass, Claranet, Okta, Covertswarm, Intruder, and Ripjar RPC Services. Also in attendance were IT and security professionals from large scale enterprises, including Currys, Astley Digital, 24/7 Home Rescue, H&M Group, IBM, MUFG (Mitsubishi Financial Group), Federated Hermes, Deliveroo, Experian, Saint-Gobain, and Nordea GSK.

At the event, and afterwards, we were lucky enough to catch up with some of the leaders speaking at Visions and get their perspectives on key trends affecting the IT space — from the ever-relevant issue of security to AI and digital resilience.  

Natwest

Ramit Sharma — Vice President & Lead Engineer

1. What’s the general outlook for the IT and fintech sectors right now? Is this a scary time? An exciting one?

“It’s an exciting time, particularly within the UK banking sector, where we’re seeing a real shift toward customer-centric innovation. Financial institutions are working hard to deliver seamless, secure, and personalised experiences—often by leveraging cloud, AI, and advanced analytics.” 

“There’s a strong emphasis on modernising legacy systems, improving digital onboarding, and enhancing fraud prevention without compromising user experience. This push for technology-driven customer satisfaction is creating space for smarter, faster, and more agile solutions—making it a great time to be contributing to the evolution of digital trust and transformation in financial services.”

2. What are some of the challenges organisations are facing that you can help them with? What problems are they asking you to solve?

“Many organisations are grappling with how to secure cloud environments at scale without slowing down innovation. Key challenges include visibility across hybrid or multi-cloud setups, managing identity and access with precision, and operationalising zero trust.” 

“There’s also a strong demand for integrating security earlier in the development lifecycle—what we often refer to as shifting security left. People are asking how to reduce complexity, automate controls, and move away from reactive postures to proactive, real-time risk mitigation.”

Federated Hermes 

Enis​​​​ Sahin — Head of Information Security

1. What kind of outlook does an organisation like Federated Hermes have right now towards the industry? Is this a scary time? An exciting one?

2025 is shaping up to be a very dynamic year for the markets at large. There are rapid developments, from geopolitics to booming technology innovation with AI, that are impacting how the markets move as well changing the environment we operate in as a business. As a global asset manager, Federated Hermes is staying abreast of these changes to ensure we can be where the markets are, whilst maintaining efficiency in our operations for strong profitability. 

2. What problems are people asking you to solve right now?

The ever changing world of cyber has historically been difficult for businesses to decipher. In the last few years, it has become even more difficult to keep up, with the advent of AI and how it is changing the technology landscape. Whilst businesses are trying to understand this new technology and embed it into their products and operations, cyber-criminal enterprises are leaping ahead in innovation and starting to leverage it in novel ways. The challenge this brings is two-fold.”

“On one hand, businesses are trying to find the right use cases for AI to get their return on investment at every level. This applies to core business functions, as well as Technology departments and the Security organisations. As cyber strategists we are now being forced to be innovators ourselves and not just passive consumers of the latest products and market trends. This brings a new perspective to how we design controls, build our roadmaps and prioritize our budget items. Boards and executive teams are looking for Security teams who are embracing AI and maximizing the effectiveness and efficiency of their programmes.” 

“The second challenge is on the defensive side. The average person, as well as the average corporate employee, is lagging behind in understanding what the latest AI models are capable of, let alone understanding how they can be used to conduct cybercrime. Working in security, we find ourselves in a situation where we both need to find ways to keep up with cyber criminals to defend our enterprises, as well as keep educating our staff and management teams so that we can bring them on this journey.” 

Astley Digital 

Martin Astley — Chief Information Security Officer

1. Would you say this is an exciting time for Astley Digital?

“Astley Digital is at a pivotal point in its journey, experiencing remarkable growth and expanding our service offerings. We’re actively exploring partnerships with innovative cybersecurity companies like ThreatLocker, enabling us to provide even more robust endpoint security solutions for our clients.” 

“Additionally, the evolving landscape of cybersecurity is presenting us with unique opportunities to leverage AI for predictive threat analysis, streamline incident response, and enhance our managed security services. This moment is particularly exciting as we are positioning ourselves not just as a service provider but as a thought leader in cybersecurity strategy, risk management, and digital transformation for businesses across various sectors.”

2.  What are some of the key challenges organisations are facing that you can help them with? What problems are they asking you to solve?

“Organisations today are grappling with a rapidly changing threat landscape, and one of the most significant challenges is maintaining a strong cybersecurity posture amidst evolving threats. At Astley Digital, we address critical issues such as:

“Endpoint Security: Many organisations struggle with managing endpoint security across remote and hybrid workforces. We provide comprehensive solutions that restrict unauthorised software and applications, preventing potential breaches and maintaining data integrity.”

“Third-Party Risk Management: Ensuring third-party vendors maintain security standards is another pressing concern. We work closely with our clients to assess, monitor, and mitigate third-party risks to prevent supply chain attacks.”

“Incident Response and Recovery: Companies are seeking rapid and effective incident response strategies. We offer real-time monitoring, response planning, and post-incident analysis to minimise business disruptions.”

“Regulatory Compliance: Compliance is a growing concern, especially in highly regulated industries. Our team assists with implementing frameworks that align with industry standards, ensuring data protection and reducing legal risks.”

S&W 

Mark Hendry — Partner

1. Why is this an exciting time for your company?

“We are really fortunate to have reach and presence with clients across different sectors. We have professional service specialisms that respond to many of the trickiest and most important strategy and skill challenges that clients face; technology, cyber security, AI, data, and digital regulations to name a few. Not only is it a great time to be helping clients with those issues and helping them make their businesses more capable, effective, successful and resilient, from a selfish perspective it’s an incredible privilege for our people to be trusted by clients to help with these super interesting initiatives.”

2. What are some of the key challenges organisations are facing that you can help them with? What problems are they asking you to solve?

“We help clients with everything from assessing and improving their resilience positions, to complying with the intersections of a range of existing regulations, frameworks and standards, through to future gazing and thinking about what’s possible through challenging the status-quo.”

“Lately that has included a lot of work on things like AI readiness, development of use cases, working on AI explainability and the human element of potential resistance to the kinds of change that AI and other emerging tech are delivering.” 

“Of course an evergreen core of our work is digital resilience, including cyber security, so we do a lot on ensuring that new technology adoptions including those with AI sprinkled throughout them, are digitally and operationally resilient by design.” 

Deliveroo

Oliver Jenkins — IT Audit  Senior Manager

1. Why is this an exciting time for Deliveroo?

“We’re at a turning point where AI is no longer a side conversation—it’s embedded in the way Deliveroo operates. That shift brings real momentum and urgency to the work we do in securing AI adoption and protecting digital environments.”

2. What are some of the key challenges organisations are facing that you can help them with? What problems are they asking you to solve?

“The main concern is how to adopt AI without opening the door to unmanaged risk. Businesses know they can’t sit this one out, but they’re looking for help building the right guardrails to manage risk; especially with evolving regulation and the rise of AI-powered threats like deepfake vishing and advanced phishing.”

Bilfinger

Nnamdi Ozonma — Information Security Officer UK & Nordic Regions

1. What are you here at Visions to discuss with your peers in the cybersecurity and IT space? 

“The first panel I was part of was the Threat Detection & AI Panel Discussion. We were looking at establishing trust, mitigating risks, and safeguarding security in the age of AI. I focused on how to balance the benefits of AI with the challenges of building trust, managing risks, and ensuring security.”

“Then, I had a deep dive into looking at an age where individuals don’t verify, they just take information, no longer researching to see if the information is correct.”

“I always remain sceptical, whilst understanding the value of efficiency. AI is now embedded in so many tools, but now the main concern is the people within the organisation. Monitoring and education are essential. People will often try to find a shortcut and the easy way to go about things. Until training, governance and understanding is at a level where there can be trust, I suggest turning it off.”

Ripjar

Nick Cooper — Vice President, Information Security

1. These are challenging times for cybersecurity teams. How has 2025 been going for you and Ripjar? 

“Ripjar utilises new and emerging technology to solve customer problems in cyber threat investigations and anti-financial crime compliance. We’ve been able to help organisations achieve record results – identifying connections, anomalies and potential risks, while reducing false positives and increasing true positives – leading to best-in-class results in many industries. We’re excited to be sharing that technology, alongside further innovations, with other organisations as we expand our global coverage.”

“The advent of generative AI creates vast risks and opportunities. It also shifts perspectives on existing machine learning and artificial intelligence technologies. It has been exciting to see how the newest AI can be combined with non-generative AI and other technologies to create new solutions to the problems that keep our customers awake at night.”

2. What are some of the challenges organisations are facing that you can help them with? 

“Ripjar serves customers in several areas. Our anti-financial crime customers are trying to make sense of the ever-expanding business risks presented by their customers and counterparties in a tumultuous world. We’re able to help them in that journey, whether it’s responding to changing Russian or Middle East sanctions or aligning with the massive political changes that have impacted PEP (politically exposed persons) regimes all around the world.”

“Using foundational AI, we find broad risks in the media – which is often referred to as negative news or adverse media. That means reading through millions of daily news articles to identify risk signals which are important to those handling the world’s global payments or trading internationally. Agility is a key requirement for our customers, and machine learning and AI make it possible to make sense of huge quantities of structured and unstructured data quickly and accurately.”

“Our cyber customers are sophisticated threat investigators working in complex environments, including a number of MSSPs. They rely on our data fusion and investigations software to identify potential threats to their data and ultimately their businesses.”

Looking at the future

The shadows of GenAI, looming threats, and a shifting regulatory landscape loom over the global cybersecurity and IT communities, but the tone is also optimistic. While every leader we spoke to at Visions CIO + CISO acknowledged the threat posed by emerging technologies, many were also excited by the potential of GenAI tools to detect threats and help strengthen cybersecurity defenses.

Given how quickly the circumstances surrounding cybersecurity have changed in just a few short years, it’s almost impossible to predict where we’ll be by the end of the decade. However, the experts we spoke to at Visions are approaching the future with both eyes open — watchful for new risks, and determined to capitalise on new opportunities. 

The next Visions CIO + CISO Summit (Autumn, UK) is taking place at the Allianz Stadium in London on 13 – 15 October, 2025. Learn more and register to attend here.

  • Cybersecurity
  • Events
  • Host Perspectives

Tech Show London is coming to Excel March 12-13. Register for your free ticket now!

Unlock unparalleled value with a single ticket that gets you free access to five industry-leading technology shows. Welcome to Cloud & AI Infrastructure, DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World, and Data Centre World.

Tech Show London has it all. Don’t miss this immersive journey into the latest trends and innovations.

Discover tomorrow’s tech today

Unleash Potential, Embrace the Future. Hear from the greatest tech minds, all in one place.

Dive into a world where cutting-edge ideas shape your tomorrow. Tech Show London is the epicentre of technology innovation in London and beyond, hosting the brightest minds in technology, AI, cyber security, DevOps, and cloud all under one roof.

The Mainstage Theatre is not just a stage; it’s a launchpad for innovative ideas. Witness a stellar lineup featuring world-renowned experts from across the tech stack, influential C-level executives, key government figures, and the vanguards of AI and cybersecurity. All ready to share ideas set to rock the industry.

GLOBAL INSPIRATION, LOCAL IMPACT

Seize the opportunity to be inspired by global visionaries. Furthermore, with speakers from the UK, USA, and beyond, prepare to be inspired by transformative concepts and actionable strategies from technology insiders, ensuring your business stays ahead in an ever-evolving technology landscape.

Where the future of technology takes the stage

Secure your competitive edge at Tech Show London, the UK’s award-winning convergence of the industry’s brightest tech minds.

On 12-13 March 2025, gain vital foresight into the disruptive technologies reshaping your market, and position your organisation at the forefront of technology’s next frontier.

If you’re defining your business’s tech roadmap, register for your free ticket to join us at Excel London.

Register for FREE

Register for your Ticket

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Event Newsroom
  • Infrastructure & Cloud

February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open…

February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open Banking revolution

Welcome to the latest issue of Interface magazine!

Read the latest issue here!

NatWest: Banking open for all

Head of Group Payment Strategy, Lee McNabb, explains how a customer-centric vision, allied with a culture of innovation, is positioning NatWest at the heart of UK plc’s Open Banking revolution: “The market we live in is largely digital, but we have to be where customers are and meet their needs where they want them to be met. That could be in physical locations, through our app, or that could be leveraging the data we have to give them better bespoke insights. The important thing is balance… At NatWest, we’ll keep pushing the envelope on payments for a clear view of the bigger picture with banking that’s open for everyone.”

EBRD: People, Purpose & Technology

We speak with the European Bank for Reconstruction & Development’s Managing Director for Information Technology, Subhash Chandra Jose. With the help of Hexaware’s innovation, his team are delivering a transformation programme to support the bank’s global investment efforts: “The sweet spot for EBRD is a triangular union of purpose, people, and technology all coming together. This gives me energy to do something innovative every day to positively impact my team and our work for the organisation across our countries of operation. Ultimately, if we don’t get the technology basics right, we can’t best utilise the funds we have to make a real difference across the bank’s global efforts.”

Begbies Traynor Group: A strategic approach to digital transformation

We learn how Begbies Traynor Group is taking a strategic approach to digital transformation… Group CIO Andy Harper talks to Interface about building cultural consensus, innovation, addressing tech debt and scaling with AI: “My approach to IT leadership involves creating enough headroom to handle transformation while keeping the lights on.”

University of Cinicinnati: Where innovation comes to life

Bharath Prabhakaran, Chief Digital Officer and Vice President at the University of Cincinnati (UC), on technology, innovation and impact, and how a passion for education underpins his team’s work. “The foundation of any digital transformation in my opinion is people, process, technology – in that order,” he states. “People and culture are always the most challenging areas to evolve because you’re changing mindset and behaviour; process comes a close second as in most organisations people are wedded to legacy ways of working. In some respects, technology is the easy part, you always implement the tools but they’ll not be effective if you don’t have the right people and processes.”

IT: A personal career retrospective

It’s fascinating, looking back at something as complex and profoundly impactful as IT. And for Claudé Zamboni, who is preparing to retire after over 40 years in the sector, it’s been an incredible time to be deeply involved in technology. “There have been monumental changes from when I first entered IT, where it was basically a black box,” says Zamboni. “People didn’t know what the IT team was doing, and those in IT would just handle problems without telling anyone how. It only started to become more egalitarian when the internet got more pervasive. We realised that with information being available everywhere, we would lose the centralisation function of IT. But that was okay, because data is universal.”

Read the latest issue here!

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech