Anna Collard, SVP Content Strategy & Evangelist KnowBe4 – Africa, on leveraging AI-driven cybersecurity systems to fight cybercrime

Artificial Intelligence is no longer just a tool. It is a game-changer in our lives, our work as well as in both cybersecurity and cybercrime. While businesses leverage AI to enhance defences, cybercriminals are weaponising AI to make these attacks more scalable and convincing​.  

In 2025, research shows AI agents, or autonomous AI-driven systems capable of performing complex tasks with minimal human input, are revolutionising both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants. They function as self-learning digital operatives that plan, execute, and adapt in real time. These advancements don’t just enhance cybercriminal tactics, they may fundamentally change the cybersecurity battlefield. 

How Cybercriminals Are Weaponising AI: The New Threat Landscape 

AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) highlights how AI has democratised cyber threats. Thus enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware​. Similarly, the Orange Cyberdefense Security Navigator 2025 warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. And the 2025 State of Malware Report by Malwarebytes notes, while GenAI has enhanced cybercrime efficiency, it hasn’t yet introduced entirely new attack methods. Attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents. Autonomous AI systems are capable of planning, acting, and executing complex tasks—posing major implications for the future of cybercrime. 

Here is a list of common (ab)use cases of AI by cybercriminals:  

AI-Generated Phishing & Social Engineering 

Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages. Without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target’s online activity. AI-powered Business Email Compromise (BEC) scams are increasing. Attackers use AI-generated phishing emails sent from compromised internal accounts to enhance credibility​. AI also automates the creation of fake phishing websites, watering hole attacks and chatbot scams. These are sold as AI-powered ‘crimeware as a service’ offerings, further lowering the barrier to entry for cybercrime​. 

Deepfake-Enhanced Fraud & Impersonation 

Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most famous 2024 incident was UK based engineering firm Arup that lost $25 million after one of their Hong Kong based employees was tricked by deepfake executives in a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions.  

Cognitive Attacks  

Online manipulation—as defined by Susser et al. (2018)—is “at its core, hidden influence, the covert subversion of another person’s decision-making power”. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation. By everaging digital platforms, state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content. They are subtly shaping public perception while evading detection. These tactics are deployed to influence elections, spread disinformation and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don’t just compromise systems—they manipulate minds, subtly steering behaviours and beliefs over time without the target’s awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter.  

The Security Risks of LLM Adoption 

Beyond misuse by threat actors, business adoption of AI-chatbots and LLMs introduces significant security risks. Especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries. This enables new attack vectors, including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs.  

Moreover, many modern LLMs now function as Retrieval-Augmented Generation (RAG) systems. Dynamically pulling in real-time data from external sources to enhance their responses. While this improves accuracy and relevance, it also introduces additional risks, such as data poisoning, misinformation propagation, and increased exposure to external attack surfaces. A compromised or manipulated source can directly influence AI-generated outputs. Potentially leading to incorrect, biased, or even harmful recommendations in business-critical applications. 

Additionally, bias within LLMs poses another challenge. These models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgements, potentially exacerbating vulnerabilities rather than mitigating them. As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment, especially in RAG-powered models, are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making. 

When AI Goes Rogue: The Dangers of Autonomous Agents 

With AI systems now capable of self-replication, as demonstrated in a recent study, the risk of uncontrolled AI propagation or rogue AI – AI systems that act against the interests of their creators, users, or humanity at large – is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously. Particularly when autonomous AI agents are granted access to data, APIs, and external integrations. The broader an AI’s reach through integrations and automation, the greater the potential threat of it going rogue. This means robust oversight, security measures, and ethical AI governance essential in mitigating these risks. 

The Future of AI Agents for Automation in Cybercrime 

A more disruptive shift in cybercrime can and will come from AI Agents. These transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing Agentic AI for business use. However, in the hands of cybercriminals, its implications are alarming. These AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also allow attackers to scrape massive amounts of personal data from social media platforms. They can automatically compose and send fake executive requests to employees. And, for example, analyse divorce records across multiple countries to identify individuals for AI-driven romance scams, orchestrated by an AI agent. These AI-driven fraud tactics don’t just scale attacks, they make them more personalised and harder to detect. Unlike current GenAI threats, Agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk​. 

How Defenders Can Use AI & AI Agents 

Organisations cannot afford to remain passive in the face of AI-driven threats. Security professionals need to remain abreast of the latest developments. Here are some of the  opportunities in using AI to defend against AI:  

AI-Powered Threat Detection and Response

Security teams can deploy AI and AI-agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns. These might otherwise go unnoticed. AI can create dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection​. For example, as outlined by researchers of Orange Cyber Defense, AI-assisted threat detection is crucial as attackers increasingly use “Living off the Land” (LOL) techniques that mimic normal user behaviour. Making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses. 

However, despite the potential of AI-agents, human analysts still remain critical. Their intuition and adaptability are essential for recognising nuanced attack patterns. They can leverage real incident and organisational insights to prioritise resources effectively. 

Automated Phishing and Fraud Prevention

AI-powered email security solutions can analyse linguistic patterns, and metadata to identify AI-generated phishing attempts before they reach employees, by analysing writing patterns and behavioural anomalies. AI can also flag unusual sender behaviour and improve detection of BEC attacks​. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams. AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. However, real-time deepfake detection remains a challenge, as technology continues to evolve. 

User Education & AI-Powered Security Awareness Training

AI-powered platforms deliver personalised security awareness training. They can simulate AI-generated attacks to educate users on evolving threats, helping train employees to recognise deceptive AI-generated content​. And strengthen their individual susceptibility factors and vulnerabilities.  

Adversarial AI Countermeasures

Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques. For example, deploying deception technologies – such as AI-generated honeypots – to mislead and track attackers. As well as continuously training defensive AI models to recognise and counteract evolving attack patterns. 

Using AI to Fight AI-Driven Misinformation and Scams

AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts​. Counter-attacks, like those shown by research project Countercloud or O2 Telecoms AI agent “Daisy” show how AI based bots and deepfake real-time voice chatbots can be used to counter disinformation campaigns as well as scammers by engaging them in endless conversations to waste their time and reducing their ability to target real victims​. 

In a future where both attackers and defenders use AI, defenders need to be aware of how adversarial AI operates. And how AI can be used to defend against their attacks. In this fast-paced environment, organisations need to guard against their greatest enemy: their own complacency. While at the same time considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the next shiny AI security tool, decision makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity.  

To stay ahead in this AI-powered digital arms race, organisations should:  

  • Monitor both the threat and AI landscape to stay abreast of latest developments on both sides. 
  • Train employees frequently on latest AI-driven threats, including deepfakes and AI-generated phishing. 
  • Deploy AI for proactive cyber defense, including threat intelligence and incident response. 
  • Continuously test your own AI models against adversarial attacks to ensure resilience. 
  • Cybersecurity
  • Data & AI

Mike Puglia, General Manager, Kaseya Cybersecurity Labs, on how the need for regulatory support to better support industries when tackling cybercrime

Cyberattacks keep coming hard and fast, but things are beginning to change. In the past few months, law enforcement has announced arrests of three people in the Marks & Spencer breach, seven members of the hacking group NoName057, five affiliates of Scattered Spider and also disrupted the infrastructure of gangs such as Flax Typhoon, Star Blizzard and others.  

Earlier this year, the UK retail industry felt the pressure. Brands, including Marks & Spencer, Harrods and Co-op – and by proxy, their customers – became victims of the hacking group, Scatter Spider. Other businesses are now on high alert as this wave of security breaches is expected to continue. For as long as bad actors can reap rewards and the risk of consequences remains small, they will keep attacking. Ransomware-as-a-service lowers the bar to entry further, allowing even those without specialised skills to launch successful ransomware campaigns.

Along with the threats, regulatory pressure on businesses is growing. Organisations must be able to prove they have strong security defences in place or risk paying hefty fines for non-compliance. However, this means we are essentially punishing the victim, not the perpetrator. By putting the onus on the victims to protect themselves, we are missing an important truth… Because there is no bullet-proof defence, even the best security strategies will not end cybercrime for good.

It’s Time to Treat Cybercrime as Crime

What the industry needs instead is a change in how we approach cybercrime. Rather than blaming the victims, we must start treating it as the serious criminal activity it is. It is high time we addressed cybercrime’s fundamental drivers. Opportunity, motive and the widespread perception that criminals can still get away without punishment. As is the case with physical crime, it takes a two-pronged approach to curb cybercrime: Prevention – and an effective response.

Those who attempt physical theft, for example, face trials and potentially prison. While we have seen a growing number of cybercriminals arrested in recent months, the truth we are only scratching the surface. In the digital world, everything is accessible from everywhere, all the time. This creates an inherent vulnerability that makes perfect protection impossible. In many cases, it also makes it much harder to track down the offenders and hold them accountable.

The Problem with Cryptocurrency and Jurisdiction

The cybercrime landscape has also undergone a significant transformation. While in the past, hackers were mostly focused on stealing financial data, there has been a dramatic shift towards ransomware. It’s far easier to encrypt an organisation’s data and demand a ransom than finding buyers for stolen credit card info.

This transformation has further accelerated because cryptocurrency allows cyber attackers to be paid in anonymous currency. Anywhere in the world, at any time. Previously, criminals had to physically collect payments or transfer money to traceable bank accounts. Now, they can operate with anonymity whilst easily converting their loot into real euros, pounds and dollars. This means ‘following the money’ is no longer a useful way for law enforcement to track nefarious activity. If we made it impossible for criminals to anonymously convert cryptocurrency into real currency, we could change the risk-reward calculation.

The second key issue with fighting cybercrime is the question of jurisdiction. Many cybercriminals are based in countries where western governments have no recourse. When hackers operate from non-cooperative jurisdictions, it may be impossible to extradite them. And they may find their activities tolerated by their local government or even supported.  As we have seen with the recent arrests – the threat actors were outside of Russia and China – where many attacks come from.

These two factors – anonymous payment systems and safe havens – create an environment where cybercrime can and will continue to flourish. While organisations can do their best to make it harder for criminals to attack, it is foolish to believe individual businesses will be able to solve the cybercrime problem on their own.

Stop Blaming the Victim

So, what needs to happen? First, the victim-blaming approach must change. We simply cannot regulate every business to become an impenetrable fortress. When a person is physically robbed, police respond to investigate the crime and help recover stolen property. With cybercrime, victims face reputational damage, fines and higher insurance premiums. Incidents often raise questions about where the business’ cybersecurity strategy failed, rather than a recognition that a crime has been committed against them.

A first step forward towards solving the cybercrime problem would require governmental and societal recognition that cyberattacks represent crimes against businesses and individuals, not merely failures of those organisations to adequately defend themselves. While many countries have ramped up policing efforts against cybercrime, these are generally underfunded considering the scale of the problem.

Secondly, we need to urgently address the anonymous payment systems that keep fuelling cybercrime. This is not an easy problem to solve, but governments must find better ways to trace and regulate how cryptocurrency is converted into real money.

It is also time we introduced real and severe consequences for cybercriminals. The number one deterrent to any type of crime is fear of being caught and punished. The internet has essentially eliminated this, enabling hackers to operate from nations that turn a blind eye. To address this will require more political pressure on ‘safe harbour’ countries to charge, punish and extradite cybercriminals. Where nations refuse to cooperate, potential sanctions such as restrictions on internet connectivity might force governments to reconsider their tolerance for criminal activities.

Finally, we need to acknowledge that regulations such as GDPR, PCI and NIS have their limits. Despite increasingly complex compliance requirements, cybercrime has continued to grow. While regulations can provide critical and much-needed guidance to businesses, they must be combined with properly funded law enforcement – empowered with tools to bring criminals to justice across jurisdictions.

To truly disrupt the criminal ecosystem, systemic changes are needed. We are starting to see governments give law enforcement the tools they need, but it is very early in that process. Because ultimately, we will not solve the cybercrime problem with defence measures alone.

About Kaseya

At Kaseya, our mission is to empower you to simplify and transform IT and cybersecurity management with innovative platform solutions.

Our Mission:

Since 2000, Kaseya has delivered the technology that IT departments and managed service providers need to reach new heights of success. More than 500,000 IT professionals globally use Kaseya products to manage and secure 300 million devices.

Kaseya’s commitment to our customers goes beyond listening to your needs and puts words into action to deliver innovative solutions that empower your business. But we don’t stop there. Kaseya’s first-of-its-kind Partner First Pledge program shares the risk our partners experience because we know a true partner is with you through the ups and downs of life.

  • Cybersecurity
  • Digital Strategy

Three in four senior corporate executives believe increasing financial investment is necessary to protect intangible trade secrets, according to new analysis commissioned by global law firm CMS and conducted by The Economist Intelligence Unit…

A new report released today commissioned by global law firm CMS and conducted by The Economist Intelligence Unit reveals that trade secret protection is rapidly rising up the corporate agenda as firms widely recognise the commercial imperative to protect vulnerable assets in light of more business conducted online and across borders. 

With more companies relying on an ever-greater proportion of intangible or ‘secretive’ assets, the findings show a marked shift in how executives are planning to tackle employee leaks, supply chain vulnerability, corporate espionage and cyber-attacks. According to a global survey of 314 senior executives across a range of industries, the three most valuable types of proprietary information held by organisations are customer databases (42%), product technology (40%), and R&D information (23%).

The report, ‘Open secrets? Guarding value in the intangible economy’, reveals that trade secret protection is no longer just a concern for the legal department, but a top priority at the board and C-suite level. The majority (75%) of respondents agree that increasing financial investment was necessary to protect their trade secrets. Measures must be taken to raise awareness of these assets more widely among employees, with 28% of respondents viewing a lack of in-house experience with trade secrets as a safeguarding challenge.

The most significant threats to the security of trade secrets are weaknesses in cybersecurity (49%) and employee leaks (48%). As firms increasingly store and share sensitive information across virtual and distributed workforces, companies face a range of unpredictable insider threats, including intentional leaks from disgruntled employees. This is the biggest concern for the UK, whilst the fear of cybercrime is front-of-mind for business leaders in France, China and the US, worsened by poor internal cybersecurity expertise.

Tom Scourfield, Co-Head of IP Group at CMS said: “Fifty years ago, a company’s value was derived solely from its physical capital. Today, the world’s most successful firms are built on intangible assets that are often secretive by nature – algorithms, customer data, product formulae. This report shows that firms must start taking a more holistic approach to protecting these intangible assets, from computer software to company values balancing restrictions with incentives – and importantly engage every level of their workforce. Without this strategy, protecting trade secrets will remain an uphill battle for many.”

Significantly, four out of five of the top measures that companies are planning to implement over the next two years focus on minimising employee leaks. These range from harsher measures such as closer surveillance of employee’s electronic activity through to more collaborative approaches that centre on improving the company culture and introducing innovative staff incentives.

“Willingness to snoop” is highest in China, Singapore and the United States. It is also a top preferred measure for executives in Technology, Media and Telecommunications, with 36% of respondents planning to implement surveillance over the next two years, reflecting the growing tensions between employers and employees in the technology sector. Efforts to improve work culture are clearly felt more widely in other industries, with almost a third (31%) calling for corporate values to shift towards encouraging trade secret protection.

As companies become increasingly wary of cybercrime and ransomware attacks, the majority (82%) agree that leveraging cybersecurity software is key to protecting their organisation in the long-term. However, only half (53%) believe it is the most effective deterrent or have already restricted digital and physical access to confidential information (55%). 

Hannah Netherton, Employment Partner at CMS adds: “It’s overwhelmingly clear that the threat of employee leaks is driving a need for new strategies to guard valuable assets. Companies must find the right balance between perfecting their cybersecurity protections and creating a healthy company culture that incentivises trade secret protection and encourages speaking up through appropriate channels – even the most rigorous of protocols won’t prevent every employee leak or a disgruntled whistleblower. 

“The pandemic has opened doors to a digital workspace, where it’s easier for employees to accidentally or purposefully access and expose confidential information. It is impossible to protect trade secrets if employees are not aware of the sensitivities around these assets, so putting the right values and measures in place has never been more important to an organisation’s success.”

Aukje Haan, Co-Head of Commercial at CMS added: “With the introduction of the Directive on Trade Secrets, businesses will get a range of options to safeguard their most prized proprietary information. However, there are prerequisites to be able to invoke those options. Identifying and taking reasonable steps will be crucial, from NDAs, cybersecurity efforts through to employee regulation, as well as specific requirements depending on the nature of the business, e.g., online businesses will need to take more cybersecurity measures whereas manufacturing companies will need to take more physical measures on the factory floor.“

With industrial organisations ramping connectivity to accelerate digital transformation and remote work, threat actors are weaponising the software supply chain and ransomware attacks are growing in number, sophistication and persistence.

A new report from Nozomi Networks Labs finds cyber threats to industrial and critical infrastructure have reached new heights as threat actors double down on high value targets. With industrial organisations ramping connectivity to accelerate digital transformation and remote work, threat actors are weaponising the software supply chain and ransomware attacks are growing in number, sophistication and persistence. 

“This report leaves no doubt that the time for action is now,” said Nozomi Networks Co-founder and CTO Moreno Carullo. “The recent Oldsmar, Florida, water system attack and the ongoing SolarWinds investigation are dramatic reminders that the critical infrastructure and other systems that we rely on are vulnerable and at constant risk of attack. Understanding the effectiveness of defenses against the emerging threat and vulnerability landscape is vital to success.” 

Nozomi Networks’ latest “OT/IoT Security Report,” gives cybersecurity professionals an overview of the OT and IoT threats analysed by Nozomi Networks Labs security research team. The report found: 

  • Ransomware activity continues to dominate the threat landscape, growing in sophistication and persistence. In addition to demanding financial payments, Ryuk, Netwalker, Egregor and other ransomware gangs are exfiltrating data and deeply compromising networks for future nefarious activities. 
  • Supply chain threats and vulnerabilities show no signs of slowing. The unprecedented SolarWinds attack not only infected thousands of organisations including U.S. Government agencies and critical infrastructure, but it also demonstrates the massive potential for attack via supply chain weaknesses. 
  • Threat actors are targeting healthcare. Nation states are using off-the-shelf red team tools to execute attacks and perform cyber espionage against facilities involved with COVID-19 research. Ransomware crews are targeting healthcare providers and hospitals, in some cases disrupting patient treatment. 
  • Analysis of 151 ICS- CERTs published in the last six months found memory corruption errors are the dominant vulnerability type for industrial devices.

“Urgency has never been higher. As industrial organisations race toward digital transformation, threat actors are taking advantage of greater OT connectivity to create attacks that aim to disrupt operations and threaten the safety, profitability and reputation of enterprises around the globe,” said Nozomi Networks CEO Edgard Capdevielle. “While threats may be on the rise, the technologies and practices to defeat them are available today. We encourage organisation to act quickly to implement the recommendations in this report.  It’s never been more important or more possible to take the necessary steps to detect and defend critical infrastructure and industrial operations.”

Nozomi Networks’ “OT/IoT Security Report” summarises the biggest threats and risks to OT and IoT environments. The report provides information on 18 specific threats that IT and OT security teams should study as they model threat vectors and evaluate risks across operational technology systems. It includes 10 key recommendations and actionable insights to improve defenses against the current threat landscape.