Kevin Janzen, CEO of Gaming & EdTech AI Studio at Globant, on how AI will change the way games are made and expand the market

Every major games studio is now experimenting with artificial intelligence. From generating NPC dialogue to automating animation and video assets, AI promises to speed up production and lower costs for developers.

According to Boston Consulting Group (BCG), the gaming industry finds itself at a crossroads, looking to regain the momentum it felt between 2017 and 2021, when revenue surged from $131 billion to $211 billion. AI could be at the forefront of this pivotal moment.

But as AI becomes central to how games are built, studios face a major challenge: adopting automation without losing authenticity. For developers and retailers alike, this becomes a business concern that deserves close attention. Creativity sits at the heart of gaming, and the choices studios make today will influence what reaches players tomorrow. For the technology channel, this transformation means faster release cycles, broader product diversity, and a need for sharper forecasting.

A New Phase in Gaming’s Evolution

For most of gaming’s history, every era has been defined by its visuals. Each generation has delivered stylistic, immersive worlds, from the blocky charm of Minecraft to the cinematic realism of Red Dead Redemption 2.

Now, the real change is happening behind the scenes. AI is reshaping how games are built and experienced. Development teams are using AI to handle time-consuming tasks such as large-scale world-building and animation. This frees artists to focus on what players remember – the design and storytelling.

Players are already seeing the benefits in their gameplay. AI lets games adjust difficulty based on a player’s skill level, or change dialogue based on a player’s choices. This makes gaming worlds feel more realistic, responsive and personal.

With budgets continuing to climb for gaming studios, these new features matter. AI gives studios breathing room to experiment. Smaller teams can take creative risks, and established developers can test new ideas without derailing production. However, efficiency and cost savings aren’t the only gains: AI is creating space for developers to be more ambitious than ever before.

Automation and Artistry

For all its promise, AI also brings creative risk. Gamers notice when a quest feels repetitive or when dialogue sounds mechanical. And if AI is used carelessly, developers risk losing authenticity.

That sense of care is what keeps players invested, whether it’s hand-drawn detail or player-driven choices. The best games show what happens when technology supports vision rather than replacing it.

That’s why the industry’s embrace of AI is such a gamble. Used well, AI can help developers create richer, more personalised worlds. But used carelessly, it risks stripping away the artistry that makes games memorable.

The Ripple Effect Across the Supply Chain

As AI becomes a standard tool, development processes are speeding up and opening new creative possibilities. Independent studios now have access to the kind of production power once limited to major developers. That shift means faster pipelines and ultimately, more games reaching the market.

For retailers and resellers, this brings both opportunity and pressure. A consistent stream of releases can sustain sales across the year, while lower production costs encourage more niche or experimental games that appeal to new audiences. Greater variety and volume benefit the market, but they also make it harder to predict which games will break through.

Players are becoming more aware of how games are made and of AI’s role in development. They’re starting to ask not only how a game plays, but also how it was built. Understanding the intent behind a studio’s use of AI – whether it treats AI as a genuine creative tool or relies on it as a shortcut – will help retailers anticipate demand and spot the games with long-term potential.

The Right Way to Play the AI Game

The studios using AI most effectively have a few things in common. They keep AI in the background, using it to manage routine work, such as generating textures and landscapes, so creative teams can focus on narrative and emotional tone.

They also use AI to make experiences more personal. Thoughtful application of adaptive systems allows games to respond to individual play styles, adjusting difficulty and pacing to keep players engaged. This level of design deepens engagement and gives players a sense that the world responds to them personally.
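Adaptive systems like these are typically built around a simple feedback loop. As a hedged sketch of the idea – the function name, target win rate, and step size below are illustrative assumptions, not taken from any particular engine:

```python
# Minimal dynamic-difficulty sketch: nudge difficulty toward a target win
# rate, clamped to [0, 1]. All names and thresholds are hypothetical.

def adjust_difficulty(current: float, recent_win_rate: float,
                      target: float = 0.5, step: float = 0.1) -> float:
    """Raise difficulty when the player wins too often, lower it when they struggle."""
    if recent_win_rate > target:
        current += step   # player is cruising -> make it harder
    elif recent_win_rate < target:
        current -= step   # player is struggling -> ease off
    return max(0.0, min(1.0, current))  # keep difficulty in bounds

difficulty = 0.5
for win_rate in [0.8, 0.8, 0.3]:  # simulated rolling win rates
    difficulty = adjust_difficulty(difficulty, win_rate)
print(round(difficulty, 1))  # 0.6
```

In practice, studios feed in far richer signals than a rolling win rate – deaths, retry counts, time-on-task – but the clamp-and-nudge structure is the same.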

AI is also making games more inclusive. More than 400 million people around the world play with a disability, and new tools are expanding access – from adaptive controls to real-time translation that lets players connect across languages. As gaming becomes more diverse, the audience grows for everyone, including retailers, who can reach a larger, more engaged customer base.

When automation complements gaming artistry, it strengthens the relationship and trust between the developer and the player. Creativity becomes the main focus again, and that’s what keeps players loyal.

Balancing Innovation and Trust

AI is fast becoming integral to how games are conceived, built, and experienced — and that shift will reshape the entire value chain. For developers, success will come from balancing automation with artistry, ensuring that AI enhances creativity rather than replaces it.

For retailers, distributors, and partners, this transformation offers both opportunity and responsibility. A faster, more diverse release pipeline will bring fresh sales potential, but also greater complexity in forecasting and curation. The winners in this new phase of gaming will be those who can spot titles where AI adds genuine depth, inclusivity, and player connection — not just production speed.

Handled thoughtfully, AI won’t just change how games are made; it will expand the market for everyone involved in bringing those experiences to players. That’s a game worth playing for the entire tech channel.

Learn more at globant.com/studio/games

  • Data & AI
  • Digital Strategy
  • People & Culture

JP Cavanna, Director of Cybersecurity at Six Degrees, on balancing the risks and benefits of AI in cyber defence strategies

Undeniably, AI is here to stay. Having become part of day-to-day life, it’s hard to remember what life was like without it. But when it comes to cybersecurity, is it causing more harm than good?

Recent research shows that 73% of organisations have already integrated AI into their security posture. The technology is clearly becoming a cornerstone of modern cybersecurity. Organisations are turning to AI not just as a tool, but as a partner in security operations, leveraging its capabilities to identify malicious activity faster, guide investigations, and automate repetitive tasks.

For it to be truly effective, though, AI must be paired with human expertise – but this is where organisations are starting to become complacent. Given the growing sophistication of cyber-attacks, and even AI-powered attacks, many are removing the human element while expecting AI tools to do all the work for them, leaving them even more vulnerable to threats. This overreliance risks creating blind spots, where critical thinking, contextual understanding, and instinct are overlooked. Without the balance of human judgement, AI can amplify mistakes at scale, turning efficiency into exposure.

The Cybersecurity Paradox

This situation puts many organisations in a potentially difficult position. On the one hand, AI can significantly improve the efficiency of security operations. In the typical SOC, for example, AI technologies can process alerts in around 10-15 minutes. This represents a significant improvement over human analysts, who can easily require twice as long for the same task.

Aside from the obvious efficiency gains, applying AI to these repetitive, time-pressured processes can also significantly reduce the scope for human error and, in turn, take considerable pressure off security analysts – going some way to battling alert fatigue, an increasingly well-documented and persistent problem. In these circumstances, valuable human experience and specialist expertise can instead be more effectively applied to complex investigations, strategic decision-making, and other higher-value priorities.

On the flipside, however, AI remains prone to generating inaccurate or misleading insights, and users may not realise they are applying the wrong information to potentially serious security issues. Similarly, habitual blind trust in AI outputs can easily erode performance levels and even introduce new vulnerabilities. There is also scope for sensitive data to enter public environments, with the potential to cause compliance issues. This kind of information can also reappear in future versions of the AI model in question, therefore resulting in further data exposure risks.

Parallels with IoT Adoption

The situation mirrors the early days of IoT adoption, when the rush to innovate would often override security considerations. In this context, human oversight and vigilance are extremely important. Clear governance frameworks, defined accountability, and continuous monitoring must underpin any AI deployment, ensuring that innovation does not outpace risk management or compromise long-term resilience.

A Growing Arms Race

If that wasn’t challenging enough, threat actors are also in on the AI boom in what has already been described as an ‘arms race’. In practical terms, AI tools are already widely used to create more convincing phishing attacks free from some of the more obvious traditional tell-tale signs of criminal intent, such as imperfect grammar or a suspicious tone.

Deepfake technology has also raised the stakes. We’ve all seen how convincing AI-generated video has become. This is now finding its way into real-world incidents, with one fake video reportedly causing a CFO to authorise a large financial transfer.

At the same time, technology infrastructure is constantly under attack by AI-powered tools. They can be used to analyse defensive systems and identify weaknesses faster than humans. The net result of these developments is that defenders constantly play catch-up, as they can only respond to new attack vectors once discovered. The underlying takeaway is that at present, AI cannot be trusted to operate autonomously. Instead, human intuition, scepticism and contextual understanding remain essential to spotting emerging tactics.

As attackers refine their methods at machine speed, organisations need to resist the temptation to match automation with automation alone. They must double down on strategic thinking and continuous skills development.

Balancing Benefits and Risk

So, where does this leave security leaders who are looking to balance the benefits and risks? Firstly, and to underline a fundamental point, while AI offers scale and speed, it cannot replace critical human oversight. Organisations should view AI as an enhancer, not a replacer. Success lies in promoting partnership, not substitution.

Strong governance is vital. This should start with clear AI usage policies that define what can and cannot be shared with AI tools, while proper data classification and access control ensure that sensitive information is protected. In addition, regular validation of AI outputs can help to prevent inaccurate or misleading results from being unnecessarily acted upon.

Then there are the perennial challenges associated with employee awareness training, which is vital for avoiding complacency and understanding the limitations of generative AI tools. Cyber leaders should also monitor how AI is being used inside and outside the corporate environment, as staff often experiment with tools on personal devices.

Get this all right, and security teams can put themselves in a very strong position to embrace AI, safe in the knowledge that they have the guardrails and processes in place to balance innovation and efficiency with effective human-led oversight. Ultimately, success will depend not on how much AI is deployed, but on how intelligently it is governed and refined alongside the people responsible for securing an organisation.

Learn more at Six Degrees

  • Artificial Intelligence in FinTech
  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy

A 2026 survey of nearly 1,000 C-suite executives found that 87% of companies now use AI in their core operations. However, AI errors and rework continue to cost businesses over $67bn a year

Loopex Digital’s January 2026 analysis identified several common mistakes companies make when relying on AI.

1.  Giving AI Too Much Control in HR

AI-led hiring filters out 38% of top-level candidates before human review because it relies on keyword matching. Candidates respond by adjusting CVs to fit those words, often hiding real experience.

“When we started to use AI in our hiring process, we saw some strong candidates get rejected,” said Maria Harutyunyan, co-founder of Loopex Digital. “Out of 100 applicants, the 2 candidates that would’ve been hired didn’t make it because they used different wording instead of the exact keywords.”

How to fix this: “We simplified our job descriptions, removed buzzwords that didn’t matter, and limited AI to shortlisting. The quality of hires improved immediately,” said Maria.
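The keyword-matching failure described in this example can be illustrated with a minimal sketch. The required keywords and CV text below are hypothetical, and real applicant-tracking systems are more sophisticated than this:

```python
# Naive keyword screening: pass only if every required keyword appears
# verbatim. Keywords and CV text are invented for illustration.

REQUIRED_KEYWORDS = {"seo", "link building"}  # assumed job-ad keywords

def keyword_screen(cv_text: str) -> bool:
    """Return True only if every required keyword appears exactly in the CV."""
    text = cv_text.lower()
    return all(kw in text for kw in REQUIRED_KEYWORDS)

# A candidate with real experience, phrased differently, is rejected:
print(keyword_screen("5 years of search engine optimisation and outreach"))  # False
# A keyword-stuffed CV passes:
print(keyword_screen("SEO specialist with link building experience"))  # True
```

Exact-substring matching is why a candidate who writes “search engine optimisation” can be screened out of a role asking for “SEO”, while a keyword-stuffed CV sails through.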

2.  Trusting AI Notes Without Review

AI note-takers often struggle with background noise and poor audio, leading to inaccurate notes. In many cases, up to 70% of summaries focus on side comments rather than decisions.

“We tested 10+ AI note-takers across 50 of our regular meetings. Most of the main summaries ended up being jokes and half-finished sentences,” said Maria. “Key decisions were either unclear or missing entirely from the AI summary.”

How to fix this: “We limited AI notes to action points and decisions,” said Maria. “Everything else is filtered out or reviewed manually, cutting note clean-up from half an hour to minutes.”

3.  Letting Artificial Intelligence Replace Your Customer Support Team

When customers realise they’re speaking to AI, call abandonment jumps from 4% to 25%. Even when customers stay on the line, AI tools can get policy and pricing details wrong, leading to confusion, complaints, refunds, and extra clean-up work for support teams.

How to fix this: Use AI only for simple FAQs, not complex cases. Define clear escalation rules for cancellations, complaints, and legal issues and route those to a human immediately. Restrict your AI from creative responses in support, only letting it use approved templates.
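The escalation rules suggested above can be sketched as a simple routing table. The intent labels and template text here are hypothetical, not drawn from any specific support platform:

```python
# Route simple FAQs to approved templates; escalate everything sensitive
# (and anything unrecognised) to a human. All labels are illustrative.

ESCALATE = {"cancellation", "complaint", "legal"}
TEMPLATES = {
    "shipping": "Orders usually ship within 2 business days.",
    "returns": "You can return items within 30 days of delivery.",
}

def route(intent: str) -> str:
    """Return 'HUMAN' for sensitive or unknown intents, else an approved template."""
    if intent in ESCALATE:
        return "HUMAN"                     # complex cases go straight to a person
    return TEMPLATES.get(intent, "HUMAN")  # no template? escalate, never improvise

print(route("legal"))     # HUMAN
print(route("shipping"))  # Orders usually ship within 2 business days.
```

The key design choice is the fallback: when the bot has no approved answer, it escalates rather than generating one, which is what keeps policy and pricing details from being invented.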

  • Data & AI
  • Digital Strategy

Maxio analysis of $40B+ in billings data shows vertical focus and AI innovation driving success, while growth inflection points emerge earlier than expected

Growth remains strong for B2B SaaS and AI companies, but volatility is high, according to the B2B Growth Report by Maxio, a leading billing automation and revenue management platform. While the market is healthy overall, with the average company growing 18% year over year, more than 35% of companies experienced a decline, revealing an industry where growth increasingly depends on focus, discipline and execution rather than market momentum alone.

The report analyzed over $40 billion in billings data across 2,000+ companies from 2024-2025, revealing unexpected patterns in how growth varies by company size, business model, investment backing, and approach to AI. The findings challenge conventional assumptions about scaling thresholds, the universal benefits of AI adoption, and the predictability of growth trajectories.

“Growth didn’t disappear in 2025; it became harder to earn,” said Alan Taylor, Chief Operating Officer at Maxio. “The winners weren’t chasing every trend. Whether AI-native or traditional SaaS, the top performers stayed focused on solving real customer problems.”

Key Report Findings:

Growth is still the norm, but it’s not universal: Average company growth reached 18%, while aggregate market growth was closer to 13%, reflecting slower expansion among larger, more mature businesses. Nearly two-thirds of companies grew year over year, yet more than one-third declined. Down years remain common across all revenue bands.

Growth slows earlier than expected: The data revealed inflection points at around $5 million in billings with another slowdown beyond $25 million, not the typical $1 million, $10 million or $50 million marks, showing the operational challenges of scaling.

Vertical focus outperforms horizontal scale: Vertically focused companies grew faster than horizontal peers (20% vs 16%), reinforcing the value of specialization in competitive markets.

Capital helps, but doesn’t guarantee faster growth: Bootstrapped companies nearly matched VC-backed growth (20% vs. 22%), though scale differed dramatically with VC-funded companies nearly 4x larger. Private equity-backed companies focused more on profitability, growing 13% on average while skewing significantly larger than other cohorts.

AI accelerates, but only at the core: Truly AI-led companies, with AI central to product and positioning, grew fastest at 21%. However, AI-enhanced companies lagged at 16%, while non-AI companies quietly outperformed at 19%. This pattern suggests that AI adoption alone does not guarantee impact—AI implementation without clear value differentiation may not translate into competitive advantage.

“Average growth numbers only tell part of the story,” said Ray Rike, founder and CEO at Benchmarkit. “What stood out is how early growth friction shows up. Teams that identify where and why growth is accelerating will be best positioned to focus their resources on the market segments that provide faster growth.”

2026 Outlook

Despite a more competitive and complex environment, industry optimism is back and strong. Seventy-two percent of companies expect to grow faster in 2026 than 2025. However, leaders are entering the year with more measured expectations around buyer scrutiny, competition and the need for operational efficiency.

Sustainable growth is built, not assumed, the report found. Companies that understand their true growth levers, invest with intent, and maintain discipline as they scale will be best positioned to win in 2026.

To read the full B2B Growth Report, click here. 

About Maxio

Maxio is the billing and financial reporting platform trusted by over 2,000 SaaS, AI and subscription businesses worldwide. With $18B+ in billings under management, Maxio empowers finance teams to scale recurring revenue, automate quote-to-cash and deliver the insights needed to grow confidently.

Learn more at maxio.com

  • Data & AI
  • Digital Strategy

Interface issue 69 is live featuring Haleon, State of Montana, Techcombank, Publicis Sapient, Oakland County, Snowflake and much more

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Haleon:

Digital & Tech Head Soumya Mishra reveals how the group behind power brands like Sensodyne, Panadol and Centrum, broke away from GSK and transformed so successfully. Haleon is itself a large organisation so separating from a huge parent company was a big challenge… “It was the biggest deal of its kind and the first to happen in this industry,” Mishra adds. “We were separating to create simplification, but we had to work hard to achieve that. There were a lot of processes and policies that didn’t make sense and needed an overhaul. This had to be backed by a culture shift that was properly communicated.”

State of Montana: Cybersecurity Through A New Lens

State of Montana CISO, Chris Santucci, explains the organisation’s drastic shift towards security, and how his team has become a shining example within the wider IT centralisation sphere… “Fixing security vulnerabilities came down to having built enough social capital and trust to correct. I like to stay slightly uncomfortable as a CISO and as a human, to keep challenging myself to deliver better services and greater value. The mission is to ensure Montana citizens get the support they need while keeping services secure and protecting data.”

Publicis Sapient: Driving Banking Transformations with AI

Financial Services Director Arunkumar Gopalakrishnan reveals how Publicis Sapient is developing the playbook for delivering successful AI-led digital transformations across the financial services landscape. “Working with Generative AI today feels like standing on a new frontier. It keeps us on our toes, but it’s also what drives us – to stay relevant, deliver outcomes and connect both worlds of business and technology.”

Techcombank:

Chief Strategy & Transformation Officer, PC Chakravarti explores the operating model, Data & AI foundations, culture and talent playbook, and the partnerships turning ambition into market leading outcomes at Techcombank in Asia. “Tech is not the limiting factor – it’s about supporting people and talent to leverage capabilities to enhance business models.”

Oakland County:

Sunil Asija, Director of Human Resources at Oakland County, talks building trust with collaboration and becoming employer of choice. “To build trust the culture needs to change from top to bottom, and it needs everyone to join in that good fight.”

Click here to read the latest edition!

  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Infrastructure & Cloud
  • People & Culture

Some Europe & Middle East CIOs anticipate up to 178% ROI on AI investments, with further efficiencies expected as Agentic AI scales

Enterprises have moved decisively from AI pilots to scaled implementations, driven by proven benefits and expectations of significant financial returns, according to the Lenovo Europe & Middle East CIO Playbook 2026 with research insights by IDC. Nearly half (46%) of AI proof-of-concepts have already progressed into production, with organisations projecting average returns of $2.78 for every dollar invested.
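The two headline figures are consistent: a return of $2.78 per dollar invested corresponds to roughly a 178% ROI, since ROI measures the gain over and above the original investment. A quick illustrative check:

```python
# Illustrative arithmetic only: $2.78 returned per $1 invested -> ~178% ROI.

def roi_percent(return_per_dollar: float) -> float:
    """ROI = (return - investment) / investment, expressed as a percentage."""
    return (return_per_dollar - 1.0) * 100.0

print(round(roi_percent(2.78)))  # 178
```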

The 2026 Lenovo CIO Playbook: The Race for Enterprise AI draws on insights from 800 IT and business decision makers in Europe and the Middle East. It captures a regional inflection point and reinforces the value proposition for enterprise AI as both real and immediate, calling on CIOs to act now to avoid lagging competitors. The research marks a clear shift from AI experimentation to measurable value creation: nearly all (93%) of those surveyed plan to increase AI investments in the next 12 months at an average spending growth rate of 10%, and 94% anticipate positive returns.

Enterprise AI Adoption in Europe and the Middle East

AI is now recognised as a core engine of business reinvention and competitive advantage. However, AI adoption across the region’s markets is progressing at different speeds, reflecting varying levels of digital maturity, regulatory readiness, and investment capacity – and there is a clear overconfidence problem among CIOs. While 57% of organisations in Europe and the Middle East are approaching or already in late-stage AI adoption, only 27% have a comprehensive AI governance framework. Further limitations in data quality, in-house expertise, integration complexity, and organisational alignment are causing a mismatch between ambition and readiness.

With Agentic AI overtaking Generative AI as the top priority for CIOs in 2026, these factors will prevent many organisations from fully capitalising on AI’s potential, leaving significant returns unrealised. Moreover, 65% of organisations are focused on scaling Agentic AI across their operations within 12 months, but only 16% report significant usage today, with the majority still piloting or actively exploring use cases.

More advanced markets such as Scandinavia, Italy, and the UK are moving beyond pilots, with a majority of organisations already systematically adopting AI and increasing focus on hybrid and edge deployments to support scale. In contrast, parts of Southern and Eastern Europe remain earlier in their AI journeys, with a higher proportion of organisations still in planning or early development stages. Meanwhile, the Middle East is emerging as a fast-moving growth market, showing strong adoption momentum and a sharp year-on-year increase in interest in advanced and Agentic AI.

Across the region, hybrid deployment models dominate as organisations balance innovation with data sovereignty and operational control, while interest in Agentic AI accelerates. This signals a broader shift from experimentation toward more autonomous, production-ready AI use cases, even as readiness levels continue to vary by market.

“We’re now seeing clear returns from the AI pilots and proof-of-concepts organizations have invested in, with AI delivering measurable impact across the region. But many are not fully equipped with the skills, governance and readiness needed to scale AI to its full potential. As priorities shift toward Agentic AI, and compliance with regulation such as the EU AI Act becomes imperative, trust and scale must be built in from the start. Those who don’t, risk leaving tangible returns on the table.”

Matt Dobrodziej, President of Europe, Lenovo

Hybrid AI Now Preferred Enterprise Architecture

The research shows that real-world business and financial considerations are accelerating the shift toward hybrid AI. Factors such as data privacy, advanced security requirements, and the need to customise and optimise infrastructure are driving adoption of this model, which blends public cloud, private cloud, and on-premises compute. Nearly three out of five (58%) organisations now prefer hybrid as their primary AI deployment model.

Scalable, high-performing AI infrastructure is a critical enabler of enterprise AI success. Respondents in the region highlighted the importance of compute that is both cost- and energy-efficient. This factor ranked second overall, with many identifying it as key to moving AI from pilots into reliable production.

With AI PCs and edge endpoints central to an effective Hybrid AI strategy – running AI workloads securely and locally – deploying AI-capable devices has emerged as the top IT investment priority for 2026.

“CIOs across the region are entering a decisive phase of AI adoption where agentic AI and enterprise-scale inferencing are moving from experimentation to core business priorities,” said Dobrodziej. “To unlock real value, organisations need strong foundations, including secure, energy-efficient infrastructure, flexible hybrid architectures, and AI-capable devices and edge endpoints that bring inference closer to where data is created, and work happens. When combined with the right governance and services, this end-to-end approach enables enterprises to innovate confidently, responsibly, and at scale.” 

Lenovo recently introduced Lenovo Agentic AI, a full-lifecycle enterprise solution for creating, deploying, and managing AI agents, alongside Lenovo xIQ, a suite of AI-native platforms designed to simplify and operationalise AI across the enterprise. Built on the Lenovo Hybrid AI Advantage™, these offerings combine hybrid infrastructure, platforms, and services to address governance, integration, and performance from day one. Supported by the Lenovo AI Library of proven use cases, CIOs can reduce risk, accelerate time-to-value, and scale AI initiatives with greater confidence as they move beyond experimentation.

To further enable real-world deployment, Lenovo ThinkSystem and ThinkEdge inferencing servers help enterprises turn trained models into production-ready, low-latency AI applications across data center, cloud, and edge environments. By enabling faster, more efficient inference at scale, Lenovo helps CIOs bridge the gap between AI ambition and day-to-day business impact.

Building on this end-to-end AI foundation, Lenovo’s Smarter AI for All vision is focused on bringing AI to more people and businesses at scale, from enterprise infrastructure to AI PCs that deliver intelligent, personalised experiences directly to users. As outlined at Lenovo Tech World at CES 2026, Lenovo is advancing this vision across its AI PC and smartphone portfolio, with Lenovo and Motorola Qira representing one example of how personal AI can enhance productivity by understanding context across devices and helping users get things done.

Learn more about how enterprises can accelerate AI adoption with the right infrastructure, governance, and partnerships: explore the full 2026 CIO Playbook report.

About the CIO Playbook Study

This is the third year of surveying CIOs in Europe and the Middle East, with Lenovo commissioning IDC, which conducted the research between 16th September 2025 and 17th October 2025. This year’s report draws on insights from 800 IT and business decision makers in Europe and the Middle East. Industries represented include BFSI, Retail, Manufacturing, Telco/CSP, Healthcare, Government, Education and others.

About Lenovo

Lenovo is a US$69 billion revenue global technology powerhouse, ranked #196 in the Fortune Global 500, and serving millions of customers every day in 180 markets. Focused on a bold vision to deliver Smarter Technology for All, Lenovo has built on its success as the world’s largest PC company with a full-stack portfolio of AI-enabled, AI-ready, and AI-optimized devices (PCs, workstations, smartphones, tablets), infrastructure (server, storage, edge, high performance computing and software defined infrastructure), software, solutions, and services. Lenovo’s continued investment in world-changing innovation is building a more equitable, trustworthy, and smarter future for everyone, everywhere. Lenovo is listed on the Hong Kong stock exchange under Lenovo Group Limited (HKSE: 992) (ADR: LNVGY). To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.

  • Data & AI
  • Digital Strategy

Christina Mertens, vice president of business development, EMEA, at VIRTUS Data Centres on designing next gen digital infrastructure

Europe’s digital infrastructure is entering a new phase of development. For more than a decade, growth was concentrated in a small number of metropolitan hubs. This was where connectivity, enterprise demand and financial services created natural centres of gravity for data centres. Cities such as London, Frankfurt, Amsterdam and Paris (FLAP markets) became the backbone of Europe’s cloud and colocation landscape.

That model is now under pressure. Computing power is surging in ways that surpass forecasts made even two years ago. AI training and inference, high performance computing (HPC), analytics and modernised public services all require significant and sustained energy and cooling capacity. McKinsey suggests that global demand for data centre capacity could more than triple by 2030. It’s clear Europe needs more digital infrastructure. However, it needs that infrastructure in places with the headroom and regulatory clarity to support long term expansion. And this is why what are referred to as second-tier locations are becoming critical to expanding Europe’s digital architecture.

In practical terms, second-tier locations are not secondary in importance. They are cities and regional areas outside the most constrained metropolitan centres, where there is greater headroom for power, land and long-term infrastructure planning. Across Europe, this includes parts of regional Germany and Italy, Iberia, the Nordics and areas of the UK outside of London. These locations are now playing a central role in how Europe expands its digital capacity.

Why the Digital Infrastructure Shift is Happening

The primary driver is power. Data centres require sustained, predictable electrical capacity over long periods, particularly as AI workloads increase baseline demand. In dense urban centres, electricity networks are often operating close to their limits, and upgrading them is complex, costly and slow. New substations are difficult to site, transmission upgrades can take many years, and competition for capacity from other sectors is intensifying.

Land availability compounds this challenge. Modern data centres are no longer single buildings inserted into existing industrial estates. They are increasingly campus-based developments, designed to accommodate multiple facilities, on-site substations and future expansion. Securing sites of that scale within major cities is difficult and expensive, and often incompatible with planning frameworks that prioritise mixed-use or residential development.

By contrast, regional and edge-of-city locations offer more physical space and greater flexibility. They make it possible to plan electrical infrastructure coherently from the outset, rather than retrofitting systems around urban constraints. For building services professionals, this changes the nature of both design and delivery.

Delivery Challenges in Regional Locations

While second-tier locations offer more space and flexibility, they are not without challenges. Securing grid capacity remains a critical path issue. It requires close collaboration with transmission and distribution network operators, regardless of geography. In some regions, new infrastructure or upgrades are required to support data centre demand. This can introduce complexity into delivery programmes.

Phased development is another defining characteristic. Many campuses are designed to be built out over several years, sometimes over a decade or more. Electrical and mechanical systems need to be designed and installed in a way that supports this staged approach, maintaining operational efficiency while allowing for expansion.

This places a premium on coordination between designers, contractors, operators and utilities. Clear documentation, consistent standards and long-term programme management become essential, particularly where different phases may be delivered by different teams over time.

Skills and Workforce Considerations

As data centre development spreads across a wider range of locations, skills availability becomes an important consideration. High-voltage electrical expertise, experience with resilient power systems and familiarity with data centre standards are already in demand, and that demand is unlikely to ease.

In regional locations where specialist labour pools may be smaller, there is increased focus on training, apprenticeships and long-term workforce development. From an operator and developer perspective, the ability of contractors and consultants to provide consistent quality across multiple phases is particularly valued on campus-scale projects.

This creates opportunities for building services firms that invest in people and develop repeatable delivery capability. Long-term relationships can be built where teams understand an operator’s standards and are involved across successive phases of development.

The Influence of AI and Higher-Density Workloads

AI is accelerating many of these trends. Training and inference workloads place sustained loads on electrical and cooling systems, increasing the importance of reliability and predictable performance. This reinforces the need for robust primary infrastructure and careful long-term planning.

Second-tier locations make it easier to accommodate these requirements because they allow for comprehensive system design at scale. Space for substations, cooling plant and future expansion can be planned into the site from the beginning, rather than being constrained by surrounding development.

From a building services perspective, this does not necessarily mean radically new technologies, but it does increase the importance of integration, resilience and accurate demand forecasting.

Why this Matters for the Built Environment Sector

The shift toward second-tier locations represents more than a geographical redistribution of data centres. It reflects a broader change in how digital infrastructure is planned, designed and delivered. Larger sites, longer programmes and greater emphasis on early-stage coordination place building services and electrical design at the centre of successful delivery.

For the built environment sector, this creates sustained opportunities across design, construction and operation. Campus developments require ongoing engagement rather than one-off interventions, and they rely on teams that can think beyond individual buildings to system-level performance over time.

Looking Ahead…

So, it’s clear that Europe’s digital infrastructure is becoming more distributed, and that trend is unlikely to reverse. Power constraints, planning pressures and rising digital demand all point toward continued development beyond traditional metropolitan hubs.

Second-tier locations are not a temporary solution. They are becoming a permanent and essential part of Europe’s digital landscape. For building services professionals, understanding how to design and deliver infrastructure at this scale, and over these time horizons, will be increasingly important.

As the next phase of development unfolds, success will depend on careful planning, strong collaboration and a clear understanding of how electrical and mechanical systems underpin the resilience and performance of Europe’s digital future.

Learn more at virtusdatacentres.com

  • Data & AI
  • Digital Strategy

Dan Nichols, Chief Technology Officer at virtualDCS, on why cloud resilience in the financial services sector hinges on shared accountability and an assume-breach philosophy

A powerful catalyst for transformation, the cloud is reshaping how organisations compete in the financial services sector. Beyond significant cost savings and flexibility, leaders are eager to unlock the potential of AI-driven insights, intelligent automation, and real-time business modelling. And, in a space governed so strictly by data sovereignty and privacy policies, the cloud’s ability to localise, encrypt, and control data has made it a key enabler of compliance and customer confidence.

But as threats become more frequent and sophisticated – with attackers now targeting shared platforms and partner supply chains – organisations can no longer rely on their own defences alone. For true digital resilience, shared accountability, collective readiness, and clear governance across every cloud touchpoint are equally non-negotiable.

All Eyes on the Money

The industry sits at a valuable intersection of data, technology, and finance, a combination that makes it uniquely attractive to attackers. It holds some of the world’s most sensitive data, directly underpins the flow of global capital, and operates through deeply complex and interconnected systems, with every integration increasing the risk of exposure. Ultimately, the attack motivation is as simple and relentless as it is in most sectors: monetary gain. Cybercriminals target institutions precisely because of the value at stake and the speed at which disruption translates to loss.

How the Threat Landscape is Evolving

Ransomware groups may see insurers and payment providers as high-yield targets. They understand even seconds of downtime can cause multi-million pound losses. Under pressure to protect customer trust and avoid regulatory penalties, some firms may choose to pay in order to restore their service quickly. Paying sets a dangerous precedent: it encourages repeat targeting and paves the way for damage to spread even further. Yet it remains a common response tactic among many.

At the same time, the rise of supply chain and third-party attacks has made it possible for criminals to bypass even the most well-defended cloud environments. By exploiting shared platforms, managed service providers, and cloud-hosted applications, perpetrators can move laterally across multiple organisations at once, amplifying both the reach and impact of their attacks. In other words, infiltrating one vendor’s weakness can cripple an entire network in one carefully coordinated strike. And, since some firms may overlook the cloud’s shared responsibility model – presuming end-to-end security sits solely with their cloud provider – multiple blind spots can inevitably emerge, creating easy openings to exploit.

In an environment where boundaries blur and dependencies multiply, traditional perimeter-based defences are no longer enough. Hybrid and multi-cloud infrastructures demand continuous visibility, faster detection, and coordinated response across every partner and provider. The goal is not simply to prevent breaches, but to withstand and recover from them collectively. It’s about recognising that in today’s ecosystem, no financial institution is secure in isolation.

Inside the Ransomware Economy

Evolving beyond the scattergun attacks of the past, ransomware now operates as a professionalised, profit-driven ecosystem, where malicious actors collaborate, trade intelligence, and lease attack tools much like legitimate software vendors. The rise of ransomware-as-a-service (RaaS) has even lowered the barrier to entry, giving less skilled affiliates access to ready-made payloads and automated encryption kits in exchange for a percentage of the ransom.

What makes it especially destructive is the precision and psychology behind the attacks. Rather than randomly striking, attackers conduct weeks of reconnaissance – learning behaviours, studying employee hierarchies, and identifying systems most critical to operations. They often infiltrate through phishing emails or compromised credentials, quietly moving laterally through the network to gain elevated access. Once embedded, they disable defences, exfiltrate sensitive data, and target backup repositories before finally encrypting production systems.

At that point, the goal shifts from technical control to financial coercion. Victims are locked out of their systems and presented with a ransom note demanding payment, sometimes in cryptocurrency, in exchange for a decryption key. Increasingly, the threat includes public exposure of stolen data – a tactic designed to pressure leadership into paying to protect their reputation and customer trust. Even when ransoms are paid, recovery is rarely clean: data may be incomplete, corrupted, or resold on the dark web, and repeat targeting is common once an organisation is identified as a payer.

It’s this blend of stealth, strategy, and human manipulation that makes ransomware so difficult to defend against. By the time the encryption begins, attackers have already spent weeks ensuring recovery options are limited. This background isn’t designed to scaremonger, but to highlight why resilience must start long before an attack ever reaches the endpoint.

The Foundations of Ransomware Resilience

Ransomware resilience isn’t achieved through a single product or policy – it’s the outcome of strategic, technical, and cultural alignment. Financial institutions, in particular, must approach it as a continuous process of readiness: anticipating compromise, containing impact, and restoring normality quickly and transparently.

Assume-Breach Philosophy

The first step is shifting from a defensive mindset to an assume-breach philosophy. In practice, this means recognising that even the most sophisticated systems can and will be breached – and building architectures and response strategies designed to limit damage when this happens. It’s a pragmatic approach, grounded in the reality that attackers are increasingly sector-agnostic. No organisation is too small or too secure to be targeted, but the financial sector remains a favourite because it offers both high disruption value and potentially significant monetary reward.

Building meaningful resilience, therefore, demands layered defence and disciplined execution. The goal is to slow attackers down at every stage – detecting them early, limiting lateral movement, and ensuring business continuity when systems are disrupted. Behavioural analytics and continuous monitoring can surface and neutralise subtle anomalies that would otherwise go unnoticed – such as phishing, spear phishing, and malware, with email still the number one entry point for ransomware.

Zero Trust & MFA

Meanwhile, zero trust policies and multi-factor authentication methods add a second layer of protection, blocking unauthorised access even if credentials are compromised.

When incidents do occur, a well-practised response framework ensures action is fast and coordinated, minimising disruption across critical systems, with the ability to switch to secure replica environments to keep operations running while remediation takes place. Secure, immutable, air-gapped backups underpin it all, providing a safety net that guarantees recovery can begin from a clean and uncompromised state.

Human readiness is equally critical. Technology can contain an attack, but only people can recover from one effectively. Regular simulation exercises, incident rehearsals, and cybersecurity awareness training help teams respond calmly and cohesively, transforming response from reactive to instinctive. This operational maturity is reinforced by strong governance. Frameworks such as DORA, NIST, and ISO 27001 provide the structure to align technical teams, compliance leads, and executive decision-makers around shared resilience goals. When combined with skilled practitioners and clear accountability, they embed security into ‘business as usual’ – moving resilience from a strategy to a sustained organisational capability.

Why Multi-Layered Backup is Critical

When ransomware strikes, the speed and integrity of data recovery determine whether disruption lasts minutes or days – and whether the impact cascades through wider global markets. Backup is the last and most decisive line of defence when every other control fails, and it’s also fundamental to customer trust and compliance. Yet too often, backup is treated as a static safeguard rather than a dynamic resilience layer.

Since modern ransomware often seeks out and encrypts traditional backups first, a single backup copy or centralised repository is no longer sufficient. True resilience today depends on a multi-layered approach – combining offsite or cloud-diverse storage, immutable data copies that cannot be altered or deleted, and isolated environments to protect against lateral movement.

How frequently these backups are tested is equally important. Too often, financial institutions only discover weaknesses when recovery is already underway, at which point strategies can’t be magically strengthened, and it becomes a race against the clock to minimise downtime and reputational fallout. Regular, automated recovery testing changes that dynamic. It not only confirms that files can be restored, but provides verifiable assurance that systems come back online in the correct order, data dependencies remain intact, and teams have the muscle memory to act quickly and confidently when the worst happens.
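As a minimal sketch of what such automated verification can look like in practice, the snippet below compares each backup replica’s digest against the primary snapshot, so a silently corrupted or tampered copy is flagged before a real recovery depends on it. The file layout and function names are hypothetical, not drawn from any specific backup product:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large snapshots don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copies(primary: Path, replicas: list[Path]) -> dict[str, bool]:
    """Report, per replica, whether its contents match the primary snapshot."""
    expected = sha256_of(primary)
    return {str(r): sha256_of(r) == expected for r in replicas}
```

A scheduled job running checks like this across offsite and immutable tiers turns backup from a static safeguard into something continuously proven recoverable.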

The Power of Shared Accountability

In a digital economy so deeply interconnected, no organisation operates in isolation. This is especially true in financial services, where supply chains and service providers form the backbone of day-to-day operations. While this interdependence is a strength in many ways, it also means resilience is no longer defined by how well a single institution can defend itself, but by how effectively every partner in its ecosystem upholds their part of the security chain.

This is where shared accountability becomes critical. It recognises that cloud providers, managed service partners, and financial institutions each have distinct but complementary roles to play in securing data, systems, and infrastructure. When accountability is clearly defined – and when partners collaborate rather than operate in silos – visibility improves, incident response accelerates, and the risk of systemic failure decreases.

Shared accountability also extends beyond contractual obligation. It’s about building a culture of collective readiness: sharing intelligence, rehearsing joint incident scenarios, and supporting smaller or less-resourced partners to raise their security baseline. The result is a unified ecosystem capable of anticipating, absorbing, and recovering from disruption together.

Looking Ahead

To view cyberattacks as inevitable might seem pessimistic to some, but it’s an unfortunate truth that no amount of investment can eliminate risk entirely. In an era where threats are growing in both scale and sophistication, readiness becomes the true differentiator – particularly in such a high-stakes sector. For financial institutions, that means embedding security into culture, strengthening connections across supply chains, and continually testing their ability to withstand and recover as a united ecosystem. Only then can resilience become a strategic advantage rather than a defensive necessity, allowing institutions to unlock the cloud’s transformative potential with confidence.

Learn more at virtualcds.co.uk

  • Artificial Intelligence in FinTech
  • Cybersecurity
  • Cybersecurity in FinTech
  • Data & AI
  • InsurTech

Ash Gawthorp, CTO and Co-founder of Ten10, on building the right foundations to shape the AI era in the UK

A recent study shows that UK businesses expect to increase their AI investment by an average of 40 percent over the next two years, following an average spend of £15.94 million this year. With investment surging, the UK is clearly in the fast lane, but the question is whether that momentum will convert into real, durable strength.

This rapid acceleration places the UK at a pivotal moment in its ambition to lead in artificial intelligence. Investment is rising, government focus is strengthening, and organisations across every sector are exploring AI at pace, creating a sense of real momentum. However, anyone who has experienced previous technology cycles will recognise the familiar tension that emerges during periods of rapid progress and optimism. Breakthroughs often attract significant attention and capital before entering a more grounded, sustainable phase.

The pressure today is not on AI as a whole. Instead, it is focused on a specific path, where belief in ever-larger transformer models delivering general intelligence continues to grow. This progress has been remarkable, but it represents only one path within a much broader AI landscape. As excitement reaches its peak, the market will inevitably stabilise. The long-term value will come through robust engineering, strong talent pipelines, and successful deployment in real-world environments.

The task now is to use this moment wisely. Long-term success depends on building deep capability at home, rather than relying on hype or outsourcing key foundations to external providers that sit outside our oversight and control.

The Limits of Scale as Strategy

A significant share of today’s investment is based on the assumption that increasing compute and model size will inevitably lead to artificial general intelligence (AGI). Transformer architectures have delivered extraordinary capability and accelerated progress in ways few predicted. They remain powerful systems for prediction and pattern recognition across language, images and other data.

However, scale is not a guarantee of general reasoning or broad intelligence. Many researchers believe that transformative progress may require developments beyond today’s dominant architecture. If that proves correct, the markets surrounding large closed models will experience a natural cooling. This would be an adjustment based on speculative expectation, not a failure of AI as a discipline. The industry would then shift toward approaches that prize clarity, modularity and measurable outcomes. Engineering discipline and architectural flexibility will matter far more than sheer size.

One Architecture Cannot Become a National Dependency

AI will continue to advance. The question for the UK is whether it builds capability that can evolve alongside that progress, or whether it locks itself to a narrow set of global platforms. A handful of model providers currently influence pricing, model behaviour and development cycles. When enterprises rely entirely on opaque APIs, they inherit changes without knowing why outputs shift, how models adapt or when pricing dynamics move. That introduces fragility that grows over time.

Some experimental use cases can tolerate opacity, but critical public services and regulated industries cannot. Lending, diagnostics, fraud detection and other high-stakes applications demand clarity over how decisions are formed and how logic stands up to scrutiny. In those environments, transparency and auditability shift from abstract ideals to essential operational requirements.

If the UK intends to embed AI deeply into essential systems, it must champion architectures that allow observability, explainability, control and replacement. Dependence on decisions made offshore is not a foundation for long-term strength.

Specialised Agents Reflect How Sustainable Systems Evolve

A practical and resilient approach to AI is already taking shape. Rather than depending on a single model to handle every task, organisations are assembling systems made up of specialised components. This mirrors the way effective teams work, where roles are defined, responsibilities are clear, and handovers are structured. One model transcribes speech, another classifies information, and a third retrieves or summarises content. Each performs a focused function that can be observed, validated and improved.

This modular design makes systems easier to maintain and evolve. New components can be adopted without rewriting entire frameworks. If performance changes or drift appears, individual parts can be evaluated or replaced without widespread disruption. This reflects long-standing engineering principles that value clarity, observability and the ability to substitute components when better options emerge.
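A toy sketch makes the modular pattern concrete: each stage is a narrow, observable component that can be evaluated or swapped independently, and the pipeline records a trace of every handover. The stage implementations below are simple stand-ins (a real system would call a transcription model, a classifier, and so on), and all names are illustrative:

```python
from typing import Callable

# A stage is any function from text to text; real stages would wrap model calls.
Stage = Callable[[str], str]

def make_pipeline(stages: dict[str, Stage]):
    """Compose named stages into a runnable pipeline with a per-stage trace."""
    def run(payload: str) -> tuple[str, list[str]]:
        trace = []  # simple observability: record each component's output
        for name, stage in stages.items():
            payload = stage(payload)
            trace.append(f"{name}: {payload}")
        return payload, trace
    return run

# Stand-in stages; any one can be replaced without touching the others.
pipeline = make_pipeline({
    "transcribe": lambda audio_ref: f"text<{audio_ref}>",
    "classify":   lambda text: f"{text}|label=billing",
    "summarise":  lambda text: text.split("|")[0].upper(),
})
result, trace = pipeline("call-0042")
```

Because the contract between stages is explicit, drift in one component shows up in its slice of the trace, and upgrading (say) the classifier is a one-line change rather than a framework rewrite.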

Financial efficiency supports this approach as well. Running powerful frontier models for every interaction introduces cost and latency that scale quickly. Task-specific agents can often deliver the same outcome faster and more economically. Across thousands of interactions, the savings and performance gains become significant.

Engineering as the Anchor of Trustworthy AI

As AI becomes embedded in real systems, success relies on foundational engineering practices. Observability, continuous testing, performance monitoring and controlled deployment are essential. These are not new concepts created for AI, but long-established techniques that have been adapted to a new class of technology.

In early exploratory phases, it can be tempting to treat large models as something separate from traditional software systems. However, the moment AI begins to influence real decisions, the fundamentals return. Enterprises must be able to trace behaviour, explain recommendations and ensure consistent reliability, while regulators expect clarity and boards seek evidence-based decisions around technology choices, cost structures and risk.
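One concrete form this monitoring can take is comparing the distribution of a model’s outputs against a known-good baseline, so drift is detected from behaviour rather than from inspecting the model itself. The sketch below uses total variation distance between label distributions; the metric choice and the 0.2 threshold are illustrative assumptions, not a standard:

```python
from collections import Counter

def label_distribution(labels: list[str]) -> dict[str, float]:
    """Turn a batch of model output labels into relative frequencies."""
    n = len(labels)
    return {label: count / n for label, count in Counter(labels).items()}

def drift_score(baseline: dict[str, float], current: dict[str, float]) -> float:
    """Total variation distance between two distributions (0 = identical, 1 = disjoint)."""
    keys = set(baseline) | set(current)
    return 0.5 * sum(abs(baseline.get(k, 0.0) - current.get(k, 0.0)) for k in keys)

def drift_detected(baseline, current, threshold: float = 0.2) -> bool:
    return drift_score(baseline, current) > threshold
```

Run daily against production outputs, a check like this gives boards and regulators the traceable, evidence-based signal the paragraph above describes, without requiring any visibility into model internals.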

Organisations that approach AI as engineered infrastructure, rather than a mysterious capability, will be far better equipped to scale safely and confidently.

Building Skills that Make Capability Real

The UK is fortunate to have strong research institutions, a sophisticated regulatory mindset and a robust software talent base. To convert these strengths into durable national advantage, investment in skills must expand beyond narrow data expertise. Data scientists remain crucial, but sustainable AI delivery depends equally on software engineers, cloud specialists, machine learning specialists, testers, governance experts and operational teams who run systems at scale.

Leading organisations recognise that AI delivery is a multidisciplinary effort. As architectures become more modular, value will flow from those who can integrate, monitor and guide AI systems responsibly. The UK must ensure that thousands of professionals have access to this training and experience. Real leadership emerges when capability is widely shared, not concentrated in a small group.

Governance that Accelerates Innovation

Strong governance does not slow innovation. It accelerates meaningful adoption by building confidence. When organisations can demonstrate transparency, control and reliability, AI can extend into more critical functions.

For national strategy, this becomes a competitive advantage. Industries that manage financial and clinical outcomes are not resistant to technology. They simply require evidence that systems behave consistently and transparently. If the UK excels in building AI that is observable, testable and replaceable, trust will grow and adoption will move faster.

Shaping a Resilient AI Future

Every technology cycle begins with excitement and eventually settles into maturity. Those who succeed through this transition are the ones who invest in capability while enthusiasm is high. When the current market resets, leadership will belong to those with engineering depth, system agility, responsible governance and the skills to integrate specialised intelligence across complex environments.

The UK has an opportunity to define this standard. Strength will come from transparency, interoperability and the ability to adapt to model and architecture changes without disruption. It is a quieter strategy than making declarations about imminent artificial general intelligence, yet it builds the resilience required to lead over the long term.

The future will reward systems that can evolve, remain auditable and operate securely at scale. With the right foundation, the UK can shape this era of AI not through scale alone, but through excellence in engineering, governance and talent. That foundation is the true measure of AI power, and now is the moment to build it.

Learn more at ten10.com

  • Data & AI
  • Digital Strategy

Katja Hakoneva, Product Manager at Tuxera, on delivering tomorrow’s data storage security today

Smart meters are no longer just data endpoints. They’re intelligent, connected nodes embedded into the national infrastructure. As energy networks undergo rapid digital transformation, the focus has largely been on secure communications and real-time data transmission. But beneath the surface lies local data storage, which is often a critical blind spot.

Smart meters store large volumes of sensitive data – from energy usage profiles to firmware logs and grid event histories – on embedded memory. If this information is accessed, altered, or deleted, it can trigger billing inaccuracies, regulatory breaches, and customer mistrust. With meters expected to operate in the field for up to 20 years, data-at-rest security is a critical requirement.

Storage Vulnerabilities: The Silent Cyber Threat

These embedded systems face multifaceted risks. Attackers may gain access to stored data by physically tampering with a meter or exploiting software vulnerabilities that bypass weak authentication. Malicious actors could manipulate logs to alter billing records, mislead consumption analytics, or mask larger cyberattacks on grid infrastructure.

In many cases, such intrusions go undetected until tangible damage occurs, such as lost revenue or reputational fallout. With increasing dependence on smart infrastructure, utilities can no longer afford to treat embedded storage as a passive component.

Counting the Real Costs of Cybersecurity

Securing smart meters comes with technical requirements as well as operational and resourcing demands. For many UK manufacturers and utilities, managing cybersecurity internally means building and retaining specialist teams, often requiring three to five full-time professionals to handle vulnerability monitoring, patch management, and threat response throughout the year.

Aligning with regulatory frameworks frequently demands hardware upgrades to handle stronger encryption and secure configurations, impacting Bill of Materials (BOM) costs and development timelines. Many existing software stacks require optimisation to support modern security protocols within resource-constrained devices. These efforts are necessary, with a single undetected cyberattack costing companies an average of $8,851 (≈£6,900) per minute, and the consequences extending beyond financial loss to potential regulatory fines and service disruptions.

The CRA and the new Era of Cyber Regulation

The Cyber Resilience Act (CRA), set to come into force across the EU by 2027, will reshape how connected devices are designed, developed, and supported. For UK-based vendors serving the European market, or collaborating with EU counterparts, compliance with CRA is becoming a strategic imperative.

Key CRA requirements include:

  • Security by design: Devices must be secure from the outset, not retrofitted post-deployment.
  • No known vulnerabilities at market launch: Products must undergo security validation prior to release.
  • Default secure configurations: Devices should avoid insecure settings out of the box.
  • Lifecycle management: Vendors must support patching and vulnerability resolution throughout the device’s operational lifespan.

For smart meters, which often run in the field for two decades or more, the CRA introduces accountability that extends well beyond product launch. Compliance with the CRA will become part of the CE marking process, meaning global manufacturers must align if they wish to sell into the EU energy market.

Engineering Security: Confidentiality, Integrity, and Authenticity

Designing resilient smart meters starts with three pillars:

  • Confidentiality protects sensitive user data from unauthorised access. This includes encrypting both data and encryption keys, restricting user access levels, and securing communication channels.
  • Integrity ensures stored data remains unaltered and trustworthy. Power failures, for instance, can corrupt memory. Using flash-optimised file systems and secure boot processes can prevent such vulnerabilities.
  • Authenticity confirms that firmware and data updates come from trusted sources. Techniques like digital signatures and update validation prevent attackers from injecting malicious code into meters.

Together, these pillars enable smart meters to meet regulatory expectations while protecting both users and grid operations.
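To illustrate the authenticity pillar, the sketch below tags a firmware image with a keyed digest and verifies it before installation, so a tampered image or a wrong key is rejected. This is a stdlib-only sketch of the principle: a production meter would typically use asymmetric signatures (e.g. ECDSA or Ed25519) so the signing key never leaves the vendor, and the function names here are hypothetical:

```python
import hashlib
import hmac

def sign_firmware(image: bytes, device_key: bytes) -> bytes:
    """Tag a firmware image with an HMAC-SHA256 over its full contents."""
    return hmac.new(device_key, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes, device_key: bytes) -> bool:
    """Accept the image only if the tag matches; compare in constant time."""
    expected = sign_firmware(image, device_key)
    return hmac.compare_digest(expected, tag)
```

A meter that refuses to boot or apply any image failing this check closes off the malicious-update path the authenticity pillar is meant to block.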

Future-proofing Data Storage

Cybersecurity for smart meters is not just a feature; it requires organisational readiness. Frameworks like the CRA, NIST, and IEC 62443 emphasise secure processes, documentation, and people alongside secure products.

For companies looking to prepare, it is smart to start with common pillars such as maintaining up-to-date Software Bills of Materials (SBOMs), conducting regular supply chain and risk assessments, keeping detailed test reports, and establishing clear incident response plans. Internally, training staff on cybersecurity best practices, setting clear data retention policies, and defining access controls and responsibilities are critical steps to ensure cybersecurity is embedded within the culture of the organisation. This approach ensures security is not a one-off compliance task but a sustainable practice that protects smart infrastructure long-term.

Smart meters deployed today could still be operating in the 2040s. This timeline intersects with the anticipated emergence of quantum computing, which may break today’s encryption standards. Though post-quantum cryptography is still evolving, vendors must prepare now to ensure systems remain secure in a post-quantum world. Smart meter software should be designed with cryptographic agility to allow it to adapt and upgrade algorithms as threats evolve.
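One common way to build in the cryptographic agility described above is to store an algorithm identifier alongside each protected record and resolve it through a registry, so a future (for example, post-quantum) algorithm can be registered without changing stored data formats or verification logic. A minimal sketch, with illustrative names and hash-based integrity standing in for full signature schemes:

```python
import hashlib
from typing import Callable

# Registry keyed by an identifier stored with each record; adding a new
# algorithm later is one entry here, with no change to protect()/check().
ALGORITHMS: dict[str, Callable[[bytes], str]] = {
    "sha256":   lambda data: hashlib.sha256(data).hexdigest(),
    "sha3-512": lambda data: hashlib.sha3_512(data).hexdigest(),
}

def protect(record: bytes, alg: str = "sha256") -> tuple[str, str]:
    """Return (algorithm id, digest) to be stored alongside the record."""
    return alg, ALGORITHMS[alg](record)

def check(record: bytes, alg: str, digest: str) -> bool:
    """Verify a record using whichever algorithm it was protected with."""
    return ALGORITHMS[alg](record) == digest
```

Because the algorithm travels with the data, a fleet deployed today can migrate record by record to a stronger algorithm in the 2030s without a disruptive flag-day upgrade.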

Lessons from Long-Term Deployment

Smart meters are designed for longevity, but memory wear remains a primary failure point. Meters that lack flash-aware storage systems face early data loss, increasing the cost of maintenance, replacements, and warranty claims.

Utilities and OEMs that embed file systems capable of wear levelling, garbage collection, and secure boot processes have extended meter lifespans by more than 50%, even in challenging conditions. One example showed meters surviving over 15,000 power interruptions without any data loss.
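One flash-aware technique behind results like these is never updating a record in place: write the new value to a temporary file, flush it to the medium, then atomically rename it over the old record, so a power cut leaves either the old reading or the new one, never a torn mix. A minimal sketch (the file name and record format are illustrative):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write data so a power failure leaves either the old or the new file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the data onto the storage medium
        os.replace(tmp, path)     # atomic rename: old record or new, never both
    except BaseException:
        os.unlink(tmp)
        raise

register = os.path.join(tempfile.gettempdir(), "meter_register.dat")
atomic_write(register, b"kwh=10432;ts=1718000000")
with open(register, "rb") as f:
    assert f.read() == b"kwh=10432;ts=1718000000"
```

On real meter firmware the same idea lives inside the file system itself, via journalling or copy-on-write, rather than at this API level.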

Integrating secure storage delivers operational and commercial benefits: it ensures compliance with the CRA and other evolving global frameworks, reduces maintenance and warranty costs, minimises carbon impact through fewer replacements, enhances brand credibility and trust with procurement teams, and strengthens the business case for longer-term contracts and partnerships. As the smart energy market matures, these benefits are becoming differentiators, especially as digital infrastructure grows in complexity.

Delivering Tomorrow’s Data Storage Security Today

The next generation of smart infrastructure will be fast and connected, as well as secure, resilient, and regulation-ready. For vendors and utilities alike, embedding data protection deep into the meter architecture is a business-critical move.

By preparing for the CRA today, smart meter manufacturers will position themselves as forward-thinking, trustworthy partners in tomorrow’s energy ecosystem, delivering technology that’s not only built to last but built to protect today and tomorrow.

Learn more at tuxera.com

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Michael Ault, Country Manager at integrated payments specialists myPOS, offers strategic advice for SMEs looking to scale through digital transformation and diversification

Scaling a small business is one of the most rewarding, yet complex journeys for any entrepreneur. While growth brings opportunities for greater reach, higher revenue, and stronger market presence, it also demands foresight, discipline, and the ability to manage risk strategically. Securely integrating new technology is the main obstacle for 47% of SMEs, even though 76% of these businesses intend to expand their IT investment. This underscores a key point of tension, as many businesses want to grow through digital transformation but struggle to do so securely and sustainably.

The business landscape continues to evolve with changing customer expectations, technology, and economic conditions. For UK SMEs, the key to long-term success lies not only in achieving growth but also in building resilience. Sustainable scaling comes down to three principles: embracing technology pragmatically, diversifying intelligently, and investing in people and partnerships that strengthen resilience.

Leveraging Digital Transformation

Digital transformation is the foundation of business growth, especially for small businesses. Cloud-based solutions, automation, and data analytics help to streamline operations, reduce inefficiencies, and create better customer experiences. However, transformation must be purposeful, not performative.

The smartest approach is to scale technology investment incrementally, integrating flexible, modular systems that evolve with business needs. This approach not only lowers risk but also helps ensure digital maturity evolves over time. When SMEs use modular, cloud-based technology, operations run more smoothly and changes can be effectively analysed. Ultimately, resilience is not built through one-time upgrades but through a culture of continuous digital evolution.

Diversifying Revenue Streams

Depending on a single product, service, or market leaves a business vulnerable to sudden changes in demand. Diversification, when guided by customer insight and data, can turn volatility into opportunity. Expanding into online sales, introducing subscription models, or targeting fresh customer segments can make income streams much more stable and sustainable.

At myPOS, we know that even simple changes based on data, such as adding additional payment options or tapping into cross-border e-commerce, can help cash flow and protect against market shocks. The goal of technology is to mitigate specific challenges without adding layers of complexity.

Investing in Employee Development

Your people are pivotal to your ability to grow as a business; empowered teams are the engine of sustainable scale. A team that feels supported and motivated will bring fresh ideas, adapt to challenges, and push the business forward. Investing in training, mentoring, and development opportunities builds skills that pay back in the form of innovation and improved performance.

In fast-changing industries, having employees who are confident in learning and adapting can make the difference between struggling through disruption and taking advantage of it. Equally, strong partnerships extend this resilience beyond the organisation. Building resilience at the team level creates resilience for the whole business, so fostering a culture of continuous learning and celebrating employee contributions is key to maintaining motivation.

Focusing on Financial Health and Flexibility

Financial resilience underpins sustainable growth. Scaling often requires upfront investment, and without healthy cash flow or reserves, opportunities can be lost. Monitoring income and expenses closely, cutting unnecessary costs, and preparing for seasonal fluctuations gives businesses more control.

Having flexible financing options, like credit lines, small business loans, or even crowdfunding, provides a level of agility. Instead of being caught off guard by unexpected challenges, businesses with financial flexibility are positioned to respond quickly and strategically.

Financial management software can make it easier to track performance, spot issues early, and forecast future needs. When you can see your finances in real time, you can make proactive, data-driven decisions instead of waiting for problems to happen. In markets that change quickly, this kind of financial management helps small firms plan with confidence, stay flexible, and establish a stronger base for long-term growth.

Prioritising Customer Relationships and Feedback

Your customers are not just buyers; they are advocates, sources of insight, and the foundation of repeat business and brand loyalty. Businesses that scale successfully often place customer relationships at the heart of their strategy by actively gathering feedback, responding quickly to issues, and personalising interactions, which shows customers they are valued.

This loyalty becomes a form of resilience. In periods of uncertainty, a base of satisfied, returning customers provides more stability than constantly chasing new ones. Successful businesses use CRM tools to track customer preferences and automate follow-ups so no opportunity to strengthen a relationship is missed.

Building Strategic Partnerships

Partnerships can accelerate growth while also spreading risk. Working with other businesses, organisations, or influencers can provide access to new audiences, shared expertise, or additional resources. Collaboration can also create opportunities for joint marketing, co-branded initiatives, or innovative product and service offerings.

In times of uncertainty, strong partnerships act as a support network. By aligning with others who share your values and vision, you create opportunities that are mutually beneficial and more resilient than going it alone. It is important to find partners whose goals and audiences complement your own for the best long-term impact.

The next stage of small business success will be defined by resilience rather than speed: the ability to adapt, recover, and continue to create value in the face of uncertainty. For SMEs, this means developing adaptable growth plans that include flexible technology, diverse revenue models, and empowered employees.

Learn more at mypos.com

  • Data & AI
  • Digital Payments
  • Digital Strategy
  • Fintech & Insurtech

Fawad Qureshi, Global Field CTO, Snowflake, on realising possibilities for innovation in this new AI era

Without cloud migration, businesses face the end of innovation. In this new AI era, businesses operating within the closed architectures of legacy systems do not have the flexible, data-driven foundation to engage with these new technologies and ensure a strong pipeline of necessary innovation. And as AI continues to evolve, those not able to keep pace with innovation risk being left behind. 

Cloud migrations are the foundation to modernise and drive business growth over the long term. When organisations migrate to a cloud-based environment, it’s crucial to focus on the tangible business value a migration will deliver, rather than simply shifting from one system to another. Moving a company’s customer-facing applications and all of their data to a cloud-based environment delivers benefits that are increasingly real and measurable.

Migration isn’t just a Plug and Play approach – Which migration fits your needs?

There are two approaches to cloud migration, broadly speaking: horizontal and vertical, each with their own benefits and potential challenges. A vertical approach sees organisations migrating applications one by one: this approach is a good choice if certain systems have to be prioritised, or if the applications being migrated do not have many interdependencies. Vertical migration allows for focused efforts and risk management on individual systems, and requires fewer resources. Horizontal migration moves entire system layers at the same time. This is the best solution when businesses have tight deadlines to retire legacy systems, or if their systems are tightly integrated. Horizontal migrations tend to be faster by allowing for parallel work streams, but they require more technical expertise. 

Organisations often adopt a mixture of the two approaches, for example, horizontally migrating important systems such as data platforms, while taking a vertical approach to customer-facing applications. Whatever approach an organisation takes, it’s vital that the migration also includes a culture shift, preparing employees to adapt to new, consumption-based models and the possibilities of the new technology. Migration is also just the start of the journey, unlocking the potential of AI-driven use cases and seamless data collaboration, including new ways to achieve business value. 

Before diving straight in, ensure it’s with a Data-First Mindset

When migrating to the cloud, a data-first approach is essential. For those acting as the catalyst for change, whether that be IT managers or even CIOs, data must be front of mind before planning any successful migration. Understanding how data is used within the organisation, including its structure, governance needs, and how it delivers value and business outcomes, is imperative. This applies doubly when it comes to large, complex systems with many interconnected applications. 

Before migrating, businesses must comprehensively assess their current ecosystem. It’s imperative that the end-to-end business product survives the migration intact. Organisations should maintain internal control over core competencies around data, such as business process knowledge, data governance and change management. These areas include institutional knowledge that external parties may not grasp. Businesses should also maintain direct oversight over compliance requirements and risk management. 

Technical activities such as cloud infrastructure optimisation, performance testing, and specialised migration tooling can, by contrast, be handled by external expertise. Code conversion can also benefit from purpose-built tools that use technologies including AI. Technical parts of the migration tend to evolve rapidly and require specialist knowledge, so are ripe for outsourcing. While doing so, those steering the migration need to ensure clear governance around outsourced activities, including regular knowledge transfer sessions. 

Different parts of the business all have a role to play: IT and engineering lead on technical implementation, handling the technical side of business requirements, while finance will identify ROI opportunities and manage cloud costs. It helps to create a cross-functional steering committee with representation from every department to ensure that different areas of the business are aligned and ready to address challenges. 

Adaptability and Flexibility are the key to business longevity 

Migration is never one-size-fits-all, and business leaders should be prepared to be flexible and adapt. There are multiple kinds of horizontal migration, from a simple ‘lift and shift’ focused on moving systems as they are to a ‘move and improve’ where migration is followed by optimisation to reduce technical debt. They should be ready to adapt at their own pace, choosing data platforms which offer agnostic architecture and the freedom to choose between data models and tools to ensure minimal disruption.

Flexibility is also important in choosing the tools used for migrations. Flexible data platforms will offer the support businesses need to deal with collaboration and governance frameworks. For businesses operating in EMEA, where different countries can have varying policies, pay close attention to issues around data quality, security and compliance, particularly when it comes to data sovereignty and issues around European data residency. 

A Shared Destiny

The shift to the cloud fundamentally changes security. The traditional cloud ‘shared responsibility’ model clearly demarcated duties between the provider and the customer. However, a more advanced approach is emerging: the ‘shared destiny’ model. This model recognises that in the event of a breach, reputational damage affects both parties. This shared risk incentivises the cloud provider to be a more proactive partner, actively helping customers strengthen their security posture rather than simply managing their own side of the demarcation line.

As ‘destinies’ intertwine, vulnerabilities such as those created by weak passwords become easier to eliminate. Put simply, in a ‘shared responsibility’ model, the cloud provider is only responsible for securing infrastructure, while the customer remains responsible for securing data and apps in the cloud, as well as for configuration. In a ‘shared destiny’ model, the cloud provider plays a more proactive role to ensure that their customers have the best possible security posture. 

Taking a ‘shared destiny’ approach allows businesses to be more proactive in securing data, using approaches such as multi-factor authentication, secure programmatic access and more comprehensive cloud monitoring services. Choosing a modern, AI-driven data platform offers the best security foundations here, offering security controls across cloud service providers and the entire data ecosystem. 

A Pathway to Growth

In today’s world, the bigger risk is standing still. Nothing changes if nothing changes.

If organisations are holding back on innovation due to technological limitations, then the time to migrate is clear. There is no need to face an end to possibilities when the path towards success lies in reach, offering an opportunity to bring businesses up to date with modern requirements, and pave the way for the adoption of technologies such as AI. 

However, as we’ve seen, it’s not just a case of plug and play. Organisations must ensure a flexible, data-driven approach to migration, while keeping security front of mind via a ‘shared destiny’ approach. To deliver this, the right choice of a modern, flexible data platform will ensure the whole organisation can work together effectively and deliver a path to future innovation and growth. 

Learn more at snowflake.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy

AI can transform businesses, but is it also opening the door to cyber risks? Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.

But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.

The AI Boom

AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.

Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.

However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks.

So, it’s no surprise that ANS research found that data privacy is the top concern for businesses when adopting AI. As these threats evolve, businesses must treat AI not just as an enabler, but also as a potential vector for attack.

The Governance Gap

While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges. 

Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.

The Need for Responsible AI Adoption

To build resilience while embracing AI, businesses need a dual approach: 

1. Prioritise AI-specific training across the workforce

Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.

But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.

A well-trained workforce is the first and most crucial line of defence.

2. Adopt open-source AI responsibly

Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.

The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.

To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.

Securing the Future of AI

AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.

Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.

By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.  

AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.

Learn more at ans.co.uk

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Joe Logan, CIO at iManage, on the need to avoid the hype, manage cybersecurity, focus on ROI and balance change management to get the best results with AI

Across the enterprise, AI promises transformational power – however, it’s not as simple as just plugging it into the organisation and instantly reaping the benefits. What are some of the top things CIOs need to focus on to avoid any pitfalls, unlock its value, and best position themselves for success with AI? 

1) Separate the Hype from Reality

Here’s what hype looks like: using AI to “radically transform the way you do business” or to “accelerate comprehensive digital transformation” or – heaven forbid – to “completely change our industry.” These are big statements – and absolutely dripping with hype.

Getting real with AI requires identifying specific use cases within the organisation where a particular type of AI can be deployed to achieve a specific goal. For example, maybe you want to reduce customer churn by 20% and have identified an opportunity to use chatbots powered by large language models to provide more effective customer service. That’s what reality looks like.

In separating the hype from reality, organisations gain the added benefit of clearing up any misconceptions – at any level of the organisation – about what AI can and can’t do, thus performing an important “level set” around expectations.

2) Understand the Implications for Cybersecurity

On one side, any AI tool you’re using has access to data, and that means that access needs to be controlled like any other system within your tech stack. The data needs to be secured and governed, and issues around privacy, sovereignty, and any other regulatory requirements need to be thoroughly addressed.

As part of this effort, organisations also need to be aware of the security measures required to protect the AI model itself from bad actors trying to manipulate that model. For example: prompt injection – inputs that prompt the model to perform unintended actions – can affect the model and its responses if not carefully guarded against.

Securing your AI system is one side of the coin; the other side is understanding how to apply AI to cybersecurity. There are a growing number of use cases here where AI can help identify risks or vulnerabilities by analysing large amounts of data, helping organisations to prioritise the areas they need to focus on for risk mitigation. 

In summary? While any usage of AI will require you to “play defence” on the security front, it will also enable you to “play offence” more effectively. In that sense, AI has multiple implications for cybersecurity.

3) Focus on the Right Kind of ROI

When it comes to ROI for any AI investments, don’t narrowly focus on absolute numbers when it comes to metrics like time savings or cost savings. While well-suited to industrial workplaces that are churning out widgets every day, absolute numbers can be an awkward fit when applied to a knowledge work setting.

The advice here for any knowledge-centric enterprise is: Don’t get hung up on the idea of actual dollars and cents or a specific number – instead, look for a relative improvement from a baseline. So, rather than saying “We’re going to reduce our customer acquisition costs by $100,000 this year”, it’d be more appropriate to focus on reducing existing customer acquisition costs by 10%. Likewise, don’t focus on each junior associate in the organisation completing five more due diligence projects per calendar year; look to complete due diligence projects in 30% less time.
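The arithmetic behind “relative improvement from a baseline” is trivial, but writing it down keeps teams honest about what they are measuring. A sketch with purely illustrative numbers:

```python
# Express AI ROI as a relative improvement from a measured baseline rather
# than an absolute dollar target. All figures below are hypothetical.
def relative_improvement(baseline: float, current: float) -> float:
    return (baseline - current) / baseline

# Customer acquisition cost: aim for a 10% reduction, not "$100,000 saved".
assert round(relative_improvement(500.0, 450.0), 2) == 0.10
# Due diligence cycle time: 30% less time, not "five more projects".
assert round(relative_improvement(40.0, 28.0), 2) == 0.30
```

The baseline must be measured before the AI rollout; a relative target with no baseline is just hype in a different costume.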

4) Give Change Management its due

Change management has always mattered when it comes to introducing new technology into the enterprise. AI is no different: Successful adoption requires a focus on people, process, and technology – with a particular emphasis on those first two items.

A major challenge is educating the workforce with an eye towards improving their AI literacy – essentially, enabling them to understand what’s possible and how they can apply AI to their daily workflows. 

Know that a centralised model of control that dictates “this is how you can experiment with AI” is probably going to be ineffective. It will be too stifling for innovative individuals in the organisation. Far better to provide centres of excellence or educational resources to those people who are most inclined to take the initiative and move forward with AI experiments in their team or department. 

One caveat here: It’s essential to have guardrails in place as teams and individuals experiment with AI, to prevent misuse of the technology. That’s the tightrope that CIOs need to walk when introducing AI into the organisation: striking the right balance between “total control” and “freedom to explore, but with appropriate oversight and guardrails”. 

The Future of AI Depends on what CIOs do next

The promise of AI is massive, but only if CIOs adopting the technology focus on the right areas. And that means filtering out the hype, keeping security implications top of mind, redefining ROI, and guiding change with a steady hand. By paying attention to these areas, CIOs can safely navigate a path forward with AI. And ensure that it isn’t just a technology with promise and potential, but one that delivers actual enterprise-wide impact.

Learn more at iManage

  • Cybersecurity
  • Data & AI
  • Digital Strategy

Vertiv expects powering up for AI, Digital Twins and Adaptive Liquid Cooling to shape future Data Centre Design and Operations

Data Centre innovation is continuing to be shaped by macro forces and technology trends related to AI, according to a report from Vertiv, a global leader in critical digital infrastructure. The Vertiv™ Frontiers report, which draws on expertise from across the organisation, details the technology trends driving current and future innovation, from powering up for AI, to digital twins, to adaptive liquid cooling.

“The data centre industry is continuing to rapidly evolve how it designs, builds, operates and services data centres, in response to the density and speed of deployment demands of AI factories,” said Vertiv chief product and technology officer, Scott Armul. “We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation. On-site energy generation and digital twin technology are also expected to help to advance the scale and speed of AI adoption.”

The Vertiv Frontiers report builds on and expands Vertiv’s previous annual Data Centre Trends predictions. The report identifies macro forces driving data centre innovation:

  • Extreme densification – accelerated by AI and HPC workloads
  • Gigawatt scaling at speed – data centres are now being deployed rapidly and at unprecedented scale
  • Data centre as a unit of compute – the AI era requires facilities to be built and operated as a single system
  • Silicon diversification – data centre infrastructure must adapt to an increasing range of chips and compute

The report details how these macro forces have in turn shaped five key trends impacting specific areas of the data centre landscape.

1.         Powering up for AI

Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which includes three to four conversion stages and some inefficiencies. This existing approach is under strain as power densities increase, largely driven by AI workloads. The shift to higher voltage DC architectures enables significant reductions in current, size of conductors, and number of conversion stages while centralising power conversion at the room level. Hybrid AC and DC systems are pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation, and microgrids, will also drive adoption of higher voltage DC.
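The efficiency argument for fewer conversion stages compounds multiplicatively: end-to-end efficiency is the product of the per-stage efficiencies, so removing a stage improves the whole chain. With purely illustrative per-stage figures (not taken from the report):

```python
# Overall power-chain efficiency is the product of its stage efficiencies.
# The per-stage numbers below are illustrative assumptions, not Vertiv data.
from math import prod

legacy_ac = [0.97, 0.96, 0.95, 0.96]  # four conversion stages, grid to rack
hv_dc = [0.98, 0.975]                 # fewer stages at higher voltage DC

legacy_eff = prod(legacy_ac)
dc_eff = prod(hv_dc)
print(f"legacy chain: {legacy_eff:.1%}, higher voltage DC: {dc_eff:.1%}")
assert dc_eff > legacy_eff  # fewer stages, less compounded loss
```

At gigawatt scale, even a few percentage points of compounded loss translate into megawatts of wasted power and heat.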

2.          Distributed AI

The billions of dollars invested into AI data centres to support large language models (LLMs) to date have been aimed at supporting widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses but how, and from where, those inference services are delivered will depend on the specific requirements and conditions of the organisation. While this will impact businesses of all types, highly regulated industries, such as finance, defence, and healthcare, may need to maintain private or hybrid AI environments via on-premise data centres, due to data residency, security, or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable capacity through new builds or retrofitting of existing facilities.

3.          Energy autonomy accelerates

Short-term on-site energy generation capacity has been essential for most standalone data centres for decades, to support resiliency. However, widespread power availability challenges are creating conditions to adopt extended energy autonomy, especially for AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, does have several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as Bring Your Own Power (and Cooling) are likely to be part of ongoing energy autonomy plans.

4.          Digital twin-driven design and operations

With increasingly dense AI workloads and more powerful GPUs comes a demand to deploy these complex AI factories with speed. Using AI-based tools, data centres can be mapped and specified virtually, via digital twins, and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important to efficiently achieving the gigawatt-scale buildouts required for future AI advancements.

5.          Adaptive, resilient liquid cooling

AI workloads and infrastructure have accelerated the adoption of liquid cooling. But conversely, AI can also be used to further refine and optimise liquid cooling solutions. Liquid cooling has become mission-critical for a growing number of operators but AI could provide ways to further enhance its capabilities. AI, in conjunction with additional monitoring and control systems, has the potential to make liquid cooling systems smarter and even more robust by predicting potential failures and effectively managing fluid and components. This trend should lead to increasing reliability and uptime for high value hardware and associated data/workloads.

Vertiv does business in more than 130 countries, delivering critical digital infrastructure solutions to data centres, communication networks, and commercial and industrial facilities worldwide. The company’s comprehensive portfolio spans power management, thermal management, and IT infrastructure solutions and services – from the cloud to the network edge. This integrated approach enables continuous operations, optimal performance, and scalable growth for customers navigating an increasingly complex digital landscape.

Find out more at Vertiv.com.


Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly; it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit (GPU)-heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
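The pattern of access tied to operational state can be sketched in a few lines. The names and in-memory store below are hypothetical, not any vendor's access-control product; the point is that a maintenance window grants a privilege that lapses automatically:

```python
import time

class AccessController:
    """Toy model of state-driven, expiring access grants."""

    def __init__(self):
        self.grants = {}  # user -> (zone, expiry timestamp)

    def open_maintenance_window(self, user, zone, duration_s):
        # Grant access only for the duration of the maintenance window.
        self.grants[user] = (zone, time.monotonic() + duration_s)

    def may_enter(self, user, zone):
        grant = self.grants.get(user)
        if grant is None:
            return False
        granted_zone, expiry = grant
        if time.monotonic() > expiry:
            del self.grants[user]  # privilege expires after the window
            return False
        return granted_zone == zone

ctl = AccessController()
ctl.open_maintenance_window("engineer-7", "hall-B", duration_s=0.05)
print(ctl.may_enter("engineer-7", "hall-B"))   # True during the window
time.sleep(0.1)
print(ctl.may_enter("engineer-7", "hall-B"))   # False once it has expired
```

A real deployment would back this with badge readers, audit logs and correlation against system telemetry, as described above, but the expiry logic is the core of the idea.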

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com



CoreX, a high-growth Elite Consulting and Implementation Partner of ServiceNow and NewSpring Holdings platform company, has announced the successful completion of its acquisition of InSource’s ServiceNow business unit. InSource is a fellow Elite Partner recognised for deep delivery expertise and an unwavering commitment to client success. The transaction officially closed in late December 2025.

This agreement unites two high-performing ServiceNow partners in the ecosystem. Together, CoreX and InSource now operate as a single, purpose-built organisation designed to scale with intent, elevate enterprise transformation outcomes, and meet the accelerating demand for AI-enabled, end-to-end ServiceNow solutions worldwide.

InSource integration into CoreX delivering value for ServiceNow customers

With InSource’s 1,500+ successful implementations and a 4.76 CSAT rating, the combined organisation, more than doubling its US-based employee headcount, now operates at a level of scale and technical depth that firmly positions CoreX among the top-tier Consulting and Implementation Partners in the global ServiceNow ecosystem. The acquisition doubles the firm’s ServiceNow certifications and brings together advanced platform specialisation and a people-first culture grounded in long-term client success.

“This is not growth for growth’s sake, but rather a strategic, deliberate move of scale,” said Rick Wright, Head of CoreX. “By fully integrating InSource into CoreX, we have created a focused consultancy built for scale, execution, and long-term value for ServiceNow customers.”

Reflecting on the integration, Mark Lafond, former President & CEO of InSource, added, “InSource was built on delivery strength, trust, and long-term client relationships. Joining forces with CoreX allows us to take everything we do best and amplify it on a much larger stage. This is the right home for our people, the right platform for our customers, and the right partner to accelerate the next chapter of growth.”

By unifying CoreX’s innovation roadmap and AI readiness with InSource’s long-standing operational delivery excellence, the combined organisation now offers a truly integrated model for enterprise transformation across industries. This integration enables clients to move faster from strategy to execution while maintaining the governance, resilience, and scalability required for modern enterprises.

Just as importantly, the acquisition strengthens CoreX’s geographic footprint and delivery capacity across key global delivery hubs, including North America and Latin America, enabling the firm to serve enterprise clients with greater speed, continuity, and depth.

“Our acquisition of InSource fundamentally changes the scale of impact we can deliver for customers,” Wright added. “CoreX is now purpose-built to lead the next era of ServiceNow-powered transformation.”

A Unified Approach to Enterprise Transformation

The acquisition significantly enhances CoreX’s capabilities across Strategic Portfolio Management (SPM), IT Asset Management (ITAM), IT Operations Management (ITOM), Integrated Risk Management, Operational Technology integration, and AI-ready enterprise architecture. The combined strengths allow CoreX to solve more complex, mission-critical challenges across industries, including manufacturing, healthcare, financial services, and the public sector.

With this transaction, CoreX is now among the top global ServiceNow Elite Partners, distinguished not just by certifications or scale, but by consistent delivery of measurable, enterprise-level outcomes on the ServiceNow AI Platform.

About CoreX

Founded in 2023, CoreX is a global ServiceNow consultancy specialising in business-focused transformation that unlocks hidden value from the Now Platform. Backed by unmatched industry leadership, extensive functional experience, and the most seasoned ServiceNow team in the ecosystem, CoreX delivers strategic guidance and AI-enabled innovation to power sustained success. Learn more at corexcorp.com

About NewSpring Holdings

NewSpring Holdings, NewSpring’s majority investment strategy, focused on control buyouts and sector-specific platform builds, brings a wealth of knowledge, experience, and resources to take profitable, growing companies to the next level through acquisitions and proven organic methodologies. Founded in 1999, NewSpring partners with the innovators, makers, and operators of high-performing companies in dynamic industries to catalyze new growth and seize compelling opportunities. Having completed over 250 investments, the Firm manages approximately $3.5 billion across five distinct strategies covering the spectrum from growth equity and control buyouts to mezzanine debt. Partnering with management teams to help develop their businesses into market leaders, NewSpring identifies opportunities and builds relationships using its network of industry leaders and influencers across a wide array of operational areas and industries.


Jan Van Hoecke, VP AI Services at iManage and a highly experienced computer scientist with a passion for technology and problem-solving, on navigating the AI landscape for success in 2026

The AI landscape faces a number of big shifts in 2026. Agentic AI will undergo a reality check as enterprises discover the gap between marketing hype and actual capabilities, while organisations will go through a mindset change from treating AI hallucinations as crises to managing them, acknowledging the inherent limitations of the technology. There will also be a shift in how data will be structured in AI systems, to help the move from just finding facts (“what”) to understanding reasons (“why”).  Middleware application providers will face new challenges, as those vendors controlling both platforms and data will become more influential. Finally, standardised AI chat interfaces will evolve into smarter, dynamically generated, task-specific user experiences that adapt to immediate needs.  

Agentic AI Reality Check  

2026 is the year when agentic AI will get a reality check, as the gap between the marketing promises made in 2025 and actual capabilities becomes starkly visible. As enterprise adopters share the mixed successes of agentic AI, the market will begin to differentiate between true autonomous agents and clever workflow wrappers.

Currently, many products promoted as AI agents are, in reality, rigidly programmed systems that simply follow predefined paths. They cannot independently plan or adapt in real-time to accomplish tasks. The current evolution of AI agents closely resembles the development of autonomous vehicles: early self-driving cars could only maintain lane position by relying strictly on preset instructions, and likewise, today’s AI agents are limited to executing narrowly defined tasks within established workflows. True autonomy, where AI agents can dynamically perform and solve complex problems better than humans and without human intervention, remains, for now, an aspirational goal.

AI Hallucination Goes from Crisis to Management

In 2026, the AI hallucination crisis will reach a critical juncture as organisations realise they must learn to coexist with a fundamentally imperfect technology – until a new technology comes into play that can effectively address the issue. The focus will shift from treating AI hallucination as a ‘crisis’ to managing it.

As the industry deliberates who carries the liability for AI’s mistakes and inaccuracies – the tool makers or the users – enterprises will stop waiting for vendors to solve the problem and take matters into their own hands. They will adopt a variety of pragmatic risk mitigation strategies – from double and triple-checking work, and enforcing human oversight for high-stakes decisions, to taking hallucination insurance policies.
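One of those pragmatic defences, enforcing human oversight for high-stakes decisions, can be sketched as a simple routing rule. This is illustrative only: the confidence score is assumed to come from some upstream model or verifier, and real confidence estimates are not reliably calibrated:

```python
def route_ai_output(answer, confidence, high_stakes, threshold=0.9):
    """Decide whether an AI answer ships directly or goes to human review.

    A basic hallucination defence: anything high-stakes or low-confidence
    is escalated to a person rather than released automatically.
    """
    if high_stakes or confidence < threshold:
        return ("human_review", answer)
    return ("auto_release", answer)

print(route_ai_output("Refund approved", 0.97, high_stakes=False))        # routed to auto_release
print(route_ai_output("Contract clause 4 is void", 0.97, high_stakes=True))  # routed to human_review
```

Double- and triple-checking, as described above, is the same gate applied twice: the output of one reviewer (human or model) becomes the input to the next before anything is released.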

Major model builders acknowledge that current foundational LLM technology cannot eliminate hallucinations and ambiguity through incremental improvements alone. New technology is needed. Until then, and perhaps with the realisation that a technological breakthrough is years away, users will start driving the hallucination conversation – both by building systematic defences within how they use AI, and by forcing vendors to accept shared responsibility through better documentation and clearer model limitations.

The Next Evolution in AI Data Architecture Lies in a Shift from “What” to “Why”

There will be a fundamental shift in how data is structured for AI systems, driven by the limitations of current approaches in answering complex questions. While Retrieval Augmented Generation (RAG) has proven effective at locating information and answering “what” questions, it struggles with the deeper “why” and “how” inquiries.

This limitation stems from RAG’s flat-file architecture, which excels at locating information but fails to capture the complex interconnections and relationships that underpin meaningful understanding and knowledge, especially in specialised domains like legal and professional services information.

The solution lies in AI-driven autonomous structuring of data. These systems will be better placed (than humans) to reveal critical relationships across multiple data points at scale, also highlighting the contextual dependencies essential for answering the “why” and “how” questions effectively.
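The difference between flat retrieval and relationship-aware structure can be seen in a toy example. The entities and relations below are invented for illustration: a flat index could find each fact individually, but answering "why" means chaining the edges between them:

```python
# Edges read: subject --relation--> object (an acyclic toy graph)
edges = [
    ("clause-12", "references", "regulation-X"),
    ("regulation-X", "amended_by", "regulation-Y"),
    ("regulation-Y", "motivated_by", "data-breach-ruling"),
]

def explain(start, edges):
    """Follow relationship edges outward from a node to build a 'why' chain,
    something a flat, document-at-a-time retrieval index cannot express."""
    chain, node = [], start
    while True:
        nxt = next(((r, o) for s, r, o in edges if s == node), None)
        if nxt is None:
            return chain
        chain.append((node, *nxt))
        node = nxt[1]

for step in explain("clause-12", edges):
    print(" -> ".join(step))
```

The prediction above is that building and maintaining such relationship structures will increasingly be done by AI systems themselves, at a scale and consistency human curators cannot match.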

Consequently, in 2026, with machines taking the lead, the method of structuring data will undergo a complete transformation, gradually eliminating the human role in creating structure, to reveal the business-critical interconnections across multiple data points.

Middleware AI Apps Squeeze

Given the essential link between data and AI, middleware companies that specialise in building custom applications layered on top of data platforms will begin to get pushed to the margins, forced to compete on niche features – while the core value of data and insight is captured by the platform owners. The true leaders will be those organisations that both own and manage their data, while also offering an AI-powered interface that enables users to interact with their data securely and efficiently, fully leveraging the capabilities of modern AI technology.

Shift to AI-generated, Task-Oriented User Interfaces

In 2026, today’s standard, vendor-designed, chat-based AI user interfaces will transition to dynamically AI-generated, task-specific user interfaces that adapt to users’ immediate needs. This represents a fundamental shift from standardised software – for example, where everyone uses identical Microsoft Word or SharePoint interfaces – to personalised, short-lived user interfaces that exist only as long as the user requires them for a specific task.

This transformation will also address a critical pain point for users – the crushing cognitive load of navigating bloated, feature-rich software. Instead of searching through endless menus in an overstuffed application like Excel, the user will simply state their goal – “Compare the Q3 and Q4 sales figures for our top 5 products and show me a chart” – and the AI will instantly generate a temporary, purpose-built interface – a “micro-app” – designed solely for that single task.

In the context of dynamically generated user interfaces, both data storage and the creation of bespoke interfaces will be managed by AI. The AI organisations that will truly lead in providing such bespoke user interface-generating capability are those that possess and control their own data.

About iManage

iManage is dedicated to Making Knowledge Work™. Our cloud-native platform is at the centre of the knowledge economy, enabling every organisation to work more productively, collaboratively, and securely. Built on more than 20 years of industry experience, iManage helps leading organisations manage documents and emails more efficiently, protect vital information assets, and leverage knowledge to drive better business outcomes. As your strategic business partner, we employ our award-winning AI-enabled technology, an extensive partner ecosystem, and a customer-centric approach to provide support and guidance you can trust to make knowledge work for you. iManage is relied on by more than one million professionals at 4,000 organisations around the world.

Learn more at imanage.com


Interface issue 68 is live featuring Microsoft, Virgin Media O2, CIBC Caribbean, Telkom, Zoom, ServiceNow, Snowflake and more

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Driving Business Transformation Through Cloud & AI

Microsoft’s Shruti Harish, Head of Solution Engineering for Cloud and AI Platforms across the tech giant’s Manufacturing and Mobility vertical, talks to Interface about how to achieve successful AI implementations augmented by Cloud. Our future-focused fireside chat covered everything from driving value through cloud modernisation to responsible AI.

“Leaders should align AI initiatives with clear business outcomes and foster a culture that embraces change. The focus is shifting toward AI-operated, human-led models where intelligent agents handle tasks and humans guide strategy.”

Virgin Media O2: Democratising Data as a Cultural Movement

Mauro Flores, EVP for Data Democratisation at Virgin Media O2, talks to Interface about the leading telco’s data journey and how it is supporting colleagues to innovate faster, make smarter decisions and deliver brilliant customer experiences.

“Data-driven insights are essential. They’re helping power our decisions like optimising our network performance, anticipating outages before they happen, identifying and preventing fraud, personalising offers and pricing to build customer loyalty, and forecasting demand so we invest in the right things.”

CIBC Caribbean: Shaping the future of Banking in the Caribbean

Deputy CIO Trevor Wood explains how CIBC Caribbean is blending technology, culture, and customer-centricity to deliver seamless digital experiences across the region with a ‘Future Faster’ strategy.

“We want to lead in every market we operate, build maturity across our practices and be architects of a smarter financial future for all.”

And read on for deep AI insights from ANS’s CIO on why AI isn’t just for big business, Emergn’s CTO on how your business can get AI-ready and Kore.ai’s Chief Strategy Officer on taming AI-sprawl with governance-first platforms.

We also hear from Celonis, Snowflake, ServiceNow, Make and Zoom with their tech predictions for 2026 and chart the key dates for your diary with global networking opportunities at the latest tech events and conferences across the globe.

Click here to read the latest edition!


Santo Orlando, Practice Director – App, Data and AI Services at Insight, on how your organisation can level up with Agentic AI

By now, most of us have heard of Generative AI. Many businesses have already adopted the technology for tasks like customer service, code generation and content creation. Generative AI, however, is only the start; we’re only scratching the surface of the potential that AI has to offer.

Enter Agentic AI

Unlike Generative AI, which relies on human input and prompts, Agentic AI can act autonomously to fulfil complex tasks without human intervention. As a result, nearly 45% of business leaders think Agentic AI will outpace Generative AI in terms of impact, and more than 90% expect to adopt it even faster than they did Generative AI. However, despite its promise, our collective understanding of Agentic AI – and how to implement it – is still very much in its infancy.

So, where do you start? To kickstart your Agentic AI journey, here are five fundamental steps to consider.

Generative AI vs Agentic AI

If Generative AI is like having a personal assistant, supporting you one-on-one to speed up your tasks, then Agentic AI is more like having a dedicated team of smart, individual coworkers who can take initiative and get things done across your business – without needing constant oversight. 

One powerful example of this in action is in sales. With Agentic AI, organisations are able to receive real-time insights during discovery calls. The AI ‘agents’ allow sales reps to respond with timely, relevant information, helping them build trust, operate faster and close deals more effectively. 

By collecting and analysing data from across teams, agents can uncover patterns, translate complex metrics into actionable strategies and even highlight opportunities that might otherwise be unintentionally overlooked. In some early implementations, sales teams have reported saving five to ten hours per rep each month – adding up to thousands of hours redirected toward deeper customer engagement.

The one-to-one relationship we’ve grown accustomed to with Generative AI has evolved into the one-to-many dynamic of Agentic AI, which is capable of handling tasks for multiple users and automating entire business processes. Even more impressively, agents can make decisions, control data and take actions on their own – a capability that can seem daunting without a clear understanding of how it works.

That’s why businesses need to start small. Here are a few practical steps to get going quickly and wisely with Agentic AI.

Step 1: Getting your data ready

Agentic AI is the logical progression for organisations already exploring generative tools. However, the data needs to be in an optimal condition – clean, organised and secure – before autonomous agents can be deployed effectively.

As such, eliminating redundant, outdated and trivial (ROT) data is vital. Without removing ROT, agents may rely on obsolete information, leading to inaccurate or misleading outputs. For example, this could happen if a company deploys an HR chatbot that’s connected to outdated data sources. If an employee were to ask about their 2025 benefits, the chatbot might pull information from as far back as 2017, resulting in confusion and misinformation.

Proper file labelling, standardised document practices and the use of version histories in place of multiple saved copies help to ensure agents access only the most relevant and accurate information.
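A minimal sketch of that idea, with invented document metadata: before wiring an agent to a repository, filter out superseded or stale sources so only current material is retrievable, avoiding the 2017-benefits scenario described above:

```python
from datetime import date

# Hypothetical repository metadata for illustration
documents = [
    {"name": "benefits_2017.pdf", "updated": date(2017, 3, 1), "superseded": True},
    {"name": "benefits_2025.pdf", "updated": date(2025, 1, 15), "superseded": False},
    {"name": "old_policy_draft_v3.docx", "updated": date(2019, 6, 2), "superseded": True},
]

def current_sources(docs, max_age_years=2, today=date(2025, 6, 1)):
    """Keep only documents an agent should be allowed to read:
    not superseded, and updated within the freshness window."""
    cutoff = today.replace(year=today.year - max_age_years)
    return [d["name"] for d in docs if not d["superseded"] and d["updated"] >= cutoff]

print(current_sources(documents))  # → ['benefits_2025.pdf']
```

In practice the metadata would come from a document management system rather than a hand-built list, but the gate belongs in the same place: between the corpus and the agent.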

Step 2: Start with low-risk cases 

Agents work on a transactional basis, charging for each operation, which can quickly add up. As such, it’s wise to experiment with simple, low-stakes applications first. This approach allows for quicker deployment and demonstrates immediate value to the business without significant costs or risks.

One example could be using an agent to assess sentiment in social media responses following a product launch. This can offer real-time feedback on public perception and inform messaging strategies. Other low-risk use cases include generating reactive press releases and monitoring competitor websites. Additionally, prioritising automation of routine tasks, especially those involving platforms like Salesforce, SharePoint, or Microsoft 365, allows teams to maximise impact without costly system overhauls. 
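The sentiment use case can be prototyped crudely before any agent platform is involved. The word lists and posts below are illustrative; a real deployment would use a proper sentiment model, but the shape of the task is the same:

```python
# Tiny illustrative lexicons, not a production sentiment model
POSITIVE = {"love", "great", "amazing", "fast"}
NEGATIVE = {"broken", "slow", "refund", "disappointed"}

def sentiment(post):
    """Crude lexicon-based score: positive word hits minus negative hits."""
    words = set(post.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

posts = [
    "Love the new controller, setup was fast",
    "Mine arrived broken and support is slow",
    "It works",
]
scores = [sentiment(p) for p in posts]
print(scores)  # → [2, -2, 0]
```

Even a throwaway prototype like this helps a team agree on what the agent should measure and report before paying per-operation fees to run it at scale.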

Overall, organisations need to be willing to fail fast and expect failure. It won’t be perfect from the start. However, an experimental pilot approach helps to efficiently refine AI agents, reducing the risk of costly mistakes and making sure that only effective solutions are scaled up.

Step 3: Create a single source of truth

Establishing a dedicated, cross-functional team to explore agentic AI use cases helps prevent siloed adoption and supports enterprise-wide visibility. This team should span as much of the organisation as possible and include representatives from departments such as marketing, finance and technical solutions.

Collaborative workshops can then act as a forum to identify key processes that would benefit from autonomous capabilities and help businesses align potential applications with specific departmental objectives and broader business goals.

Step 4: Learn, learn and learn

Many companies underestimated the importance of training and governance with Generative AI – and Agentic AI is no different. Organisations need to establish clear governance to define how AI agents should and shouldn’t be used, covering not just technical implications, but HR, compliance and risk concerns as well.

Equally, businesses and those employed must understand Agentic AI’s full functionality to get the most out of it. Like with almost all technical training, AI education cannot be viewed as a one-time ‘tick-box’ exercise. Ongoing learning is necessary to keep pace with new capabilities and best practices.

For example, consider what’s already emerging, like security agents that automate high-volume threat protection and identity management tasks; sales agents that find leads, reach out to customers and set up meetings; and reasoning agents that transform vast amounts of data into strategic business insights.   

Step 5: Reviewing ROI

Enthusiasm around Agentic AI is high. But before organisations dive in headfirst, it’s important they first define success. Technology can’t be the solution if there is uncertainty surrounding the goal. Successful deployment requires a clear definition of the problem organisations are looking to solve and knowledge of how to align the solution with measurable business value. Without this, initiatives risk stalling at the experimental stage.

Key performance indicators should also be identified early. These may include increased productivity, time savings, cost reduction or improved decision-making. Establishing these benchmarks and taking a data-driven approach ensures that AI initiatives align with business goals and demonstrate tangible benefits to stakeholders.
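Those benchmarks become concrete once expressed as a calculation. Using the earlier time-savings figures purely as hypothetical inputs, a pilot's monthly return might be framed as:

```python
def monthly_roi(hours_saved_per_rep, reps, hourly_cost, agent_cost):
    """Net monthly value of an agent pilot: labour time recovered minus fees."""
    return hours_saved_per_rep * reps * hourly_cost - agent_cost

# e.g. 8 hours/rep/month across 25 reps at £40/hour, against £3,000 in agent usage fees
print(monthly_roi(8, 25, 40, 3000))  # → 5000
```

Agreeing on the inputs to a formula like this before the pilot starts is what turns "enthusiasm" into a measurable benchmark stakeholders can hold the initiative to.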

Moving forward

Switching to Agentic AI is about changing how businesses handle everyday problems, with wide-ranging effects, not just about using cutting-edge technology. Deliberate, measured adoption, with iteration and learning along the way, is the key to increasing value. It’s simple: success with AI starts with small, straightforward actions and use cases.

Learn more at insight.com


Kyle Hill, CTO of leading digital transformation company and Microsoft Services Partner of the Year 2025, ANS, explores how businesses of all sizes can make the most of their AI investment and maintain a competitive edge in an era of innovation

Across the world, businesses are clamouring to adopt the latest AI technologies, and they’re willing to invest significantly. According to Gartner, generative AI has driven a significant increase in infrastructure spending from organisations over recent months, prompting it to add approximately $63 billion to its January 2024 IT spending forecast.

Capable of reshaping business operations, facilitating supply-chain efficiency, and revolutionising the customer experience, it’s no wonder major enterprises are keen to channel their budgets towards AI. But the benefits of AI can extend beyond large enterprises and make a considerable difference to small businesses too if adopted responsibly. 

Game-Changing Innovation 

Most SMBs don’t have the same capacity for spending risks as their larger counterparts, so they need to be confident that any investments they do make are worthwhile. It’s therefore understandable why some might assume AI to be an elite tool reserved for the major players.

To understand how SMBs can make the most of their AI investments, it’s important to first look at what the technology can offer. 

Across industries, AI is promising to be a game changer, taking day-to-day operations to a new level of accuracy and efficiency. AI technology can enhance businesses of all sizes by:

Enhancing customer experience

Businesses can use AI tools to process and analyse vast amounts of data – from spending habits and frequent buys to the length of time spent looking at a specific product. They can then use these insights to provide a more tailored experience via personalised recommendations, unique suggestions and substitution offers when a product is out of stock. And, with AI chat functions, businesses can provide more timely responses to any questions or requests, without always needing an abundance of customer service staff on hand. 

Powering day-to-day procedures

One of the most common and inclusive uses of AI across organisations is assisting and automating everyday tasks, including data input, coding support and content generation. These tools, such as OpenAI’s ChatGPT and Microsoft Copilot applications, don’t require big investments to adopt. Smaller teams and businesses are already using them to save valuable employee time and boost productivity. This also removes the need to outsource capabilities these organisations might not otherwise have in-house.

Minimising waste

AI is also helping businesses to drive profit, minimise wasted resources, and identify potential disruptions. By tracking levels of supply and demand, AI can automatically identify challenges such as stock shortages, delivery-route disruptions, or heightened demand for a particular product. More impressively, these tools can also suggest solutions to such problems – from the fastest delivery route that avoids traffic, to diverting stock to a new warehouse. Such planning and preparation help businesses to avoid disruptions that cost valuable time, money, and resources.
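The supply-and-demand tracking described above boils down to comparing stock on hand with projected need. The following is an illustrative sketch only – the product names, demand figures, and lead-time assumption are hypothetical, not drawn from any specific vendor’s tool.

```python
# Minimal sketch: flag products whose current stock won't cover
# projected demand over a restocking lead time.

def flag_shortages(inventory, daily_demand, lead_time_days=3):
    """Return (product, shortfall) pairs for items at risk of running out."""
    at_risk = []
    for product, stock in inventory.items():
        projected_need = daily_demand.get(product, 0) * lead_time_days
        if stock < projected_need:
            at_risk.append((product, projected_need - stock))
    return at_risk

inventory = {"widget": 40, "gadget": 200}
daily_demand = {"widget": 20, "gadget": 15}
print(flag_shortages(inventory, daily_demand))  # [('widget', 20)]
```

Commercial systems layer forecasting models and live logistics data on top of this, but the underlying check – projected demand versus available supply – is what lets the software raise the alarm before a shortage happens.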

According to Forbes Advisor, 56% of businesses are already using AI for customer service, and 47% for digital personal assistance. If organisations want to keep up with their cutting-edge competitors, AI tools are quickly becoming a must-have in their toolkit.

        For SMBs looking to stay afloat in this competitive landscape of AI innovation, getting the most out of their technological investment is crucial. 

        Laying down the foundations

Adopting AI isn’t as straightforward as ‘plug and play’, and SMBs shouldn’t underestimate the investment these tools require. While many of the applications may be easy to use, it’s important that business leaders take time to fully understand the technology and its potential uses. Otherwise, they risk missing some major benefits and not getting the most from their investment, particularly as they scale.

        Acknowledging the potential risks and challenges of implementing new AI tools can help organisations prepare solutions and ensure that their business is equipped to manage the modern technology. This can help businesses to avoid costly mistakes and hit the ground running with their innovation efforts. 

        SMB leaders looking to implement AI first need to ask the following:

        What can AI do for me? 

Are day-to-day administration tasks your biggest sticking points? Or are you looking to provide customer service like no other? Identifying how AI might be of most use for your business can help you to make the most effective investments. It’s also worth considering the tools and applications you already have, and how AI might enhance these. Many companies already use Microsoft Office, for instance, which Microsoft Copilot can slot seamlessly into, making for a much smoother rollout.

        Can my business manage its data? 

        AI is powered by data, so having sufficient data-management and storage processes in place is necessary. Before investing in AI, businesses might benefit from first looking at managed data platforms and services. This is crucial for providing the scalability, security and flexibility needed to embrace innovation in a responsible and effective way. 

        What about regulation?

        The use and development of AI are becoming increasingly regulated, with legislation such as the EU AI Act providing stringent, risk-based guidance on its adoption. Keeping up with the latest rules and legislative changes is vital. Not only will this help your business to maintain compliance, but it will also help to maintain trust with customers and employees alike, whose data might be stored and processed by AI. Reputational damage caused by a data breach is a tough blow even for big businesses, so organisations would be wise to avoid it where possible. 

        Embracing Innovation

        This new age of AI is exciting; it holds great transformative potential. We’ve already seen the development of accessible, affordable tools, such as Microsoft Copilot, opening a world of new innovative potential to businesses of all sizes. Those that don’t dip their toes in the AI pool risk getting left behind. 

        The question smaller businesses ask themselves can no longer be about whether AI is right for them; instead, it should be about how they can best access its benefits within the parameters of their budget. 

        By thoroughly preparing and taking time to understand the full process of AI adoption, SMBs can make sure that their digital transformation efforts are a success. In today’s world, this is the best way to remain fiercely competitive in a continuously evolving landscape. 

        About ANS

        ANS is a digital transformation provider and Microsoft’s UK Services Partner of the Year 2025. Headquartered in Manchester, it offers public and private cloud, security, business applications, low code, and data services to thousands of customers, from enterprise to SMB and public sector organisations. With a strong commitment to community, diversity, and inclusion, ANS aims to empower local talent and contribute to the growth of the Northwest tech ecosystem. Understanding customers’ needs is at the heart of ANS’s approach, setting them apart from any other company in the industry. 

        The ANS Academy is rated outstanding by Ofsted and offers in-house apprenticeships across a range of technology disciplines. ANS has supported more than 250 apprentices to gain qualifications in the last decade via apprenticeships across technology, commercial, finance, business administration and marketing. 

ANS owns and operates five IL3-accredited data centres in Manchester and has an ecosystem of tech partners including Microsoft (Gold Partner), AWS, VMware, Citrix, HPE, Dell, Commvault and Cisco. It is one of the very few organisations to have received all six of Microsoft’s Solutions Partner Designations.

        Find out more at ans.co.uk


        Jalal Charaf, Chief Digital & AI Officer of the University Mohammed VI Polytechnic (UM6P) and Managing Director of Ecole Centrale Casablanca on how Africa can seize its moment to lead on data

        In today’s world, data is not just about numbers and technology; it shapes how people live, how governments plan, and how businesses grow. It influences who gets a loan, who receives medical care, and who has access to education. That’s why control over data, called data sovereignty, is becoming one of the most important sources of power in the 21st century.

Unfortunately, Africa is still on the margins of this new reality. Although the continent is home to over 1.4 billion people – 18% of the world’s population – it provides less than 4% of the data used to train today’s most powerful AI systems. Most African data is stored in foreign data centres, beyond the reach of African laws and courts. This is no longer just a ‘digital divide’; it is a dependence on outside systems that don’t fully understand or represent African realities.

        What’s Holding Africa Back?

        There are several key reasons why Africa remains largely underrepresented in the global digital economy.

        First, representation. Most AI systems are built on data from outside Africa. As a result, they often misjudge or misrepresent African realities, whether it’s credit scoring, medical diagnostics, or speech recognition. The absence of African data creates blind spots that affect real lives.

        Second, infrastructure. Africa captures less than 1% of global cloud revenue and has limited data storage and processing capacity. This forces governments and businesses to rely on distant cloud providers. Outages, costs, or policy shifts in other countries can suddenly disrupt services at home.

        Third, governance. With 29 different national data protection laws, Africa lacks a unified approach to managing data. In contrast, the European Union negotiates data rules as a single bloc. Africa’s fragmented regulatory landscape makes it harder to attract investment or protect citizens’ rights.

        Momentum is Building

        Despite these challenges, there are reasons to be hopeful. Africa’s data centre market is expected to grow by 17.5% in 2025, thanks to rising digital demand and support from investors focused on environmental and social goals.

        Several major projects are already underway. Microsoft and G42 (a technology group from the UAE) are investing $1 billion in a geothermal-powered data centre in Kenya. Equinix, one of the world’s largest data infrastructure companies, plans to spend $390 million expanding into West, South, and East Africa. By the end of this year, Rwanda and Zimbabwe will join the list of countries with carrier-neutral data centres, bringing the total to 26.

        A Blueprint in Morocco

        Morocco offers a model of what digital sovereignty can look like. In June 2025, a consortium led by Nexus Core Systems announced a 500-megawatt, renewables-powered AI infrastructure project on the Atlantic coast. Phase one, with 40 MW of NVIDIA’s Blackwell AI chips, will go live in early 2026, exporting compute power across Europe, the Middle East, and Africa.

        Critically, this infrastructure is under Moroccan jurisdiction, not subject to U.S. laws like the CLOUD Act. The project proves that African countries can host cutting-edge data systems while protecting their own legal and strategic interests.

        How Africa Can Lead

        To turn early momentum into lasting sovereignty, African governments, institutions, and partners must work together across four pillars:

        • Data creation and curation. Countries should invest at least 1% of GDP in digital public infrastructure, such as national ID systems, crop mapping satellites, and open data portals. These systems ensure that African data reflects African lives.
        • Compute and storage. Regions with access to renewable energy can build local ‘green AI corridors’ linked by neutral internet exchanges. This keeps data close to where it’s generated and cuts dependence on foreign servers.
        • Policy and regulation. The African Union should lead a continent-wide Data Sovereignty Compact, a framework to harmonise data protection, localisation, and AI ethics. A unified legal environment will attract investment and support responsible innovation.
        • Talent and research. African universities and public agencies should develop homegrown AI talent. Governments can require that models trained on African data are hosted locally. Research must be rooted in African languages, priorities, and realities, not just imported standards.

        A Role for Everyone: From Governments to Global Partners

        Governments should commit at least 10% of their ICT budgets to data sovereignty and adopt AU-wide standards. Local cloud facilities and fibre infrastructure deserve long-term funding, not just short-term pilots.

        Private industry must shift from short-lived cloud credits to permanent, on-the-ground investment. Companies should publish annual data localisation reports and follow the example set by Nexus Core Systems.

        Development finance institutions (DFIs) should support 20-year infrastructure partnerships, not just one-off tech grants. According to the Global Partnership for Sustainable Development Data, every $1 invested in data systems brings $32 in economic return. That’s a smart investment.

        Universities, civil society groups, and non-profits also have a responsibility. Open data repositories, civic tech labs, and ethical data governance initiatives must be scaled up to support innovation that’s inclusive and local.

        A Strategic Opportunity: OpenAI for Countries

        OpenAI has recently launched an initiative called OpenAI for Countries, designed to help governments build local data centres, train AI systems in national languages, and support start-ups in their own ecosystems. The program is looking for ten partner countries in its first phase. This initiative aligns well with Africa’s goals for sovereign data and democratic AI development.

        Africa’s Moment to Lead on Data

Africa has everything it needs to become a global leader in digital intelligence. Its young population, growing tech talent, and renewable energy potential are powerful advantages. But sovereignty will not be handed over; it must be built.

        We must act now, before the rules of the digital world are written without us. Morocco’s Nexus Core project shows what’s possible when ambition meets action. It’s time for the rest of the continent to follow suit, and shape a future where Africa owns its data, tells its stories, and sets its own course.


        Cathal McCarthy, Chief Strategy Officer at Kore.ai, on why now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference

The generative AI boom has triggered a wave of enterprise experimentation, from proof-of-concepts to customer-facing AI Agents that can be launched at pace but too often in isolation. This comes as MIT’s latest report finds that only 5% of generative AI pilots succeed, with the majority failing due to poor integration with enterprise systems and in-house implementations built without engagement from expert vendors.

As adoption grows, so does the call for accountability, and control and centralisation are more important than ever. Siloed operations and experimental pilots have left a trail of disconnected tools, incomplete experiments and, sometimes, confusion within enterprises about where AI is being used and who is using it, meaning it can’t be governed effectively.

        Now is the time for enterprises to take stock and set themselves up for a long-term, successful future in applying AI where it can make the most difference. The state of play today shows where clear changes are needed.

        AI Islands

        In a recent report from Boston Consulting Group and Kore.ai, 80% of AI leaders say they now favour platform-based strategies over scattered deployments. These platforms are not just about efficiency; they’re quickly becoming the only viable model for visibility, scalability and governance.

        The consequences of fragmentation are starting to show. CIOs and CTOs are sounding the alarm on siloed AI solutions that make it harder to measure impact, manage risk, or move quickly. This is often the case when AI tools and solutions are implemented in-house and without proven expertise.

        These ‘AI islands’ are hard to govern, expensive to integrate and nearly impossible to scale responsibly. More than half surveyed in the report say current AI solutions are slowing them down and nearly three-quarters highlight explainability and compliance as top concerns. Clearly, connecting these AI islands together via a common platform can offer more long-term benefits such as better governance, faster time to market, and cost consolidation.

        Regulation Demands New Architecture

        Where governance could have been considered a final step by some, it now has to be a design principle from the outset. Transparency, auditability, and oversight must be built into the very fabric of how AI is developed, deployed and monitored.

        Take the EU AI Act for example, the world’s first broad AI law, now applying to general-purpose AI models from August 2nd, 2025. The rules aim to boost transparency, safety and accountability across the AI value chain while preserving innovation.

        According to the BCG report, 74% of leaders believe new regulations will significantly influence how they roll out AI across their organisations. And for good reason. Fragmented systems don’t just introduce inefficiency, they create gaps that regulators, stakeholders and customers are not ready to accept.

        For all the talk of regulation as a constraint, it’s also an opportunity. Regulations should be seen as catalysts, rather than roadblocks. Companies that ensure governance is hard-wired into their AI projects don’t just avoid risk, they create greater trust. And this means greater adoption. This is what leaders need to see, as increased adoption of AI products ensures sustainable, long-term growth.

        Enterprises in industries holding sensitive and personal data like BFSI, healthcare and retail, are already adopting a platform-based approach. Not only does this ensure integration across the business but also means it future proofs compliance, meeting industry and government regulated standards today but also building in parameters for upcoming regulations.

        Gaining Control

Adopting a platform model doesn’t limit creativity, and it doesn’t mean sacrificing flexibility. Instead of juggling multiple tools, you get one place to plug in what you’ve built and get the best of what’s out there. By running all of your AI capabilities under one unified platform and set of guardrails, teams across the organisation move forward with one framework: they move faster, make quicker decisions and have a clear understanding of what is – and isn’t – working.

        Most importantly, a platform turns compliance into a competitive and operational advantage. You can swap models, scale pilots and grow without silos tripping you up, and bring centralised control. This momentum is crucial for scaling and growing an organisation. Platforms create the foundation to scale AI responsibly and effectively and that’s key for future-proofing AI projects and creating impact that matters.



        Welcome to the latest issue of Interface magazine!

        Click here to read the latest edition!

        USDA: A Fresh Perspective on Digital Service

        This month’s cover story focuses on the digital transformation journey continuing at the United States Department of Agriculture (USDA). In conversation with Fátima Terry, USDA’s former Digital Service Deputy Director, we revisit the sterling work being carried out and find out how technology is being humanised to deliver value to the American people this organisation serves.

        “One of the things we did was partner with multiple USDA teams that focused on customer experience and digital service delivery for their programs,” she explains. “We also partnered with other federal-wide agencies and departments to move forward and evaluate the progress of digital transformation by cross-pollinating success models to everyone connected.”

        Ayoba: A Super-App for Africa

        Ayoba, part of the MTN telco group, is a super-app platform built in Africa, for Africa. Esat Belhan, Chief Technology & Product Officer, reveals how it is bringing more people to digital so they can be tech-savvy and educated on digital capabilities…

        “In order to do that, one thing you could do is give away free data, but that data could be easily wasted on another data-heavy app, like TikTok, in just a couple of hours. So, the real solution is that the valuable and insightful content Ayoba provides should be provided for free, and that we provide instant messaging and short video content, to keep people using our platform for their communication and entertainment needs.”

        Kraft Kennedy: Supporting MSPs with People and Processes

        Nett Lynch, CISO at Kraft Kennedy, explains how the company’s new division, Legion, solves cyber pain-points for MSPs with a collaborative, business-centred approach.

        “A lot of MSPs struggle with client strategy, they’re talking tech instead of business. We’re nerds – we love the tech, we love the features. But we need to admit clients aren’t focused on those things. They don’t necessarily care how or why it works. They just want it to work and align to their business goals.”

        And read on to hear from FICO’s CIO on using AI to transform technical operations; learn from KnowBe4 how AI Agents will be a game changer for tackling cybercrime; and discover how data centres are meeting the demands of the AI boom with Vertiv.

        Click here to read the latest edition!


        Interface hears from Emergn CTO Fredrik Hagstroem on approaches to AI best practice that can drive positive business transformations

What does it actually mean for an organisation to be AI-ready, beyond having the right tools and data?

        “Being AI-ready is fundamentally about openness to learning and the ability to react quickly. While having the right tools and well-managed data is essential, true readiness is defined by an organisation’s capacity to operate, monitor, and measure the effectiveness of AI solutions.

        We often see organisations invest heavily in implementation and tooling, only to realise that no one is prepared to take responsibility for running, monitoring, and improving AI systems.

AI-savvy organisations design solutions differently depending on the type of work – operational versus knowledge work – and, for knowledge work, they focus on measuring effectiveness rather than just productivity.”

        Where do most companies go wrong when trying to embed AI into their operations?

        “Many companies treat AI solutions like traditional IT projects, using user acceptance as a checkpoint between development and handover to IT operations. This approach often fails before it even begins.

AI performs tasks that typically require human intelligence: perception, reasoning, and decision-making. While AI can execute these tasks with far greater precision and consistency than humans, someone within the organisation remains ultimately accountable for the results.

        The most common misstep is underestimating the need to provide users with the right level of oversight and control so they can accept accountability for AI-driven decisions.

        For example, explaining how AI decisions are made and demonstrating that they are ethical and fair depends not only on transparency and traceability but also on maintaining control and proper training data records.”

        How can leaders prevent transformation fatigue during AI-driven change initiatives?

        “Change is inevitable, so responding to it is part of effective leadership. AI will transform how businesses operate, but transformation fatigue arises when people feel constantly subject to change rather than in control of it.

        Deliberate planning and thoughtful communication help, but the most effective approach is to empower people to feel more in control. This often involves organising teams around value streams that cut across business, technology, and operations.

        Leaders can ensure teams have the skills and information necessary to take ownership of outcomes and make adjustments based on real results. This is especially important with AI solutions, which should be structured to provide continuous feedback, allowing teams to monitor performance, improve models, and refine processes based on learning.”

        What kind of mindset and cultural shift is required for AI to deliver long-term value?

        “Delivering long-term value from AI requires a shift from control to collaboration, and from predictability to adaptability. Organisations focused on individual targets and siloed accountability often struggle to realise AI’s full potential.

        Value emerges when teams adopt a collective mindset, defining success by shared outcomes, whether customer experience, business impact, or strategic growth. Individual productivity only matters when it benefits the whole system.

        Another critical shift is embracing uncertainty. Traditional corporate cultures often reward certainty and fixed plans. Cultures that support experimentation, feedback loops, and incremental change are more likely to see lasting benefits from AI.

        This cultural evolution isn’t just about tools; it’s about how work is structured, how teams interact, and how decisions are made. Empowering teams to act fast, learn fast, and improve fast is central to sustaining AI-driven value.”

        How can organisations balance AI experimentation with maintaining trust, transparency, and alignment with business goals?

        “Each AI initiative should be evaluated based on the type of work and value it aims to deliver, whether efficiency, experience, or innovation. Different goals require different levels of oversight and distinct success metrics, making a portfolio approach to investment essential. Maintaining alignment with business goals means focusing on outcomes rather than outputs.

        This requires systems where feedback, transparency, and learning are built in from the start, allowing initiatives to fail gracefully. Trust begins with a clear governance framework, as AI, like any transformative technology, can have unintended consequences. Transparency is not just audit trails; it’s about inviting dialogue, sharing lessons learned, and adapting as standards and regulations evolve.

        Experimentation and learning go hand in hand. Delivering incremental value early builds credibility and transparency, helping teams understand what works and what doesn’t. Ultimately, AI is only valuable to the extent that it drives the business toward its strategic goals.”

        How do organisations deal with some of the risks associated with AI – hallucinations, privacy issues, etc. – and how do they go about both securing essential data and overcoming employee resistance to the technology?

        “Treating AI adoption as an iterative, feedback-driven process is key to managing risks. Success is less about getting everything perfect from the start and more about structuring work to minimise unintended consequences and adapt quickly.

        “Hallucinations” is a misleading term. Today’s AI doesn’t imagine things; it follows programmed rules based on probabilities and patterns. Like any software, AI carries risks of errors or mismanaged data.

        What is new is how AI uses data, to train models that imitate human decision-making. Without careful management, models can produce biased or unethical outcomes. Technology does not remove employee accountability. Recognising this allows organisations to design AI solutions with lower risk.

        Designing solutions with humans in the loop is critical. It promotes transparency and explainability and is the most effective way to overcome resistance while maintaining control over outcomes.”

        Find out more from Emergn


        Robert Cottrill, Technology Director at digital transformation company ANS, explores how businesses can harness the potential of AI while mitigating the growing risks to cybersecurity and privacy

        AI can transform businesses, but is it also opening the door to cybersecurity risks?

        Fuelled by competitive pressure and rising government support through the UK’s Industrial Strategy, it’s no surprise that more and more businesses are racing to adopt AI.

        But there’s a catch. The more businesses scale their AI adoption, the bigger their attack surface becomes. Without a proactive and structured approach to securing AI systems, organisations risk trading short-term efficiencies for long-term vulnerabilities.

        The AI Boom

        AI investment is skyrocketing. Businesses are deploying generative AI tools, machine learning models, and intelligent automation across nearly every function, from customer service and fraud detection to supply chain optimisation. Platforms like DeepSeek and open-source AI models are now part of the mainstream tech stack.

        Initiatives like the UK’s AI Opportunities Action Plan are fuelling experimentation and adoption. AI is now seen not just as a productivity tool, but as a critical lever for digital transformation.

However, the rapid pace of AI deployment is outpacing the development of the security frameworks required to protect it. When integrated with sensitive data or critical infrastructure, AI systems can introduce serious risks if not properly secured. These risks include data leakage through AI prompts or model training, as well as AI-generated phishing and social engineering attacks.

        So, it’s no surprise that our research found that data privacy is the top concern for businesses when adopting AI. As these threats evolve, businesses must treat AI not just as an enabler, but also as a potential vector for attack.

        The Governance Gap

        While technical threats often take centre stage, businesses also can’t forget the increasing regulatory requirements surrounding AI. 

        As AI systems become more powerful, enabling businesses to extract valuable insights from vast datasets, they also raise serious ethical and legal challenges. 

        Regulatory frameworks like the EU AI Act and GDPR aim to provide guardrails for responsible AI use. But these regulations often struggle to keep up with the rapid advancements in AI technology, leaving businesses exposed to potential breaches and misuse of personal data.

        The Need for Responsible AI Adoption with Cybersecurity

        To build resilience while embracing AI, businesses need a dual approach: 

        1. Prioritise AI-specific training across the workforce

        Cybersecurity teams are already stretched. Introducing AI into the mix raises the stakes. Organisations must prioritise upskilling their cybersecurity professionals to understand how AI can both protect and threaten systems.

        But this isn’t just a job for the security team. As AI tools become embedded in daily workflows, employees across functions must also be trained to spot risks. Whether it’s uploading sensitive data into a chatbot or blindly trusting algorithms, human error remains a major weak point.

        A well-trained workforce is the first and most crucial line of defence.

        2. Adopt open-source AI responsibly

        Another key strategy for reducing AI-related risks is the responsible adoption of open-source AI platforms. Open-source AI enhances transparency by making AI algorithms and tools available for broader scrutiny. This openness fosters collaboration and collective innovation, allowing developers and security experts worldwide to identify and address potential vulnerabilities more efficiently.

        The transparency of open-source AI demystifies AI technologies for businesses, giving them the confidence to adopt AI solutions while ensuring they stay alert about potential security flaws. When AI systems are subject to global review, organisations can tap into the expertise of a diverse and engaged tech community to build more secure, reliable AI applications.

        To adopt responsibly, businesses need to ensure that the AI they are using aligns with security best practices, complies with regulations, and is ethically sound. By using open-source AI responsibly, organisations can create more secure digital environments and strengthen trust with stakeholders.

        Securing the Future of AI

        AI is a transformative force that will redefine cybersecurity. We’re already seeing AI being used to automate threat detection and response. But it’s also powering more advanced attacks, from deepfake impersonation to large-scale automated exploits.

        Organisations that succeed will be those that embed cybersecurity into every stage of their AI journey, from innovation to implementation. That means making risk management part of the innovation conversation, not a downstream fix.

        By taking a responsible approach, investing in training, leveraging open-source AI wisely, and embedding cybersecurity into every layer of the business, organisations can unlock AI’s potential while defending against its risks.  

        AI is a double-edged sword, but with thoughtful adoption, businesses can confidently navigate the complex landscape of AI and cybersecurity.

        • Cybersecurity
        • Data & AI

        Anna Collard, SVP Content Strategy & Evangelist KnowBe4 – Africa, on leveraging AI-driven cybersecurity systems to fight cybercrime

        Artificial Intelligence is no longer just a tool. It is a game-changer in our lives and our work, as well as in both cybersecurity and cybercrime. While businesses leverage AI to enhance defences, cybercriminals are weaponising it to make their attacks more scalable and convincing.

        In 2025, research shows AI agents, or autonomous AI-driven systems capable of performing complex tasks with minimal human input, are revolutionising both cyberattacks and cybersecurity defences. While AI-powered chatbots have been around for a while, AI agents go beyond simple assistants. They function as self-learning digital operatives that plan, execute, and adapt in real time. These advancements don’t just enhance cybercriminal tactics, they may fundamentally change the cybersecurity battlefield. 

        How Cybercriminals Are Weaponising AI: The New Threat Landscape 

        AI is transforming cybercrime, making attacks more scalable, efficient, and accessible. The WEF Artificial Intelligence and Cybersecurity Report (2025) highlights how AI has democratised cyber threats, enabling attackers to automate social engineering, expand phishing campaigns, and develop AI-driven malware. Similarly, the Orange Cyberdefense Security Navigator 2025 warns of AI-powered cyber extortion, deepfake fraud, and adversarial AI techniques. And the 2025 State of Malware Report by Malwarebytes notes that while GenAI has enhanced cybercrime efficiency, it hasn’t yet introduced entirely new attack methods: attackers still rely on phishing, social engineering, and cyber extortion, now amplified by AI. However, this is set to change with the rise of AI agents, autonomous AI systems capable of planning, acting, and executing complex tasks, with major implications for the future of cybercrime.

        Here is a list of common (ab)use cases of AI by cybercriminals:  

        AI-Generated Phishing & Social Engineering 

        Generative AI and large language models (LLMs) enable cybercriminals to craft more believable and sophisticated phishing emails in multiple languages, without the usual red flags like poor grammar or spelling mistakes. AI-driven spear phishing now allows criminals to personalise scams at scale, automatically adjusting messages based on a target’s online activity. AI-powered Business Email Compromise (BEC) scams are increasing, with attackers using AI-generated phishing emails sent from compromised internal accounts to enhance credibility. AI also automates the creation of fake phishing websites, watering-hole attacks and chatbot scams, sold as AI-powered ‘crimeware as a service’ offerings that further lower the barrier to entry for cybercrime.

        Deepfake-Enhanced Fraud & Impersonation 

        Deepfake audio and video scams are being used to impersonate business executives, co-workers or family members to manipulate victims into transferring money or revealing sensitive data. The most notorious 2024 incident involved UK-based engineering firm Arup, which lost $25 million after one of its Hong Kong-based employees was tricked by deepfake executives on a video call. Attackers are also using deepfake voice technology to impersonate distressed relatives or executives, demanding urgent financial transactions.

        Cognitive Attacks  

        Online manipulation—as defined by Susser et al. (2018)—is “at its core, hidden influence, the covert subversion of another person’s decision-making power”. AI-driven cognitive attacks are rapidly expanding the scope of online manipulation. Leveraging digital platforms, state-sponsored actors increasingly use generative AI to craft hyper-realistic fake content, subtly shaping public perception while evading detection. These tactics are deployed to influence elections, spread disinformation and erode trust in democratic institutions. Unlike conventional cyberattacks, cognitive attacks don’t just compromise systems—they manipulate minds, subtly steering behaviours and beliefs over time without the target’s awareness. The integration of AI into disinformation campaigns dramatically increases the scale and precision of these threats, making them harder to detect and counter.

        The Security Risks of LLM Adoption 

        Beyond misuse by threat actors, business adoption of AI chatbots and LLMs introduces significant security risks, especially when untested AI interfaces connect the open internet to critical backend systems or sensitive data. Poorly integrated AI systems can be exploited by adversaries, opening new attack vectors including prompt injection, content evasion, and denial-of-service attacks. Multimodal AI expands these risks further, allowing hidden malicious commands in images or audio to manipulate outputs.

        Moreover, many modern LLMs now function as Retrieval-Augmented Generation (RAG) systems, dynamically pulling in real-time data from external sources to enhance their responses. While this improves accuracy and relevance, it also introduces additional risks, such as data poisoning, misinformation propagation, and increased exposure to external attack surfaces. A compromised or manipulated source can directly influence AI-generated outputs, potentially leading to incorrect, biased, or even harmful recommendations in business-critical applications.
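To make that attack surface concrete, a minimal RAG flow can be sketched as below. The point is structural: retrieved text is concatenated straight into the model’s prompt, so a poisoned source feeds directly into the model’s input. `retrieve` and `llm` are hypothetical stand-ins for illustration, not any specific vendor’s API.

```python
# Minimal RAG sketch (illustrative only): retrieved passages are
# concatenated into the prompt, so a compromised source can inject
# instructions or misinformation directly into the model's input.
def answer(question, retrieve, llm):
    passages = retrieve(question)  # external, possibly untrusted content
    prompt = ("Answer using only the context below.\n"
              "Context:\n" + "\n".join(passages) +
              "\nQuestion: " + question)
    return llm(prompt)
```

Any mitigation (source allow-listing, content filtering, output checking) has to happen around this pipeline, because the model itself cannot distinguish trusted instructions from retrieved text.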

        Additionally, bias within LLMs poses another challenge. These models learn from vast datasets that may contain skewed, outdated, or harmful biases. This can lead to misleading outputs, discriminatory decision-making, or security misjudgements, potentially exacerbating vulnerabilities rather than mitigating them. As LLM adoption grows, rigorous security testing, bias auditing, and risk assessment, especially in RAG-powered models, are essential to prevent exploitation and ensure trustworthy, unbiased AI-driven decision-making. 

        When AI Goes Rogue: The Dangers of Autonomous Agents 

        With AI systems now capable of self-replication, as demonstrated in a recent study, the risk of uncontrolled AI propagation or rogue AI – AI systems that act against the interests of their creators, users, or humanity at large – is growing. Security and AI researchers have raised concerns that these rogue systems can arise either accidentally or maliciously, particularly when autonomous AI agents are granted access to data, APIs, and external integrations. The broader an AI’s reach through integrations and automation, the greater the potential threat of it going rogue, making robust oversight, security measures, and ethical AI governance essential in mitigating these risks.

        The Future of AI Agents for Automation in Cybercrime 

        A more disruptive shift in cybercrime can and will come from AI agents, which transform AI from a passive assistant into an autonomous actor capable of planning and executing complex attacks. Google, Amazon, Meta, Microsoft, and Salesforce are already developing Agentic AI for business use; in the hands of cybercriminals, however, its implications are alarming. These AI agents can be used to autonomously scan for vulnerabilities, exploit security weaknesses, and execute cyberattacks at scale. They can also scrape massive amounts of personal data from social media platforms, automatically compose and send fake executive requests to employees, or analyse divorce records across multiple countries to identify individuals for AI-driven romance scams. These AI-driven fraud tactics don’t just scale attacks, they make them more personalised and harder to detect. Unlike current GenAI threats, Agentic AI has the potential to automate entire cybercrime operations, significantly amplifying the risk.

        How Defenders Can Use AI & AI Agents 

        Organisations cannot afford to remain passive in the face of AI-driven threats, and security professionals need to remain abreast of the latest developments. Here are some of the opportunities in using AI to defend against AI:

        AI-Powered Threat Detection and Response

        Security teams can deploy AI and AI agents to monitor networks in real time, identify anomalies, and respond to threats faster than human analysts can. AI-driven security platforms can automatically correlate vast amounts of data to detect subtle attack patterns that might otherwise go unnoticed, enabling dynamic threat modelling, real-time network behaviour analysis, and deep anomaly detection. For example, as outlined by researchers at Orange Cyberdefense, AI-assisted threat detection is crucial as attackers increasingly use “Living off the Land” (LOL) techniques that mimic normal user behaviour, making it harder for detection teams to separate real threats from benign activity. By analysing repetitive requests and unusual traffic patterns, AI-driven systems can quickly identify anomalies and trigger real-time alerts, allowing for faster defensive responses.
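As a toy illustration of this kind of anomaly detection, the sketch below flags traffic sources whose request volume deviates sharply from the baseline, using a robust (median/MAD) z-score. Real platforms combine far richer features and models; the function, fields, and threshold here are illustrative assumptions.

```python
from statistics import median

def flag_anomalies(request_counts, threshold=3.5):
    """Return sources whose request volume is a statistical outlier.

    Uses a robust z-score (median and median absolute deviation) so a
    single burst doesn't inflate the baseline the way a mean/stdev
    estimate would. Illustrative only; thresholds need tuning.
    """
    counts = list(request_counts.values())
    med = median(counts)
    mad = median(abs(n - med) for n in counts)
    if mad == 0:  # no variation to measure against
        return []
    return [src for src, n in request_counts.items()
            if 0.6745 * (n - med) / mad > threshold]

# A burst of repetitive requests from one host stands out against
# otherwise steady traffic.
traffic = {"10.0.0.1": 50, "10.0.0.2": 48, "10.0.0.3": 52, "10.0.0.9": 900}
print(flag_anomalies(traffic))  # → ['10.0.0.9']
```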

        However, despite the potential of AI agents, human analysts remain critical. Their intuition and adaptability are essential for recognising nuanced attack patterns, and they can leverage real incident and organisational insights to prioritise resources effectively.

        Automated Phishing and Fraud Prevention

        AI-powered email security solutions can analyse linguistic patterns, metadata, and behavioural anomalies to identify AI-generated phishing attempts before they reach employees. AI can also flag unusual sender behaviour and improve detection of BEC attacks. Similarly, detection algorithms can help verify the authenticity of communications and prevent impersonation scams, while AI-powered biometric and audio analysis tools detect deepfake media by identifying voice and video inconsistencies. However, real-time deepfake detection remains a challenge as the technology continues to evolve.
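A heavily simplified sketch of this kind of signal scoring is below. Production systems combine ML models trained on large corpora with many more signals; the rules, weights, and parameter names here are illustrative assumptions only.

```python
import re

# Pressure language commonly seen in phishing (illustrative list only).
URGENCY = re.compile(r"\b(urgent|immediately|act now|verify your account)\b",
                     re.IGNORECASE)

def phishing_score(sender_domain, reply_to_domain, body, known_domains):
    """Score an email on a few classic phishing/BEC signals (toy example)."""
    score = 0
    if sender_domain not in known_domains:
        score += 1  # unfamiliar sender domain
    if reply_to_domain != sender_domain:
        score += 2  # mismatched Reply-To is a classic BEC tell
    score += 2 * len(URGENCY.findall(body))  # urgency/pressure language
    if "http://" in body:
        score += 1  # unencrypted link
    return score

msg = "Please verify your account immediately to avoid suspension."
print(phishing_score("paypa1.example", "attacker.example", msg,
                     {"corp.example"}))  # → 7
```

In practice a threshold over such a score would route the message to quarantine or human review rather than block it outright.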

        User Education & AI-Powered Security Awareness Training

        AI-powered platforms deliver personalised security awareness training. They can simulate AI-generated attacks to educate users on evolving threats, helping employees recognise deceptive AI-generated content and addressing their individual susceptibility factors and vulnerabilities.

        Adversarial AI Countermeasures

        Just as cybercriminals use AI to bypass security, defenders can employ adversarial AI techniques: deploying deception technologies, such as AI-generated honeypots, to mislead and track attackers, and continuously training defensive AI models to recognise and counteract evolving attack patterns.

        Using AI to Fight AI-Driven Misinformation and Scams

        AI-powered tools can detect synthetic text and deepfake misinformation, assisting fact-checking and source validation. Fraud detection models can analyse news sources, financial transactions, and AI-generated media to flag manipulation attempts. Counter-attacks, like those demonstrated by the research project Countercloud or telecoms provider O2’s AI agent “Daisy”, show how AI-based bots and real-time deepfake voice chatbots can be turned against disinformation campaigns and scammers, engaging them in endless conversations to waste their time and reduce their ability to target real victims.

        In a future where both attackers and defenders use AI, defenders need to understand how adversarial AI operates and how AI can be used to defend against these attacks. In this fast-paced environment, organisations must guard against their greatest enemy: their own complacency, while considering AI-driven security solutions thoughtfully and deliberately. Rather than rushing to adopt the next shiny AI security tool, decision-makers should carefully evaluate AI-powered defences to ensure they match the sophistication of emerging AI threats. Hastily deploying AI without strategic risk assessment could introduce new vulnerabilities, making a mindful, measured approach essential in securing the future of cybersecurity.

        To stay ahead in this AI-powered digital arms race, organisations should:  

        • Monitor both the threat and AI landscape to stay abreast of latest developments on both sides. 
        • Train employees frequently on latest AI-driven threats, including deepfakes and AI-generated phishing. 
        • Deploy AI for proactive cyber defence, including threat intelligence and incident response. 
        • Continuously test your own AI models against adversarial attacks to ensure resilience. 
        • Cybersecurity
        • Data & AI

        Enterprise-wide AI platform security protects sensitive data and governs integrations to help organisations scale Agentic AI with confidence

        ServiceNow, the AI platform for business transformation, has unveiled its new Zurich platform release, delivering breakthrough innovations with faster multi-agentic AI development, enterprise-wide AI platform security capabilities, and reimagined workflows. New intelligent developer tools enable secure vibe coding with natural language, helping turn employees into high-velocity builders and creators and lowering the barrier to app creation. Built-in security capabilities, including ServiceNow Vault Console and Machine Identity Console, natively secure sensitive data across workflows and govern integrations to help organisations scale Agentic AI and innovation with confidence. The introduction of autonomous workflows turns data into action through agentic playbooks, uniquely offering the flexibility to apply AI and human input in workflows where and when it’s needed for greater control and efficiency.

        AI Transformation with ServiceNow

        Enterprise leaders are racing to move beyond table-stakes AI implementations to unlock transformative, tangible results. According to Gartner, “By 2029, over 60% of enterprises will adopt AI agent development platforms to automate complex workflows previously requiring human coordination.” The ServiceNow AI Platform delivers on this transformational promise across the enterprise, underpinning a new era of highly efficient human-AI collaboration.

        “Zurich marks a turning point for enterprise AI. ServiceNow is delivering multi-agentic AI systems in production that are not just powerful, but governable, secure, and built for scale,” said Amit Zavery, president, COO, and chief product officer at ServiceNow. “We are transforming the enterprise tech stack to be AI-native. From autonomous workflows that act on data with precision, to developer tools that democratise high-velocity innovation. With built-in controls for security, risk, and compliance, we’re helping organisations move beyond experimentation. And into a new era of intelligent execution.” 

        Vibe Coding Meets Enterprise Scale 

        According to Gartner, “Agentic AI features will be near ubiquitous, embedded in software, platforms and applications, transforming user experiences and workflows.” The introduction of ServiceNow Build Agent and Developer Sandbox provides resources for employees to work with AI more efficiently. They can now do this conversationally, and at scale, to solve real problems in every corner of the business. 

        • Build Agent is a breakthrough for enterprise app creation—bringing vibe coding to the rigor of the ServiceNow AI Platform. In seconds, employees can turn an idea into a production-ready application by asking in natural language. Say, “Create an onboarding app that assigns tasks to HR, IT, and Facilities,” and Build Agent handles the rest. Design, build, logic, integrations, testing, and industry-leading governance included. What sets it apart is enterprise discipline: every app comes with audit trails, security, and compliance built in. Developers and citizen creators alike get the speed of AI with the confidence of enterprise-grade control, in a streamlined interface. 
        • Developer Sandbox empowers developers to build better applications, faster, while maintaining the highest standards of quality. Sandboxes provide isolated environments within a single instance, so multiple teams can collaborate, build, and test new features without conflicts, and rapid scale doesn’t come at the cost of control. Teams can version, iterate, and deliver without waiting in line for developer resources. Developers can safely experiment with vibe coding, test AI-powered workflows, and resolve version control issues before changes go live. This reduces rework, shortens feedback loops, and helps teams ship higher-quality applications rapidly with lower risk. 

        Security That Enables AI Strategy 

        As enterprises adopt autonomous workflows powered by agentic AI, securing how these systems access data and communicate across environments is essential. Zurich introduces new built-in AI platform security capabilities that make it easier to protect sensitive information, govern integrations, and manage growing AI footprints.

        • The new ServiceNow Vault Console provides a guided experience to discover, classify, and protect sensitive data across workflows. For example, an admin managing customer service operations can now identify personal data across tickets, apply different types of protection policies, and track compliance activity. The console also offers recommendations for protecting newly discovered sensitive data, along with customisable dashboards to monitor key metrics. What used to require manual configuration across multiple tools can now be managed in one place, with intelligent insights and a streamlined experience. 
        • Machine Identity Console addresses the need for integration security with enterprise-grade authentication and authorization, delivering control over bots and APIs head on. As the ServiceNow AI Platform scales, every API connection, including those from AI agents, introduces another identity to manage and determine what it can access. This console gives platform teams visibility into all inbound API integrations using machine identities such as service accounts and keys, flags outdated or weak authentication methods, and provides clear steps to strengthen security. If an integration is using basic authentication or hasn’t been active in 100 days, the console spots it and helps resolve it. 
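The two checks called out above (basic authentication, prolonged inactivity) are straightforward to express in code. The sketch below is a generic illustration of that audit logic under assumed field names and a sample data model; it is not ServiceNow’s actual API.

```python
from datetime import datetime, timedelta, timezone

def audit_integrations(integrations, now, max_idle_days=100):
    """Flag machine identities using weak auth or sitting idle too long."""
    findings = []
    for item in integrations:
        if item["auth"] == "basic":
            findings.append((item["name"], "uses basic authentication"))
        if now - item["last_active"] > timedelta(days=max_idle_days):
            findings.append((item["name"],
                             f"inactive for over {max_idle_days} days"))
    return findings

# Hypothetical inventory of inbound API integrations.
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
integrations = [
    {"name": "legacy-bot", "auth": "basic",
     "last_active": now - timedelta(days=3)},
    {"name": "etl-feed", "auth": "oauth",
     "last_active": now - timedelta(days=200)},
    {"name": "ok-api", "auth": "oauth",
     "last_active": now - timedelta(days=1)},
]
print(audit_integrations(integrations, now))
# → [('legacy-bot', 'uses basic authentication'),
#    ('etl-feed', 'inactive for over 100 days')]
```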

        Digital Transformation

        “At Kanton Zürich, digital transformation is central to how we deliver secure and efficient public services. Since 2018, ServiceNow has enabled us to centralize and standardize our processes with data security as a top priority,” said Jürg Kasper, head of business solutions, Kanton Zürich. “Zurich’s latest advancements in both security and AI will allow us to automate more complex workflows, unlocking new efficiencies that enhance how we serve our citizens—with greater speed, clarity, and assurance.”  

        Without built-in security and trust, scaling AI comes with risk. These new security features in Zurich build upon ServiceNow’s AI Control Tower, announced in May 2025, which provides enterprise-wide visibility, embedded compliance, and end-to-end lifecycle governance for Agentic AI systems. By centralising oversight of every AI agent, model, and workflow, native or third-party, the AI Control Tower ensures organisations can scale AI with confidence, aligning innovation with enterprise-grade security and trust. 

        Turn Data Into Outcomes With Autonomous Workflows 

        As organisations rapidly scale AI, they face the added challenge of delivering solutions consistently, reliably, and responsibly. Enterprises need the right guardrails, full visibility, and strong governance to achieve service delivery, or they risk eroding trust and slowing results. ServiceNow’s AI Platform does all this in a single platform, setting a new standard for how organisations can create autonomous workflows to turn data into action and AI into measurable business impact. 

        • Agentic playbooks from ServiceNow bring people, automation, and AI together seamlessly, powering autonomous workflows. A traditional playbook is a structured sequence of automated steps based on predefined business rules and processes—ideal for ensuring consistency, efficiency, and trust. Agentic playbooks amplify this model by embedding AI into the trusted framework. AI agents eliminate manual effort, completing tasks in seconds and accelerating execution, which frees employees to focus on higher-value work where human judgment matters most. For example, in a credit card support situation, an agentic playbook can guide an AI agent to verify someone’s identity, freeze a card, send a replacement and notify the customer, while allowing a human agent to step in where needed. The result: governed, efficient, and trusted work—supercharged by AI to deliver faster, smarter outcomes. 
        • The ServiceNow Zurich platform release also seamlessly combines Process and Task Mining insights within a unified platform. These new capabilities give organisations an end-to-end understanding of how work gets done, revealing where human expertise is essential and where AI agents can deliver the greatest impact. With process intelligence built directly into the platform, customers can move seamlessly from insight to action, streamlining operations, applying AI where it matters most, and accelerating real business outcomes without the complexity of disconnected legacy tools. 
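The credit-card example above can be sketched as a sequence of steps with an explicit human checkpoint. This is a generic illustration of the playbook pattern, not ServiceNow’s implementation; the step functions are hypothetical.

```python
# Toy agentic playbook: automated steps run in order, pausing where a
# step is marked as requiring human sign-off.
def run_playbook(steps, context):
    for step, needs_human in steps:
        if needs_human and not context.get("human_approved"):
            return f"paused before '{step.__name__}' for human review"
        step(context)
    return "completed"

# Hypothetical steps mirroring the credit-card support example.
def verify_identity(ctx): ctx["verified"] = True
def freeze_card(ctx): ctx["frozen"] = True
def send_replacement(ctx): ctx["replacement_sent"] = True
def notify_customer(ctx): ctx["notified"] = True

steps = [(verify_identity, False), (freeze_card, False),
         (send_replacement, True), (notify_customer, False)]

print(run_playbook(steps, {}))
# → paused before 'send_replacement' for human review
print(run_playbook(steps, {"human_approved": True}))  # → completed
```

The design point is that human input is a first-class step in the workflow, not an afterthought bolted onto full automation.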

        All features announced as part of the ServiceNow AI Platform Zurich release are generally available and can be found in the ServiceNow Store.

        • Data & AI
        • Digital Strategy

        TechEx Europe – Powering the Future of Enterprise Technology at Amsterdam’s RAI Arena September 24-25

        TechEx Europe unites five leading enterprise technology events — AI & Big Data, Cyber Security, Data Centres, Digital Transformation and IoT — into one powerful experience designed for organisations driving change. Five events, two days, one ticket – register for your pass here.

        From scaling infrastructure to unlocking new efficiencies, this is where decision-makers and their teams come to connect, explore real-world use cases, and discover the technologies that will shape their next phase of growth.

        AI & Big Data Expo

        The AI & Big Data Expo is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP.

        Cybersecurity & Cloud Expo

        The Cyber Security & Cloud Expo is the premier event showcasing the latest in Application and Cloud Security, Hybrid Cloud, Data Protection, Identity and Access Management, Network and Infrastructure Defence, Risk and Compliance, Threat Intelligence, DevSecOps Integration, and more. Join industry leaders to explore strategies, tools, and innovations shaping the future of secure, connected enterprises.

        IoT Tech Expo

        IoT Tech Expo is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.

        Digital Transformation

        The Digital Transformation Expo is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.

        Data Center Expo

        The Data Centre Expo and conference is the premier event tackling key challenges in data centre innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centres.

        Book your place at TechEx Europe 2025 now!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • Infrastructure & Cloud

        Join thousands of data centre industry leaders and innovators at London’s Business Design Centre for three co-located events – DCD>Connect, DCD>Compute and DCD>Investment September 16-17

        Data Center Dynamics (DCD) is connecting the data center ecosystem. Secure your pass for three co-located events covering the entire digital infrastructure ecosystem across two days at London’s Business Design Centre – DCD>Connect, DCD>Compute and DCD>Investment.

        DCD Connect

        Connecting the data center ecosystem to design, build & operate sustainable data centers for the AI age

        Bringing together more than 4,000 senior leaders working on Europe’s largest data center projects, DCD>Connect | London will drive industry collaboration, help you forge new partnerships and identify innovative solutions to your core challenges.

        “First class event that presented a wide variety of perspectives and technologies in an engaging and informative forum” – Data Center Project Architect, AWS

        DCD Compute

        Uniting enterprise and hyperscale leaders driving scalable AI Infrastructure from silicon to software…

        New workloads are fundamentally reshaping IT infrastructure, as accelerated hardware innovation is enabling more new workloads. How can you keep up in this rapid cycle of new AI models, new hardware, new software, and the race to be first to market?

        The Compute event series, run in partnership with SDxCentral, empowers leaders to make sharp decisions on IT infrastructure and AI deployment. Join 400+ peers from enterprise, hyperscale, and top IT infrastructure and architecture innovators to shape the future of compute—on-prem or in the cloud.

        • 400+ Decision-Makers for IT Infrastructure, Architecture, AI, HPC and Quantum Computing
        • 60+ industry-leading speakers at the forefront of innovation across cloud and on-prem compute
        • Hosted in partnership with SDxCentral

        DCD Investment

        Connecting senior dealmakers driving the economic evolution of digital infrastructure…

        The world depends on digital infrastructure, and there’s never been more pressure on the industry to scale at speed. The Data Center Dynamics Investment series helps the leading dealmakers behind this growth to make informed decisions faster, through top-tier content, tailored networking, and best-practice sharing.

        • Dynamic Programme: A brand-new format, including leadership roundtable discussions, allows 2025 attendees to craft their own agenda at the Forum.
        • 50 Speakers: The C-suite operators, leading investors, and advisors in data centers are converging to strategize on the industry’s evolving landscape.
        • Exclusive Networking Opportunities: The Investment Forum is separate from the main DCD>Connect programme and show floor, offering private networking and dealmaking opportunities in an optimal setting.

        Secure your pass for three co-located events September 16-17 – DCD>Connect, DCD>Compute and DCD>Investment.

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Events
        • Fintech & Insurtech

        This month’s cover star, Dr. Noxolo Kubheka-Dlamini – Chief Digital and Information Officer at Telkom Consumer & Small Business, speaks to the process of leading an ongoing digital transformation

        Welcome to the latest issue of Interface magazine!

        Click here to read the latest edition!

        Telkom: More Than a Telco

        Our cover star talks us through the process of leading an ongoing digital transformation that is pragmatic, strategic and embedded in business goals at South Africa’s largest telecommunications platform provider. “By the time we entered the mobile space in 2010, the market was already saturated,” explains Dr. Noxolo Kubheka-Dlamini, Chief Digital & Information Officer at Telkom Consumer & Small Business. “Our ambitions were constrained by limited capital, inherited legacy systems, regulatory shackles, and the sheer inertia of being a former state-run monopoly.” However, Telkom’s “willpower and commitment never faded” resulting in “notable and consistent performance against all odds”. Today, Telkom is playing a pivotal role in ensuring access to meaningful connectivity, driven by the company’s vision to become South Africa’s digital backbone: bridging the digital divide and enabling inclusive participation in its digital economy.

        Kynegos: Shining a Spotlight on Transformation, Innovation and Sustainability

        Kynegos, a spin-off from Capital Energy, is a business built on strategy. It exists to develop technological solutions for strategic industries. Capital Energy needed an independent platform that could scale digital solutions beyond the energy sector and foster collaboration with startups and technology centres. Kynegos has filled this gap and is being leveraged to create co-innovation ecosystems, allowing Capital Energy to develop digital tools that address current and future industrial challenges and keeping the company’s finger on the pulse. We spoke to CEO Victor Gimeno Granda about its backstory, its values, and the road ahead. “Not only do we develop digital assets for the renewable sector, but for green data centres as well. My perspective is that sustainability is going to be more relevant than ever in the next 18 months.”

        York County: The Human Side of AI

        York County’s IT team has spent the past decade redefining what local government tech can and should be. From pioneering community cybersecurity workshops to forging statewide collaboration through ValGITE, the county has systematically brought innovation into its operations. This broad portfolio of initiatives has strengthened infrastructure, elevated service delivery, and earned York County the number one spot in the Digital Counties Survey for jurisdictions under 150,000 population.

        “Since I became deputy director eight years ago, this has been one of my goals,” reflects Tim Wyatt, director of information technology at York County. “And over the last eight years, we’ve been in the top 10, but we finally landed that number one place. I think it’s a great reflection for my team, the county, and all the dedication to try to do what’s right by the citizens. It’s just something I’m incredibly proud of. I think it accurately reflects the hard work of my team.”

        Wade Trim: Bridging the Cybersecurity Skills Gap

        Wade Trim provides consulting engineering, planning, surveying, landscape architecture and environmental science services to meet the infrastructure needs of government and private corporations. With a cybersecurity skills gap leaving vacancies unfilled, Wade Trim’s Senior Manager of Information Security, Eric Miller, spoke with Interface about how stepping away from education-focused rigidity could unlock swathes of latent talent. “Our industry puts emphasis on certifications. However, being passed over for jobs because you don’t have a particular certification or degree in favour of someone fresh out of college has shown me that the best candidates are those that can tell me their story. What brings them to this point in their career? Tell me what qualifies you for this role. That’s how I interview.”

        York Catholic District School Board: Community and Communication at the Heart of IT Strategy

        The challenges facing an IT leader in 2025 call for a new kind of approach. One that favours partnerships over transactions, collaboration over competition, and centres people rather than technology for technology’s sake. These perspectives ring especially true in an organisation like the York Catholic District School Board (YCDSB). It emphasises values like “service, community, collaboration, and faith rather than academic excellence alone,” explains Scott Morrow, YCDSB’s Chief Information Officer (CIO). “It’s not actually about the technology; it’s about enablement.”

        We spoke with Morrow to learn more about his approach to IT leadership: from building and maintaining a team amid the IT talent crisis, to driving digital transformation initiatives across the organisation, to pursuing broader strategic objectives in a changing technology landscape increasingly defined by cybersecurity and the rise of AI.

        Click here to read the latest edition!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • People & Culture

        Deepak Parameswaran, Sector Head – Energy, Manufacturing & Resources at Wipro, talks innovation with National Grid’s Global Head of Data Strategy Andrew Burns

        Partners for over 25 years, Wipro and National Grid have been laying the foundation for progress… By taking data to the cloud, creating value and leveraging their common work to deliver advanced, data-driven innovations across the National Grid enterprise.

        Meeting the transformation challenge

        As a utility, National Grid seeks to provide safe, affordable, and reliable electric and natural gas service for its customers. As such, the company is hyper-focused on natural gas, electricity grid modernisation, customer satisfaction and the integration of business and technology processes across the entire business as gas and electricity demand increases across the markets. Wipro offers actionable solutions, providing the innovative technology and domain expertise necessary for organisations like National Grid to transform and become leaders in sustainability within their respective industries.

        Delivering bespoke solutions for innovation

        Traditional utility technologies can pose challenges in terms of complexity and capital investment. With Cloud and AI technologies emerging as game changers, Wipro delivers a proven ecosystem, incorporating analytics, IoT, Generative AI, and Augmented Reality, tailored to meet the needs of customers, assets, and grid management. This makes for simpler, more scalable, faster-to-market solutions that allow National Grid to quickly realise the benefits.

        Wipro’s Utility Enterprise solutions have delivered on key elements of the digital transformation journey at National Grid. This allows for a constant data presence across the globe, creating a common, secure cloud environment.

        Wipro’s partnership with National Grid

        Wipro’s collaboration with National Grid continues to be built on a foundation of continuous innovation, with a commitment to:

        • Staying ahead of utility business trends
        • Supporting National Grid’s clean energy transition
        • Developing sophisticated data and AI solutions for enhanced customer service
        • Maintaining agility to address emerging challenges

        “Wipro has been our biggest partner in executing use cases through the Innovation Lab, enabling us to be agile and deliver multiple projects with direct, tangible business benefits. Their support has been vital in ensuring a clear, efficient process and rapid execution, making them key to our success.”

        Andrew Burns, Global Head of Data Strategy, National Grid

        Click here to read more about National Grid’s Innovation story

        • Data & AI
        • Digital Strategy
        • People & Culture

        Tech Show London is coming to Excel March 12-13. Register for your free ticket now!

        Unlock unparalleled value with a single ticket that gets you free access to five industry-leading technology shows. Welcome to Cloud & AI Infrastructure, DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World, and Data Centre World.

        Tech Show London has it all. Don’t miss this immersive journey into the latest trends and innovations.

        Discover tomorrow’s tech today

        Unleash Potential, Embrace the Future. Hear from the greatest tech minds, all in one place.

        Dive into a world where cutting-edge ideas shape your tomorrow. Tech Show London is the epicentre of technology innovation in London and beyond, hosting the brightest minds in technology, AI, cyber security, DevOps, and cloud all under one roof.

        The Mainstage Theatre is not just a stage; it’s a launchpad for innovative ideas. Witness a stellar lineup featuring world-renowned experts from across the tech stack, influential C-level executives, key government figures, and the vanguards of AI and cybersecurity. All ready to share ideas set to rock the industry.

        GLOBAL INSPIRATION, LOCAL IMPACT

        Seize the opportunity to be inspired by global visionaries. With speakers from the UK, USA, and beyond, prepare to discover transformative concepts and actionable strategies from technology insiders, ensuring your business stays ahead in an ever-evolving technology landscape.

        Where the future of technology takes the stage

        Secure your competitive edge at Tech Show London, the UK’s award-winning convergence of the industry’s brightest tech minds.

        On 12-13 March 2025, gain vital foresight into the disruptive technologies reshaping your market, and position your organisation at the forefront of technology’s next frontier.

        If you’re defining your business’s tech roadmap, register for your free ticket to join us at Excel London.

        Register for FREE

        Register for your Ticket

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Event Newsroom
        • Infrastructure & Cloud

        February’s cover story spotlights a customer-centric vision and a culture of innovation putting NatWest at the heart of the Open Banking revolution

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        NatWest: Banking open for all

        Head of Group Payment Strategy, Lee McNabb, explains how a customer-centric vision, allied with a culture of innovation, is positioning NatWest at the heart of UK plc’s Open Banking revolution: “The market we live in is largely digital, but we have to be where customers are and meet their needs where they want them to be met. That could be in physical locations, through our app, or that could be leveraging the data we have to give them better bespoke insights. The important thing is balance… At NatWest, we’ll keep pushing the envelope on payments for a clear view of the bigger picture with banking that’s open for everyone.”

        EBRD: People, Purpose & Technology

        We speak with the European Bank for Reconstruction & Development’s Managing Director for Information Technology, Subhash Chandra Jose. With the help of Hexaware’s innovation, his team are delivering a transformation programme to support the bank’s global investment efforts: “The sweet spot for EBRD is a triangular union of purpose, people, and technology all coming together. This gives me energy to do something innovative every day to positively impact my team and our work for the organisation across our countries of operation. Ultimately, if we don’t get the technology basics right, we can’t best utilise the funds we have to make a real difference across the bank’s global efforts.”

        Begbies Traynor Group: A strategic approach to digital transformation

        We learn how Begbies Traynor Group is taking a strategic approach to digital transformation… Group CIO Andy Harper talks to Interface about building cultural consensus, innovation, addressing tech debt and scaling with AI: “My approach to IT leadership involves creating enough headroom to handle transformation while keeping the lights on.”

        University of Cincinnati: Where innovation comes to life

        Bharath Prabhakaran, Chief Digital Officer and Vice President at the University of Cincinnati (UC), on technology, innovation and impact, and how a passion for education underpins his team’s work. “The foundation of any digital transformation in my opinion is people, process, technology – in that order,” he states. “People and culture are always the most challenging areas to evolve because you’re changing mindset and behaviour; process comes a close second as in most organisations people are wedded to legacy ways of working. In some respects, technology is the easy part: you can always implement the tools, but they’ll not be effective if you don’t have the right people and processes.”

        IT: A personal career retrospective

        It’s fascinating, looking back at something as complex and profoundly impactful as IT. And for Claudé Zamboni, who is preparing to retire after over 40 years in the sector, it’s been an incredible time to be deeply involved in technology. “There have been monumental changes from when I first entered IT, where it was basically a black box,” says Zamboni. “People didn’t know what the IT team was doing, and those in IT would just handle problems without telling anyone how. It only started to become more egalitarian when the internet got more pervasive. We realised that with information being available everywhere, we would lose the centralisation function of IT. But that was okay, because data is universal.”

        Read the latest issue here!

        • Cybersecurity
        • Data & AI
        • Digital Strategy
        • Fintech & Insurtech

        This month’s cover story throws the spotlight on the ground-up technology transformation journey at Lanes Group – a leading water and wastewater solutions and services provider in the UK.

        Welcome to the latest issue of Interface magazine!

        Read the latest issue here!

        Lanes Group: A Ground-Up Tech Transformation

        In a world driven by transformation, it’s rare a leader gets the opportunity to deliver organisational change in its purest form… Lanes Group – the leading water and wastewater solutions and services provider – has started again from the ground up with IT Director Mo Dawood at the helm.

        “I’ve always focused on transformation,” he reflects. “Particularly around how we make things better, more efficient, or more effective for the business and its people. The end-user journey is crucial. So many times you see organisations thinking they can buy the best tech and systems, plug them in, and they’ve solved the problem. You have to understand the business, the technology side, and the people in equal measure. It’s core to any transformation.”

        Mo’s roadmap for transformation centred on four key areas: HR and payroll, management of the group’s vehicle fleet, migrating to a new ERP system, and health and safety. “People were first,” he comments. “Getting everyone on the same HR and payroll system would enable the HR department to transition, helping us have a greater understanding of where we were as a business and providing a single point of information for who we employ and how we need to grow.”

        Schneider Electric: End-to-End Supply Chain Cybersecurity

        Schneider Electric provides energy and digital automation and industrial IoT solutions for customers in homes, buildings, industries, and critical infrastructure. The company serves 16 critical sectors. It has a vast digital footprint spanning the globe, presenting a complex and ever-evolving risk landscape and attack surface. Cybersecurity, product security and data protection, and a robust and protected end-to-end supply chain for software, hardware, and firmware are fundamental to its business.

        “From a critical infrastructure perspective, one of the big challenges is that the defence posture of the base can vary,” says Cassie Crossley, VP, Supply Chain Security, Cybersecurity & Product Security Office.

        “We believe in something called ‘secure by operations’, which is similar to a cloud shared responsibility model. Nation state and malicious actors are looking for open and available devices on networks: operational technology and systems that are not built with defence at the core and were never intended to be internet facing. The fact these products are out there and not behind a DMZ network to add an extra layer of security presents a big risk. It essentially means companies are accidentally exposing their networks. To mitigate this we work with the Department of Energy, CISA, other global agencies, and Internet Service Providers (ISPs). Through our initiative, when we identify customers inadvertently doing this, we inform them and provide information on the risk.”

        Persimmon Homes: Digital Innovation in Construction

        As an experienced FTSE100 Group CIO who has enabled transformation at some of the UK’s largest organisations, Persimmon Homes’ Paul Coby knows a thing or two about what it takes to be a successful CIO. Fifty things, to be precise. Like the importance of bridging the gap between technology and business priorities, and how all IT projects must be business projects. That IT is a team sport, that communication is essential to deliver meaningful change – and that people matter more than technology. And that if you’re not scared sometimes, you’re not really understanding what being the CIO is.

        “There’s no such thing as an IT strategy; instead, IT is an integral part of the business strategy”

        WCDSB: Empowering learning through technology innovation

        ‘Tech for good’, or ‘tech with purpose’. Both liberally used phrases across numerous industries and sectors today. But few purposes are greater than providing the tools, technology, and innovations essential for guiding children on their educational journey, while also supporting the many people who play a crucial role in helping learners along the way. Chris Demers and his IT Services Department team at the Waterloo Catholic District School Board (WCDSB) have the privilege of delivering on this kind of purpose day in, day out. A mission they neatly summarise as ‘empower, innovate, and foster success’.

        “The Strategic Plan projects out five years across four areas,” Demers explains. “It addresses endpoint devices, connectivity and security as dictated by business and academic needs. We focus on infrastructure, bandwidth, backbone networks, wifi, security, network segmentation, firewall infrastructure, and cloud services. Process improvement includes areas like records retention, automated workflows, student data systems, parent portals, and administrative systems. We’re fully focused on staff development and support.”

        Read the latest issue here!

        • Data & AI
        • Digital Strategy
        • People & Culture