Lee Fredricks, Director – Solutions Consulting, EMEA at PagerDuty, on why technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability

Technology leaders should see 2026 as a time for operational resilience to shift from ambition to accountability. In 2025, too many cloud service outages and disruptions took place across the public and private sectors, and now regulatory, technological and cultural pressures are converging to say that enough is enough.

Outages often translate into broader repercussions for the organisation, including revenue impact, customer churn, share price pressure and potentially regulatory reporting obligations. Operational metrics must now be discussed alongside financial KPIs at the board level. C-suite leaders understand accountability, especially within the heavily regulated financial sector.

DORA’s First Birthday

It’s now been one year since the implementation of the Digital Operational Resilience Act, or DORA, introduced by the EU to strengthen the digital resilience of financial institutions. By now, organisations have had time to consider moving from mere compliance to creating a competitive edge from their investments.

Enterprise tech leaders are in the middle of a balancing act. They’re managing ongoing modernisation and transformation initiatives while navigating multi-jurisdictional regulatory scrutiny. At the same time, they face constant pressure from the board and must meet evolving customer needs—all competing for immediate attention. The stakes have never been higher. Operations teams are no longer viewed as a back-office IT function. Their success in keeping the organisation running and driving revenue is now a board-level concern.

For organisations today, IT is business delivery.

A year of DORA has seen organisations shift from mere compliance to meaningful, demonstrable testing, third-party risk visibility and strictly mandated incident reporting timelines. Financial firms have reduced their risk exposure: payments providers are no longer reliant on a single cloud region or SaaS supplier, nor unable to provide evidence of real-time incident response and auditable logs after a disruption.

One benefit of these overall systemic improvements is enhanced supply chain accountability. Financial institutions and their technology partners are both exposed to potential penalties and reputational risk, which makes it critical that they can prove their resilience capabilities.

Nevertheless, operational resilience is a continuous discipline. A fragmented incident response can expose firms to regulatory and reputational risk again and again if not addressed systemically. As such, many organisations are looking toward AI agents as part of a move towards ‘no-touch’ operations.

From Autonomy to Self-Healing

Under set policies, autonomous agents can handle incident response and operational tasks such as detection, triage and remediation. AI agents deployed in operations may become the backbone of L1 (first contact) and L2 (more skilled) support. Contrast this with the traditional, reactive, ticket-driven model of IT: the industry can move much faster and with a higher successful close rate. Intelligent automation reduces mean time to detection and resolution, lowers the volume of incidents reaching L3 and improves service availability. Well-integrated agents that genuinely support existing operations teams also help manage the talent shortages faced by many organisations.

A typical incident lifecycle with agentic processes includes several stages depending on the model, but can be summarised as: an anomaly is detected, correlated with a recent deployment, a remediation script is triggered and a human is notified if set thresholds are breached. Such no-touch operations are golden in any sector, but particularly in industries such as digital banking and retail, where peak traffic periods demand near-instant response and poor customer experience is a powerful motivator for users to switch providers.
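
As an illustrative sketch only – the service names, thresholds and remediation hook below are hypothetical, not PagerDuty functionality – a policy-gated version of that flow might be wired together like this in Python:

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    @dataclass
    class Anomaly:
        service: str
        error_rate: float                    # errors per minute
        detected_at: datetime

    POLICY = {
        "auto_remediate_max_error_rate": 50.0,             # above this, a human must approve
        "deployment_correlation_window": timedelta(minutes=30),
    }

    def recent_deployments(service):
        # Stand-in for a CI/CD or change-management lookup.
        return [datetime.now(timezone.utc) - timedelta(minutes=12)]

    def handle(anomaly):
        # 1. Correlate the anomaly with recent change activity.
        correlated = any(
            anomaly.detected_at - d <= POLICY["deployment_correlation_window"]
            for d in recent_deployments(anomaly.service)
        )
        # 2. Remediate automatically only inside the policy envelope.
        if correlated and anomaly.error_rate <= POLICY["auto_remediate_max_error_rate"]:
            return f"rollback triggered for {anomaly.service}; audit entry written"
        # 3. Otherwise escalate to a human responder.
        return f"paging on-call engineer for {anomaly.service}"

    print(handle(Anomaly("payments-api", 18.0, datetime.now(timezone.utc))))

The point of the policy block is that autonomy stays bounded: anything outside the agreed thresholds is still routed to a person.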

IT Standardisation

In addition, consider standardisation as part of strategic infrastructure best practice. There is a role for central operations clouds and operational ‘golden paths’ as solid foundations for reliable operational scale and dependability. Standardisation enables consistent, scalable operational excellence, especially across large, distributed enterprises. ‘There is one way and it is the right way’ can be a great time and stress saver for operational teams – particularly if a regulatory notification and clear evidence are required.

For example, a global bank might define a single golden path for deploying customer-facing applications with pre-approved monitoring, incident response workflows, and regulatory reporting templates built in. In an outage, teams follow the same process and automatically capture the evidence required for regulators, avoiding confusion, delays, and compliance risk.
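
A minimal sketch of how such a golden path might be expressed as shared configuration, so every team deploys against the same pre-approved building blocks (the field names and values here are illustrative, not a real bank’s template):

    # Hypothetical golden path for a customer-facing service, expressed as data.
    GOLDEN_PATH = {
        "monitoring": {
            "dashboards": ["latency", "error_rate", "availability"],
            "alert_thresholds": {"availability_pct": 99.9, "p99_latency_ms": 500},
        },
        "incident_response": {
            "runbook": "runbooks/customer-facing-outage.md",
            "escalation": ["L1 on-call", "L2 service owner", "L3 platform engineering"],
        },
        "regulatory_reporting": {
            "template": "templates/major-incident-report.md",
            "evidence": ["timeline", "impact_assessment", "remediation_actions"],
            "notification_deadline_hours": 4,
        },
    }

    def missing_sections(manifest):
        """Return the golden-path sections a deployment manifest has not adopted."""
        return [section for section in GOLDEN_PATH if section not in manifest]

    print(missing_sections({"monitoring": {}}))   # ['incident_response', 'regulatory_reporting']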

All of these possibilities take us to an exciting new place for an evolved set of developer and operational roles. When organisations enable AI to reshape daily engineering work away from manual firefighting and low-value tasks, it frees headspace and time for developers and engineers to move into architectural thinking and intelligent oversight of automated systems. These augmented teams will be empowered to handle simple situations instantly and devote more time and attention to the more difficult issues – the edge cases and the strategic necessities.

Enabling Agentic AI

Using another lens, businesses with agentic IT operations capabilities support their current talent, extending their reach and the speed of their response. The winning organisations will be those who deploy agents strategically, freeing up humans for that higher-value work – i.e. L3 expert support – and setting new standards for operational excellence that customers can rely on. Ideally this means making commensurate investment in existing people, training and organisational change management. A culture of continual upskilling and forecasting that points humans to where they make the best impact will be just as important as the autonomous tech tools working alongside them.

Autonomous agents enable many new capabilities, and one of those can be described as self-healing operations. This evolution of the operations world is where predictive detection, automated remediation and embedded resilience all coalesce. With an autonomous process of testing, maintenance and remediation, organisations can focus on measuring improvements in customer trust. They can also enjoy the productivity and revenue benefits of high business continuity and availability.

AI is still a new technology, and many are legitimately concerned about the concept of autonomous agents. There is a need for clear guardrails, audit trails and explainability in automated remediation, and many technology partners have invested in their ability to support across these areas. Moreover, firms must maintain direction with policy-driven automation rather than uncontrolled autonomy, particularly in regulated industries.

Mandate Operational Excellence

This year is very likely to reward organisations that treat operational resilience as core to their business strategy. Those investing in automation, standardisation and governance will set the pace for their industries in an AI-enabled and increasingly autonomous world.

Regulators are already expanding their scrutiny and reliability expectations beyond financial services firms. Across the world, jurisdictions are increasingly looking to strengthen their economies, and digital services in particular, through resilience and cybersecurity measures. At the same time, agentic operations, and the organisational performance benefits they support, will rapidly become table-stakes technology in all sectors. Inevitably, customers will judge brands on digital reliability as much as price or product features when evidence of outages is a click or a headline search away.

Start now. Audit internal incident response maturity, review the potentially complex web of third-party IT dependencies and identify where automation makes clear business sense. While resilience is an investment in compliance, it is also critical to ensuring customer trust and future stability.

Learn more at pagerduty.com

  • Artificial Intelligence in FinTech
  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Infrastructure & Cloud

With growth in data centre power demand, driven by AI and other power-hungry applications, could microgrids hold the key? Rolf Bienert, Technical & Managing Director of global industry body the OpenADR Alliance, discusses the potential for microgrids in providing flexibility and clean energy


Generating enough power for the demands of artificial intelligence (AI), cryptocurrency and other power-hungry applications is one of the biggest challenges facing data centres right now. With a power grid already under pressure and in the process of trying to modernise and flex to cope with the huge demands placed on it, the industry needs to rethink the way it adapts to these challenges.

Data Centres

According to figures from the International Energy Agency (IEA), data centres today account for around 1% of global electricity consumption. But this is changing with the growth in large hyperscale data centres with power demands of 100 MW or more – and an annual electricity consumption equivalent to that of around 350,000 to 400,000 electric vehicles.
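
A back-of-the-envelope check shows how that comparison holds together, assuming a 100 MW facility running continuously and an average EV using roughly 2.3 MWh a year (both illustrative figures):

    facility_mw = 100                        # hyperscale facility drawing 100 MW continuously
    hours_per_year = 8760
    annual_gwh = facility_mw * hours_per_year / 1000        # ~876 GWh per year

    ev_mwh_per_year = 2.3                    # assumed: ~12,000 km a year at ~0.19 kWh/km
    equivalent_evs = annual_gwh * 1000 / ev_mwh_per_year
    print(f"{annual_gwh:.0f} GWh/year, roughly {equivalent_evs:,.0f} EVs")   # ~381,000 EVs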

With the rise of AI and the expectation of what it can deliver, the next few years are likely to see a significant rise in the number and size of data centres. This has serious consequences for the energy sector. Meanwhile, technology firms are under growing pressure to make data centres more sustainable.

Microgrids – The Opportunities

Microgrids could be the answer in providing a more sustainable and efficient energy supply for data centres. While the concept of a microgrid can vary depending on how it is used, microgrids can be defined as small-scale, localised electrical grids that can operate independently or in conjunction with the main power grid. They can range in size from a university campus to a single home. As a global ecosystem, we’re seeing them used in different scenarios, from residential to large campuses. One interesting use case is MCE, a California Community Choice Aggregator, which has established a standardised setup for residential virtual power plants (VPPs), with OpenADR used as the utility connection to manage prices and consumption.

The feasibility and suitability of microgrids depends on factors like the specific requirements of the data centre, regulatory environment and the long-term goals for sustainability, resilience and cost-efficiency.

The real value is in helping overcome grid constraints and improving reliability by managing consumption and maintaining power during grid issues. For data centres that require uninterrupted operation, this ability to deliver resilience is critical.

Sustainability is another important advantage. By integrating renewable energy sources, such as solar or wind power, and energy storage, microgrids can significantly reduce a data centre’s carbon footprint. In terms of cost savings, they can reduce operational costs by utilising local power generation and demand-response strategies.

Microgrids are modular, which means they can grow as the data centre’s needs evolve. Plus, they face fewer regulatory hurdles compared to other options, like nuclear power, because they can operate mostly ‘net zero’ on the grid connection.

Microgrids – The Challenges

For data centre operators and investors trying to address power supply and stability issues, the use of microgrids can also mean challenges. The first of these is start-up costs. While we talk about a reduction in operational costs once up and running, set-up costs for microgrids can be high, requiring significant capital investment, especially for larger data centres – something important to bear in mind.

Sustainability may be a big plus point, but the use of renewables like solar and wind depends on the weather – and the weather can be fickle. This necessitates robust storage solutions, backup power or large grid connections to ensure reliability and stability at all times. It’s also important to stress that the effective integration of these various distributed energy sources and systems can be technically challenging, so working with good integrators and partners is paramount.

When it comes to powering data centres, microgrids are not the only option being considered. Alternatives like small modular nuclear reactors (SMRs) are also being touted as potential power sources. In my mind, SMRs are not in competition with microgrids but could become an important baseline component of them.

In their favour, SMRs provide a constant, high-capacity output, ideal for 24/7 operation, and a zero-emissions power source. Once operational, they offer stable costs over decades. But they also face challenges like stringent regulation and public opposition to development, while a nuclear plant, even a small-scale one, involves substantial upfront investment. This is aside from the risks around nuclear waste and safety.

The bottom line is that data centres are going to need a very high continuous supply of power, and microgrids offer options for a more resilient and responsive energy infrastructure. Decentralised power through a network of microgrids could help dynamically manage power loads and optimise renewable energy sources – especially as demands on the grid grow on the march towards an AI-powered future.

Learn more at openadr.org

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Sanofi: Supporting the World’s Health Through Data

This month’s cover story spotlights Sanofi, one of the world’s largest pharmaceutical companies. Putting the end-user – the patient – first requires an unwavering focus on R&D and continuous improvement. For the sake of the world’s health, every patient counts. So, when opportunities arose to improve services through data and advanced technology like AI, Sanofi brought in experts to steer and develop the journey.

Snehal Patel, Head of Global Data and AI Platform, takes a deep dive with Interface… “These innovations have fundamentally transformed Sanofi’s data and AI value chain,” says Patel. “It’s enabled scalable and efficient development across the organisation. We now have a far more agile development environment that supports the broader AI initiatives at Sanofi.”

Langham Hospitality Group: Cybersecurity Underpinning Guest Excellence

Anson Cho, Director of Information Security & Data Protection at Langham Hospitality Group, discusses the pandemic’s silver lining and the development of a proprietary matrix to embed security into the heart of operational excellence.

“Our strategy wasn’t about over-engineering our systems to match the spend of a global financial institution; it was about increasing our defensive maturity so we are never an easy mark,” says Cho. “In cybersecurity, you want to ensure your barriers are sophisticated enough that attackers move on. We focus on staying ahead of the curve and continuously evolving so that our security posture remains a formidable deterrent.”

FNB: Redefining Data Science in Commercial Banking

Yudhvir Seetharam, Chief Analytics Officer at South Africa’s First National Bank (FNB), on a data science journey characterised by curiosity, culture and the drive for a competitive edge.

“Ours is a holistic approach focusing on the customer,” he explains. “Understanding the context of each customer journey and then using that context so that when we interact with you, we’re able to drive the right conversation with the right customer, at the right time, through the right channel and for the right reason. These ‘five rights’ make our interactions with clients more impactful than a spray and pray approach.”

Click here to read the latest edition!

  • Cybersecurity in FinTech
  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Infrastructure & Cloud

Ian Franklyn, Chief Revenue Officer at Mainstreaming, on why delivering exceptional streaming experiences will require not just technology, but also collaboration and synergy

Streaming video has firmly established itself as the dominant force shaping global internet traffic. From premium live sports and breaking news to on-demand entertainment libraries, audiences now expect seamless, high-quality viewing experiences on any device, at any time. For leaders across media, telecoms, and technology, the challenge is no longer about enabling streaming. It is about sustaining it at scale while preserving reliability, efficiency and profitability.

Yet, despite the central role video plays in today’s digital economy, the underlying delivery model remains fundamentally fragmented.

Many broadcasters and OTT platforms still rely heavily on centralised, third-party content delivery networks (CDNs). These operate largely outside internet service provider (ISP) infrastructures. This model has supported the growth of streaming over the past decade. However, it is increasingly misaligned with current demand patterns, especially during large-scale live events.

The result is a structural inefficiency that affects every stakeholder in the ecosystem. And the industry can no longer ignore it.

The Growing Cost of Disconnection

When millions of viewers tune in simultaneously, vast volumes of video data must travel across multiple interconnected networks before reaching end users. This often means duplicating the same streams across long-haul routes, placing unnecessary strain on transit links and core infrastructure.
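
To put rough numbers on that scale – the audience size and bitrate here are illustrative, not Mainstreaming data – a single major live event can translate into tens of terabits per second of simultaneous demand:

    concurrent_viewers = 5_000_000          # assumed audience for a major live event
    bitrate_mbps = 8                        # a typical HD adaptive-bitrate rendition
    aggregate_tbps = concurrent_viewers * bitrate_mbps / 1_000_000
    print(f"{aggregate_tbps:.0f} Tbps of simultaneous delivery")    # 40 Tbps

    # Delivered from centralised CDNs, much of that traffic crosses transit and
    # peering links before reaching viewers; cached inside ISP networks, it stays local.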

For ISPs, this translates into rising traffic volumes without proportional financial return. Networks become congested, costs increase, and visibility into traffic flows remains limited.

Broadcasters and OTT platforms face a different but equally critical challenge. With limited control over last-mile delivery, performance becomes unpredictable at precisely the moments that matter most. Buffering, latency, and degraded video quality directly impact user experience, driving churn and damaging brand reputation.

Ultimately, the end user bears all the consequences. Even minor disruptions during peak events can cause frustration and dissatisfaction. This consequently erodes trust, impacting both service providers and content owners in an increasingly competitive market.

Rethinking Delivery: Moving Closer to the Edge

Addressing these challenges requires a fundamental rethink of where and how video is delivered.

Rather than relying solely on centralised infrastructure, delivery capacity can be deployed directly within ISP networks, closer to the end user. This edge-based approach localises traffic, reducing the distance data must travel and fundamentally improving efficiency.

The benefits are immediate. By placing content within ISP networks, duplicated traffic across transit routes is minimised, congestion in core networks decreases, and latency is reduced. At the same time, both ISPs and content providers gain greater visibility and control over performance.

This model is particularly valuable for live streaming, where demand is highly concentrated and unpredictable. Traditional CDN architectures, designed for distributed but relatively predictable traffic patterns, are simply not built to handle sudden spikes in concurrent viewership.

Edge delivery networks purpose-built for video, by contrast, enable capacity to be positioned dynamically where it is needed most. This ensures that even the largest live events can be delivered with consistency, reliability, and low latency.

From Delivery Burden to Shared Value Creation

The evolution toward edge-based video delivery represents a fundamental shift for ISPs as well as for broadcasters and OTT platforms.

For ISPs, streaming has long been treated as a cost centre: a growing source of bandwidth consumption that drives infrastructure investment without directly contributing to revenue. As traffic volumes continue to rise, this model becomes increasingly unsustainable both economically and operationally.

At the same time, broadcasters face a different challenge. How can they efficiently manage highly variable demand? Particularly during large-scale live events where audience peaks are both massive and unpredictable. And where failure is not an option.

Embedding video delivery capabilities within ISP networks changes this dynamic for both sides.

For ISPs, localising traffic reduces reliance on upstream transit. This alleviates pressure on core infrastructure, enabling more efficient use of existing capacity. It also opens new monetisation opportunities, allowing them to move beyond being passive carriers and play an active role in delivering premium streaming experiences.

For broadcasters and OTT platforms, the benefits are equally strategic. Edge-based delivery enables them to scale live events more efficiently, activating capacity where and when it is needed rather than overprovisioning for peak demand. This results in more predictable performance, consistent quality of experience, and improved cost efficiency.

In this shared model, video delivery is no longer a burden for one side or a risk for the other. It becomes a coordinated effort, aligning incentives and generating value for all the stakeholders involved.

An Ecosystem that Works in Synergy

Realising this opportunity requires more than technology. It demands a shift toward a more collaborative operating model: a true ‘Better Together’ approach.

This means deeper alignment across the ecosystem, bringing together ISPs, broadcasters, OTT platforms, and technology providers around shared objectives. Instead of operating in silos, each stakeholder contributes to a unified delivery framework designed to meet the demands of modern streaming.

In practical terms, this approach increases transparency, improves performance, and aligns both technical and commercial incentives. Integrating delivery capacity within ISP networks creates a stronger foundation for long-term growth, enabling more efficient scaling as demand continues to rise.

The result is a more resilient and adaptable ecosystem. One capable of supporting increasingly complex and large-scale streaming experiences, and responding dynamically to future demand.

Building the Next Generation of Streaming Infrastructure

The misalignment between how video is consumed and how it is delivered is no longer sustainable, and delaying change will only amplify the problem.

As streaming evolves, new formats such as ultra-high-definition video and low-latency interactive services will place even greater demands on network infrastructure. At the same time, audience expectations will continue to rise, leaving little tolerance for disruption.

Meeting these challenges requires a shift toward integrated, edge-driven architectures supported by strong ecosystem partnerships.

By bringing video delivery closer to the viewer, the industry has an opportunity to redefine both the economics and performance of streaming. More importantly, it can move beyond the limitations of fragmented models toward a more efficient and scalable future. Ultimately, delivering exceptional streaming experiences will require not just technology, but also collaboration and synergy, aligning the entire ecosystem to operate as one.

Learn more at mainstreaming.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Richard Ford, Chief Technology Officer at Integrity360, on why cybersecurity must move beyond control and embrace trust

Cybersecurity has long been focused on building walls, but the biggest threat is already inside. Today, insider risk accounts for nearly half of all data breaches. This isn’t just about malicious actors; it’s about regular employees and trusted contractors who make simple, costly mistakes.

Remote and hybrid working has only intensified the problem. With teams distributed and work happening across cloud platforms and collaboration tools, it’s harder than ever to track what’s happening, let alone why. Although AI tools promise efficiency, they also introduce new vulnerabilities: employees pasting code into chatbots or bypassing corporate tools to meet deadlines – all seemingly innocent, but highly risky.

Insider Risk

Ransomware gangs know this and are now skipping the technical breach altogether and going straight to the source – a company’s insiders. Whether through bribery or social engineering, attackers are finding that humans can be the weakest link in even the most well-defended environments. Despite this, most security budgets still focus outward.

Traditional tools like data loss prevention (DLP) struggle to keep up with today’s dynamic and unpredictable user behaviour. Meanwhile, simulated phishing tests and punitive training schemes often breed resentment, not resilience. It’s time to rethink the model.

Human Error, Human Fix

We need to stop treating employees as the problem and start making them part of the solution. Enter Human Risk Management (HRM), a behavioural approach to cybersecurity that recognises the complexity of modern work. HRM tools monitor real-world user behaviour, detect anomalies in context, and deliver just-in-time nudges to prevent risky actions before they happen. Instead of punishing mistakes, they help users avoid them in the first place.

Of course, technology alone won’t fix the issue; culture is key. Leadership must champion security as a shared responsibility, not an IT rulebook. Success should be measured by how quickly employees improve, not how often they slip up. Awareness campaigns need to be practical and rooted in real-world behaviour.

Organisations also need to understand how digital transformation has changed the risk landscape. Shadow IT is no longer a fringe issue; it’s how work gets done. Whether it’s a developer using an AI plugin or a marketer sharing files via a personal drive, employees will always find the fastest path to productivity. Security must meet them there, not block the way.

Cybersecurity Built on Trust

The smartest businesses are those that treat identity like infrastructure, and behaviour like a vital data stream. They invest in tools that adapt to people, not the other way around. This means moving away from a surveillance approach, embracing the nuance of human error and designing systems that support them.

In a world where threats are increasingly internal and AI is both a risk and a tool, cybersecurity can no longer be about control. It must be about trust, and that starts with understanding the humans behind the keyboards.

Learn more at integrity360.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy
  • Infrastructure & Cloud

Pierre Noel, Field Chief Information Security Officer at Expel, on why security with community-based governance is a key business pillar that better positions organisations to become more resilient and target growth

It’s been a particularly rocky start to 2026 for the global cybersecurity landscape. From the Substack data breach to PayPal credential-stuffing attacks in February, we are not looking at IT failures alone. These attacks are balance-sheet events: direct assaults on business value, triggering remediation costs and long-term impacts on financial health. Compounded by the conflict with Iran and its potential ramifications in the cyber realm, it’s more important than ever for the C-suite to be aligned on cybersecurity priorities.

Despite this, a glaring disconnect remains in planning and execution. Expel’s research found that while 85% of finance leaders view cybersecurity as a key component of business planning, only 40% express full confidence in security’s ability to align with business strategy. To bridge this gap, CISOs must move from reporting on activity to reporting on resilience and unit cost.

Translating Alert Volume Into Unit Cost

CISOs must change how they present the value of their operations. CFOs are largely indifferent to technical metrics like ‘millions of blocked pings’ or ‘SOC alert volume’ – to a finance leader, an alert is simply another form of disruption to daily operations.

To fix this, CISOs should introduce a ‘unit cost of protection’. By breaking down security spend into the cost required to protect a single transaction or business unit, CFOs can understand and manage it in familiar terms. A tiered approach works best here: high-risk business units justify higher protection costs than low-risk ones. This allows CFOs to treat security as a scalable operational expense rather than a black hole of additional tooling – the kind of framing that also resonates in a boardroom.
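
As a simple illustration of the calculation, with entirely hypothetical figures:

    business_units = {
        # unit: (annual security spend in £, transactions per year, risk tier)
        "payments":       (1_200_000, 40_000_000, "high"),
        "loyalty_portal": (  150_000, 10_000_000, "low"),
    }

    for unit, (spend, transactions, tier) in business_units.items():
        pence_per_txn = spend / transactions * 100
        print(f"{unit}: {pence_per_txn:.1f}p per transaction ({tier}-risk tier)")
    # payments: 3.0p per transaction (high-risk tier)
    # loyalty_portal: 1.5p per transaction (low-risk tier)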

Mapping Investment to Business Risk Exposure

Expel’s research shows that just 43% of finance decision-makers are confident that security can prioritise investments based on risk, and only 46% are confident that security can deliver cost-efficient solutions. To move in the right direction, CISOs should shift from ‘vulnerability management’ to thinking about ‘business risk exposure’, which requires a different view of how threats unfold over time.

It’s all about asking the right questions. Instead of requesting more firewalls to protect a specific timeframe, start asking for the cost of securing diverse digital ecosystems across an extended risk window. The 2026 Winter Olympics is a good example: Russian-led cyber campaigns began raising concerns months before a single athlete arrived in Italy, proving that risk isn’t a one-day event but an ongoing operational cost.

For European organisations, this framing is increasingly non-negotiable. While NIS2 and DORA help make the cost of under-investment concrete and quantifiable, the upcoming Cyber Resilience Act (CRA), with key reporting requirements starting in September 2026, extends this pressure to anyone manufacturing or selling digital products in the EU. Even for purely domestic UK entities, the new UK Cyber Security and Resilience Bill is moving the goalposts toward these same high standards. Ultimately, CFOs must understand that cybersecurity isn’t just about preventing loss; it’s a prerequisite for safe and secure growth.

The Reputational Multiplier

So those are the questions to ask, but how do CISOs deal with the ‘unknown unknowns’, specifically long-term brand damage? While compliance fines under NIS2 or DORA may be straightforward (and important) to model, they rarely represent the full scope of the potential damage. In such scenarios, CISOs should propose a reputation multiplier: a framework for quantifying the financial fallout of brand damage in a language CFOs know and trust, looking past immediate recovery costs to factor in the long-term implications of re-establishing market trust.
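
One way such a multiplier could be expressed, with the framework and figures purely illustrative rather than a published Expel model:

    direct_costs = {                          # immediate, relatively easy to quantify (£)
        "incident_response":     500_000,
        "regulatory_fines":    2_000_000,
        "customer_notification": 300_000,
    }
    reputation_multiplier = 1.8               # assumed uplift for churn, brand repair and lost deals

    direct_total = sum(direct_costs.values())
    estimated_full_impact = direct_total * reputation_multiplier
    print(f"Direct: £{direct_total:,}  Estimated full impact: £{estimated_full_impact:,.0f}")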

The 2026 CarGurus breach illustrates this well. It impacted 12 million users, and the cost wasn’t purely technical; it also came from the stock price dip and the marketing spend required to repair the brand. For UK companies, where regulatory scrutiny is heightened, that multiplier effect is even more pronounced. This is the language of a CFO, and it helps CISOs better translate the urgency and relevance of a strong cybersecurity posture.

Standardising the Language of ROI

Closing the gap between CFOs and CISOs needs more than just better data; it needs a shared vocabulary. By standardising the language of ROI, CISOs transform cybersecurity from a vague insurance policy into a transparent value driver fully trusted by finance teams. Move away from complicated defensive jargon toward a unified framework of unit costs, and the gap between the CISO and CFO starts to close.

Security has become a key pillar of business operations, and in the current threat environment, it’s genuinely a community-based governance issue. The organisations that get this right aren’t just more resilient. They’re better positioned to grow.

Learn more at expel.com

  • Cybersecurity
  • Cybersecurity in FinTech
  • Digital Strategy
  • Infrastructure & Cloud

Chris Larsen, Chief Technical Officer – atNorth, on shaping ecosystems that support both digital progress and the preservation of our natural environment for future generations

The AI industry continues to grow seemingly exponentially. With 92% of companies planning to increase their AI investments in the next three years, demand for the high-density digital infrastructure required to support these types of workloads is unsurprisingly at an all-time high.

Data centres have always needed a significant amount of electricity to power and cool their computer equipment. Yet the sheer quantity of data to be processed for AI and other high performance computing – such as financial trading calculations and simulation technologies – necessitates a colossal amount of energy. For example, a report from the International Energy Agency states that data centres will use 945 terawatt-hours (TWh) in 2030, roughly equivalent to the current annual electricity consumption of Japan.

At the same time, there is growing pressure for all organisations to comply with ESG frameworks. The introduction of regulations such as the EU’s Corporate Sustainability Reporting Directive (CSRD) mandates the publication of carbon footprint disclosures. This leaves many businesses with a difficult conundrum to solve: how to balance digital advancement whilst mitigating environmental impact?

Once a consideration for local IT teams, the choice of a data centre partner is now at the forefront of balancing these two critical trends and is beginning to garner boardroom attention.

Data centres that are designed with environmental responsibility and community integration in mind can act as the central hub of a thriving society, an ‘ecosystem’ that supports long-term sustainability and regional economic development.

Location and Design

Where a data centre is built, and how, is fundamental to its efficiency and sustainability. AI-ready facilities often require rapid scaling in line with customer demand, so access to ample suitable land is essential. Modular designs allow for faster builds and easier adaptation to new innovations in cooling and hardware technologies.

Power and connectivity are also critical. Many regions struggle to offer the necessary renewable energy and high-speed network capacity. In contrast, the Nordics provide an ideal environment: an abundance of renewable energy, a cool natural climate that enables more energy-efficient cooling techniques and excellent connectivity.

As a result, the presence of data centres can promote local investment in power, connectivity and electrical infrastructure that benefits the whole community. For example, atNorth’s ICE03 data centre in Akureyri, Iceland, facilitated the development of a new point of presence (PoP) for Farice, which operates submarine cables linking Iceland to mainland Europe. This enhances telecom reliability and strengthens digital infrastructure across the region.

Data centres can also support the stability of local power through grid balancing services – something that is integral to the future design of atNorth’s data centres.

Decarbonisation and Circular Partnerships

Data centres are incredibly energy-intensive, and so many operators are investing in ways to reduce their carbon footprint. These include utilising the most efficient infrastructure and cooling technologies.

atNorth goes one step further and has committed to sourcing heat reuse partnerships for all of its new data centre campuses. This means that waste heat generated during the infrastructure cooling processes can be captured and redirected to support nearby businesses and homes. In Finland, for example, a partnership has been formed with Kesko Corporation that will utilise waste heat from atNorth’s new FIN02 campus to heat a neighbouring branch of one of its stores.

These types of initiatives essentially enable data centres to act as a decarbonisation platform for their clients’ IT workloads, helping them meet environmental targets and reducing running costs too. This is a key differentiator for businesses such as atNorth client and partner Nokia, which has complex technical requirements and stringent sustainability goals.

Responsible Operations

Beyond environmental responsibility, data centres can be a positive force in the communities in which they operate. They create skilled jobs, drive improvements in local infrastructure, and often spark growth in hospitality, retail, and leisure services. At atNorth, we prioritise hiring locally and actively support education, charitable, and community initiatives in the regions in which we operate.

Similarly, care for the natural surroundings is pivotal to successful data centre ecosystem integration. For example, atNorth has set aside part of its DEN02 site in Denmark for biodiversity efforts, installing insect monitors to track changes in insect abundance and diversity throughout the site’s development.

As digital demand continues to grow, so does the need for responsible and sustainable development. High-performance computing can, and should, advance without compromising environmental integrity. By partnering with data centres that prioritise environmental stewardship and social responsibility, we can help shape ecosystems that support both digital progress and the preservation of our natural environment for future generations.

Learn more at atnorth.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud
  • Sustainability Technology

Nicole Reader, Head of Technology Solutions & Delivery at The Bunker (part of the Cyberfort Group), on finding a measured path forward for the future of cloud

For more than two decades, UK organisations have embraced the cloud as the default model for digital growth. Hyperscale platforms have offered flexibility, speed and a route to innovation that would once have required years of capital investment. Cloud first became the business mantra. Cloud native became the ambition. Few stopped to ask what this meant for long term control. Today that question is becoming unavoidable.

Geopolitical relationships are shifting at pace. Trade tensions, regulatory divergence and new data access laws are reshaping the digital landscape as quickly as any technological change. At the same time, businesses are generating and storing more information than ever before. AI tools, collaboration platforms and SaaS applications are accelerating data creation at a rate that is testing infrastructures, supply chains and budgets alike.

In that context, many UK organisations are starting to ask a difficult question. When we moved to the cloud, did we quietly export more control over our data than we realised? The uncomfortable answer in many cases is yes.

The Assumption of Cloud Control

A significant proportion of UK businesses rely on global services, whether hyperscalers such as Amazon Web Services and Microsoft Azure or SaaS platforms headquartered overseas. These providers are sophisticated, resilient and often highly secure. However, their global footprint means that data is frequently stored, processed or managed beyond UK borders.

The challenge is that many boards assume that if data is accessible from the UK, or if a provider has a UK presence, it remains firmly under UK control. This assumption is often incorrect.

There is a crucial difference between data location and legal jurisdiction. Data residency refers to where data is physically stored. Data sovereignty refers to who ultimately governs access to that data. Those two concepts are not interchangeable.

Legislation such as the US Cloud Act demonstrates why this matters. Under certain circumstances, US authorities can compel US headquartered providers to provide access to data, even if that data is stored outside the United States. The geographic location of a data centre does not automatically determine who can lawfully demand access.

Boards often conflate these terms, believing that selecting a UK service resolves sovereignty concerns. In reality, the corporate structure of the provider, contractual arrangements and cross border processing activities can all shape the legal framework that applies.

This is not an abstract legal debate. It is a question of operational control, regulatory exposure and risk appetite.

The Convenience Compromise

The rise of public cloud was driven by many compelling advantages. Flexibility, scalability and rapid deployment transformed how businesses launched products and expanded into new markets. For many organisations, the cost of building and maintaining their own infrastructure was prohibitive and the hyperscalers offered an attractive alternative at a great price.

However, that convenience came with trade-offs that were not always fully understood at the time. Cloud contracts can be complex, and consumption-based pricing models include ingress and egress charges, API calls and a range of ancillary costs that can quickly exceed initial forecasts. It is not uncommon for organisations to reach the midpoint of their financial year and discover their cloud budget has already been used.
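
A simple illustration of how those charges compound over a year; the unit prices below are placeholders, not any provider’s rate card:

    monthly_egress_tb = 120                 # data leaving the cloud each month
    egress_price_per_gb = 0.08              # assumed £/GB
    api_calls_millions = 900
    api_price_per_million = 0.40            # assumed £/million requests

    monthly_cost = (monthly_egress_tb * 1024 * egress_price_per_gb
                    + api_calls_millions * api_price_per_million)
    print(f"£{monthly_cost:,.0f} per month, £{monthly_cost * 12:,.0f} per year")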

Meanwhile, operational design decisions made years ago may not have been stress tested against today’s regulatory expectations or geopolitical realities. Many mid-market IT teams have spent the past decade maintaining estates rather than redesigning them. In some cases, institutional knowledge has not kept pace with the evolution of cloud services and their associated risks.

The result is a landscape in which data has been distributed widely, often for operational reasons, but without a holistic understanding of the sovereignty implications.

Repatriation is Not a Silver Bullet

In response, there has been a growing push towards data repatriation and sovereign cloud offerings. European initiatives are seeking to create regional alternatives to US-dominated platforms. In the UK, there have been calls from government to expand domestic data centre capacity to retain greater control over national data assets.

The instinct is understandable, particularly for government, defence and heavily regulated sectors where sovereignty can become a non-negotiable requirement. However, it would be naïve to assume that bringing data back to the UK automatically makes it secure or resilient.

Local does not necessarily mean safe. High profile breaches over the past year have affected organisations across multiple jurisdictions, regardless of where their infrastructure is hosted. Security is not guaranteed by postcode.

There are also practical constraints. Data volumes are expanding rapidly, fuelled by AI workloads and increasing digitalisation. Hardware supply chains are under pressure, with significant demand driven by hyperscale AI investments. Price volatility is already evident, with some organisations seeing substantial cost increases within weeks.

Simply building more UK data centres does not eliminate capacity constraints or environmental considerations, particularly around power and cooling.

Furthermore, many businesses rely on global platforms to serve international customers and partners. A purely national approach can undermine interoperability and performance. For most organisations, the right answer will involve a hybrid strategy rather than wholesale repatriation.

From Technical Detail to Board Level Risk

What has changed is not simply the technology, but the level at which these decisions must be made.

Data sovereignty is no longer a technical footnote for the IT department. It is a board level risk issue. Directors must understand where critical data is stored, where it is processed and which legal regimes can assert authority over it. They must assess whether current arrangements align with the organisation’s risk appetite and regulatory obligations.

This is particularly acute in sectors such as financial services, healthcare and defence, where the sensitivity of data and the scrutiny of regulators are intensifying. For these organisations, sovereignty and security are intertwined. Compromises made for convenience or short-term cost savings can carry significant long-term consequences.

Security itself must be treated as a foundational approach rather than an add-on. Too often, security controls are bolted on after operational decisions have been made. Minimum standards are implemented, arbitrary certifications are obtained and compliance boxes are ticked. While certifications can provide useful benchmarks, they do not replace rigorous design and ongoing validation.

If data is brought back onshore, but not properly segregated, monitored and protected, the sovereignty objective is completely undermined. There is little value in regaining geographic control if the underlying environment remains vulnerable.

The Business Case Reality

It would be unrealistic to ignore commercial pressures. For many mid-market organisations, cost remains a primary driver of decision making. Risk appetite is frequently calibrated against budget constraints. The perfect solution is rarely affordable.

That is why compromise becomes central. The critical question is not whether to compromise, but where. Does an organisation prioritise flexibility over jurisdictional control? Does it accept higher costs to secure local hosting? Does it rely on hyperscale security capabilities while accepting overseas governance frameworks?

There is no universal answer. The correct balance depends on the nature of the data, the regulatory environment and the strategic objectives of the business. A small retail operation will have different requirements from a growing fintech or a defence contractor. Supplier selection must reflect that risk profile. Not all cloud or data centre providers are equal in capability, assurance or sector expertise.

Boards should therefore ask their providers some direct questions. Where exactly is our data stored and where is it processed? Which legal jurisdictions apply, and under what circumstances could external authorities demand access? Who within your organisation has access to data, and how is it segregated from other customers? What is the exit plan, and how do we ensure data is fully returned and deleted at the end of a contract?

These are not confrontational questions. They are governance essentials.

A Measured Path Forward

As a result, the UK should not retreat from global cloud ecosystems, nor should it blindly assume that everything must be repatriated. The objective is not isolation, but informed control.

Where sovereignty is genuinely critical, particularly in government and national security contexts, local hosting and specialist providers may be essential. In other scenarios, public cloud may remain the most effective platform, provided its legal and operational implications are fully understood and managed.

The most significant risk today is not that UK businesses have embraced the cloud. It is that many have done so without fully mapping the sovereignty, jurisdictional and security consequences that come with relinquishing control of data.

As data volumes grow and geopolitical uncertainty continues, that gap in understanding becomes a strategic vulnerability. The cloud has delivered extraordinary value. Now, all these years later, it demands a more mature conversation.

Convenience built the digital economy. Control will define its resilience.

Learn more at thebunker.net

  • Cybersecurity
  • Digital Strategy
  • Infrastructure & Cloud

Leonardo Boscaro, EMEA Sales Leader at Nutanix Database, on why sovereignty requires repeatable, compliant database operations and recovery across hybrid multicloud environments

In conversations with customers, infrastructure leaders are being asked to deliver more control with the same people, stronger compliance with less tolerance for error, and higher resilience in environments that are objectively more heterogeneous than they were even a few years ago. Expectations continue to rise, but the operating models used to run critical systems haven’t kept up.

This pressure shows up first at the database layer because databases sit at the centre of mission-critical services while still being managed through manual processes, fragmented tooling and a heavy reliance on specialist knowledge. In many organisations, when availability, security and compliance are under scrutiny, this combination creates exposure very quickly.

Database-Dedicated Platforms

The shift we now see in regulated organisations is toward database-dedicated platforms, where the operating model is standardised through approved templates, guardrails, automated workflows and built-in auditability. In practice, this means treating database workloads as a dedicated domain, with infrastructure and lifecycle operations designed together rather than as an add-on to a general-purpose environment. This approach depends on having a standardised operational layer for database lifecycle management and recovery that works consistently across hybrid and multicloud environments.

And in regulated environments, what matters is not only being compliant, but also being able to demonstrate it repeatedly. When provisioning, patching, and recovery depend on tickets, tribal knowledge, and one-off scripts, controls become hard to test. Furthermore, audit trails are incomplete, and resilience turns into a matter of confidence rather than capability.

How Complexity Crept In

Most enterprise database estates grew through sensible decisions made at different points in time. A platform was added to meet a new requirement, a legacy system could not be moved, or a new tool solved a specific operational gap. Each step made sense in isolation. Over time, however, teams found themselves managing dozens or hundreds of databases across multiple engines and environments, each with its own processes for provisioning, patching, recovery and monitoring.

What they face now is inefficiency and operational fragility. Databases are where control, auditability and resilience intersect. So, when processes are manual or inconsistent, the risk surface expands quickly. In regulated industries, this shows up in audit pressure, long recovery times and an uncomfortable dependency on a small number of specialists.

Why Databases Expose the Cracks First

Many infrastructure leaders we speak to ask why databases should be their concern at all. Traditionally, databases belonged to DBA teams, while infrastructure focused on platforms and capacity. Unfortunately, it’s not that simple anymore.

Today, infrastructure and security leaders are under constant pressure to improve compliance, reduce risk exposure and maintain availability with fewer people and less tolerance for error. Databases sit directly in that line of responsibility. Patching windows, backup failures or untested recovery plans are operational risks with business consequences.

What becomes clear very quickly is that automation alone does not solve this. Many organisations have invested heavily in scripts and bespoke workflows to manage database lifecycles. While these efforts reduce pressure in specific areas, they often create new complexity elsewhere, particularly when people change roles or environments scale.

Standardisation, Not Scripting, is the Real Shift

The real breakthrough comes when organisations move from automating tasks to standardising the operating model itself. This means treating database operations as a productised capability, with approved templates, guardrails and repeatable workflows built in from the start.

When provisioning, patching, cloning, and recovery follow a consistent model, compliance becomes part of the process rather than something validated afterwards. Human error is reduced because the system guides operations rather than relying on memory or documentation. And audit readiness improves because actions are traceable and predictable.

This is why many organisations are moving away from bespoke automation and toward standardised operating models, where infrastructure, lifecycle, and governance are designed together. 
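
As a minimal sketch of what provisioning as a productised workflow can look like – the template names, guardrails and audit log below are hypothetical, not a specific product’s API:

    import json
    from datetime import datetime, timezone

    APPROVED_TEMPLATES = {
        "postgres-ha": {"engine": "postgresql", "version": "15", "replicas": 2,
                        "backup_policy": "daily+wal", "encryption": "aes-256"},
    }
    AUDIT_LOG = []

    def provision(template, owner, environment):
        # Guardrail 1: only pre-approved templates can be deployed.
        if template not in APPROVED_TEMPLATES:
            raise ValueError(f"'{template}' is not an approved template")
        # Guardrail 2: production changes are restricted to the right group.
        if environment == "production" and not owner.endswith("@dba-team"):
            raise PermissionError("production provisioning is restricted to the DBA group")
        spec = {**APPROVED_TEMPLATES[template], "owner": owner, "environment": environment}
        # Every action leaves an audit record, so evidence exists by default.
        AUDIT_LOG.append({"action": "provision", "spec": spec,
                          "at": datetime.now(timezone.utc).isoformat()})
        return spec

    provision("postgres-ha", "alice@dba-team", "production")
    print(json.dumps(AUDIT_LOG, indent=2))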

Recoverability Turns Theory Into Reality

Recoverability is the stage at which operating models are tested under pressure. Many organisations technically have disaster recovery in place, but testing it is complex, disruptive and often avoided altogether.

For mission-critical services, particularly in financial services or the public sector, this is not acceptable. Recovery needs to be a standard operational capability, not a specialist exercise dependent on a few experts and fragile runbooks.

By embedding recovery workflows into the same platform used for everyday database operations, testing becomes simpler and more frequent. Switchovers, failovers and restores can be executed through guided processes, with far less room for error. This is not about faster failover, but about confidence, credibility, and the ability to demonstrate control.
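
A sketch of how a routine failover drill might capture its own evidence as it runs, rather than remaining a one-off specialist exercise (all names hypothetical):

    from datetime import datetime, timezone

    def run_failover_drill(database, standby_region):
        # Evidence is captured as the drill runs, not reconstructed afterwards.
        evidence = {"database": database, "standby": standby_region,
                    "started": datetime.now(timezone.utc).isoformat(), "steps": []}
        for step in ("checkpoint primary", "promote standby",
                     "verify application connectivity", "fail back"):
            # In a real platform each step would call the operational API;
            # here the step is simply recorded as having run.
            evidence["steps"].append({"step": step, "status": "ok"})
        evidence["completed"] = datetime.now(timezone.utc).isoformat()
        return evidence                     # stored alongside audit logs for regulators

    print(run_failover_drill("payments-db", "standby-site"))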

Sovereignty is Becoming Operational Autonomy

We all know how important sovereignty is, yet it’s often discussed in terms of data location rather than dependency and control. Real sovereignty must factor in where the data resides, who ultimately controls the operating model and under which jurisdiction that control sits.

In this context, hybrid strategies work but only if they preserve consistency. Running databases across on-premise and cloud environments without a common operating model simply moves complexity from one place to another. True autonomy comes from having one set of standards, workflows and controls that travel with the workload, regardless of where it runs.

Our customers want the freedom to adapt to regulatory, geopolitical or commercial change without rebuilding governance and operational processes each time. This has made portability and consistency critical.

A Database-Dedicated Platform, Not Just Infrastructure

What emerges from all of this is a shift in how database platforms are defined. Beyond running databases on infrastructure, databases must now be delivered through a dedicated platform experience: one where lifecycle automation, governance and recoverability are baked in, not added later.

When you take a platform approach, you can support multiple database engines, span hybrid environments and provide a single operational plane for teams. This allows infrastructure leaders to move beyond firefighting and towards standardised, compliant operations that scale.

Independent economic analysis from Forrester’s Total Economic Impact study supports what many organisations are already seeing in practice. When database operations are standardised, the benefits show up quickly: faster delivery, less manual effort and more consistent controls reduce day-to-day operational friction and lower risk, often generating measurable returns earlier than traditional infrastructure-only programmes.

The Modern Mandate for Infrastructure Leaders

For today’s CIOs, CTOs and CISOs, the challenge is no longer where databases should run, but whether they are governed, recoverable and consistent by design. As digital services expand, as AI initiatives place new demands on data and as regulatory scrutiny increases, operational discipline becomes a leadership responsibility. In regulated environments, credibility is earned through evidence, with regulators and customers, and in the public sector it is earned with citizens.

Learn more at nutanixstore.co.uk

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Children’s Mental Health Week 2026 spotlights the theme ‘This is My Place’. Tech charity founder James Tweed is calling on the UK’s IT departments to donate surplus laptops and devices to help some of the country’s most overlooked vulnerable children.

Rebooted

Tweed founded Rebooted to support the children of prisoners, providing laptops so they can learn at home.

“Having a parent in prison can be traumatic and often leads to a child struggling at school,” says Tweed. “If that child then falls behind digitally or is excluded from education, their long-term prospects narrow dramatically. It’s a vicious circle and we need to break it early.

“For many of these children, school is already unstable. If they also lack access to reliable technology at home, they’re starting from behind. In 2026, digital access isn’t a luxury, it’s foundational.”

A Practical Solution

With businesses refreshing hardware on regular cycles, Tweed believes IT leaders are sitting on a practical solution.

“Across the UK, thousands of perfectly usable laptops are sitting in storage cupboards or heading for recycling. Those devices could transform a child’s ability to learn, revise and stay connected to school.”

Crucially for IT heads, data security is central to the model. All donated devices are securely wiped and processed by Rebooted’s technology partner, GeTech, using certified data erasure procedures.

“Security is non-negotiable,” assures Tweed. “Every device is professionally wiped to recognised standards before it’s redeployed. IT teams can donate with complete confidence.”

Children’s Mental Health Week

Children’s Mental Health Week, launched in 2015, focuses this year on belonging and ensuring young people feel they have a place in their communities. Tweed argues that digital access plays a direct role in that sense of inclusion.

“We talk a lot about wellbeing and belonging,” he says. “But if a child can’t access homework platforms, revision tools or basic digital resources, they quickly feel excluded. Technology can either widen the gap — or help close it.”

Rebooted is now urging CIOs, IT directors and managed service providers to review surplus stock and consider structured donation programmes as part of their ESG and sustainability strategies.

“This is practical, measurable impact,” Tweed adds. “Instead of gathering dust, those devices can help ensure a vulnerable child can genuinely say, ‘This is my place.’”

IT leaders interested in donating surplus equipment can find more information at: rebooted.me

  • Cybersecurity
  • Digital Strategy
  • Infrastructure & Cloud
  • People & Culture

Interface issue 69 is live featuring Haleon, State of Montana, Techcombank, Publicis Sapient, Oakland County, Snowflake and much more

Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

Haleon: A Bold Business Evolution

Digital & Tech Head Soumya Mishra reveals how the group behind power brands like Sensodyne, Panadol and Centrum broke away from GSK and transformed so successfully. Haleon is itself a large organisation, so separating from a huge parent company was a big challenge… “It was the biggest deal of its kind and the first to happen in this industry,” Mishra adds. “We were separating to create simplification, but we had to work hard to achieve that. There were a lot of processes and policies that didn’t make sense and needed an overhaul. This had to be backed by a culture shift that was properly communicated.”

State of Montana: Cybersecurity Through A New Lens

State of Montana CISO, Chris Santucci, explains the organisation’s drastic shift towards security, and how his team has become a shining example within the wider IT centralisation sphere… “Fixing security vulnerabilities came down to having built enough social capital and trust to correct. I like to stay slightly uncomfortable as a CISO and as a human, to keep challenging myself to deliver better services and greater value. The mission is to ensure Montana citizens get the support they need while keeping services secure and protecting data.”

Publicis Sapient: Driving Banking Transformations with AI

Financial Services Director Arunkumar Gopalakrishnan reveals how Publicis Sapient is developing the playbook for delivering successful AI-led digital transformations across the financial services landscape. “Working with Generative AI today feels like standing on a new frontier. It keeps us on our toes, but it’s also what drives us – to stay relevant, deliver outcomes and connect both worlds of business and technology.”

Techcombank:

Chief Strategy & Transformation Officer, PC Chakravarti explores the operating model, Data & AI foundations, culture and talent playbook, and the partnerships turning ambition into market leading outcomes at Techcombank in Asia. “Tech is not the limiting factor – it’s about supporting people and talent to leverage capabilities to enhance business models.”

Oakland County:

Sunil Asija, Director of Human Resources at Oakland County, talks building trust with collaboration and becoming employer of choice. “To build trust the culture needs to change from top to bottom, and it needs everyone to join in that good fight.”

Click here to read the latest edition!

  • Data & AI
  • Digital Strategy
  • Fintech & Insurtech
  • Infrastructure & Cloud
  • People & Culture

Fawad Qureshi, Global Field CTO, Snowflake, on realising possibilities for innovation in this new AI era

Without cloud migration, businesses face the end of innovation. In this new AI era, businesses operating within the closed architectures of legacy systems do not have the flexible, data-driven foundation to engage with these new technologies and ensure a strong pipeline of necessary innovation. And as AI continues to evolve, those not able to keep pace with innovation risk being left behind. 

Cloud migrations are the foundation to modernise and drive business growth over the long term. When organisations migrate to a cloud-based environment, it’s crucial to focus on the tangible business value a migration will deliver, rather than simply shifting from one system to another. Moving a company’s customer-facing applications and all of their data to a cloud-based environment delivers benefits that are increasingly real and measurable.

Migration isn’t just a Plug and Play approach – Which migration fits your needs?

There are two approaches to cloud migration, broadly speaking: horizontal and vertical, each with their own benefits and potential challenges. A vertical approach sees organisations migrating applications one by one: this approach is a good choice if certain systems have to be prioritised, or if the applications being migrated do not have many interdependencies. Vertical migration allows for focused efforts and risk management on individual systems, and requires fewer resources. Horizontal migration moves entire system layers at the same time. This is the best solution when businesses have tight deadlines to retire legacy systems, or if their systems are tightly integrated. Horizontal migrations tend to be faster by allowing for parallel work streams, but they require more technical expertise. 
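
As a rough illustration of that decision, the hypothetical heuristic below scores each system on the two factors highlighted here – interdependencies and deadlines – and suggests an approach; a real assessment would weigh many more variables:

from dataclasses import dataclass

@dataclass
class System:
    name: str
    dependencies: int      # number of tightly coupled neighbouring systems
    deadline_months: int   # time left before the legacy platform must retire

def suggest_approach(s: System) -> str:
    # Heavily interdependent systems or hard deadlines favour moving the whole layer.
    if s.deadline_months <= 6 or s.dependencies >= 5:
        return "horizontal"
    # Otherwise migrate application by application with focused risk management.
    return "vertical"

portfolio = [
    System("data platform", dependencies=12, deadline_months=4),
    System("customer portal", dependencies=2, deadline_months=18),
]
for s in portfolio:
    print(f"{s.name}: {suggest_approach(s)} migration suggested")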

Organisations often adopt a mixture of the two approaches, for example, horizontally migrating important systems such as data platforms, while taking a vertical approach to customer-facing applications. Whatever approach an organisation takes, it’s vital that the migration also includes a culture shift, preparing employees to adapt to new, consumption-based models and the possibilities of the new technology. Migration is also just the start of the journey, unlocking the potential of AI-driven use cases and seamless data collaboration, including new ways to achieve business value. 

Before diving straight in, ensure it’s with a Data-First Mindset

When migrating to the cloud, a data-first approach is essential. For those acting as the catalyst for change, whether that be IT managers or even CIOs, data must be front of mind before planning any successful migration. Understanding how data is used within the organisation, including its structure, governance needs, and how it delivers value and business outcomes, is imperative. This applies doubly when it comes to large, complex systems with many interconnected applications.

Before migrating, businesses must comprehensively assess their current ecosystem. It’s imperative that the end-to-end business product survives the migration, intact. Organisations should maintain internal control over core competencies around data, such as business process knowledge, data governance and change management. These areas include institutional knowledge that external parties may not grasp. Businesses should also maintain direct oversight over compliance requirements and risk management. 

Technical activities such as cloud infrastructure optimisation, performance testing and specialised migration tooling can, by contrast, be handled by external expertise. Code conversion can also benefit from purpose-built tools that use technologies including AI. Technical parts of the migration tend to evolve rapidly and require specialist knowledge, so are ripe for outsourcing. While doing so, those steering the migration need to ensure clear governance around outsourced activities, including regular knowledge transfer sessions.

Different parts of the business all have a role to play: IT and engineering lead on technical implementation, handling the technical side of business requirements, while finance will identify ROI opportunities and manage cloud costs. It helps to create a cross-functional steering committee with representation from every department to ensure that different areas of the business are aligned and ready to address challenges. 

Adaptability and Flexibility are the key to business longevity

Migration is never one-size-fits-all, and business leaders should be prepared to be flexible and adapt. There are multiple kinds of horizontal migration, from a simple ‘lift and shift’ focused on moving systems as they are to a ‘move and improve’ where migration is followed by optimisation to reduce technical debt. They should be ready to adapt at their own pace, choosing data platforms which offer agnostic architecture and the freedom to choose between data models and tools to ensure minimal disruption.

Flexibility is also important in choosing the tools used for migrations. Flexible data platforms will offer the support businesses need to deal with collaboration and governance frameworks. Businesses operating in EMEA, where different countries can have varying policies, should pay close attention to data quality, security and compliance, particularly when it comes to data sovereignty and European data residency.

A Shared Destiny

The shift to the cloud fundamentally changes security. The traditional cloud ‘shared responsibility’ model clearly demarcated duties between the provider and the customer. However, a more advanced approach is emerging: the ‘shared destiny’ model. This model recognises that in the event of a breach, reputational damage affects both parties. This shared risk incentivises the cloud provider to be a more proactive partner, actively helping customers strengthen their security posture rather than simply managing their own side of the demarcation line.

As ‘destinies’ intertwine, provider and customer work together to eliminate vulnerabilities such as weak passwords. Put simply, in a ‘shared responsibility’ model, the cloud provider is only responsible for securing infrastructure, while the customer remains responsible for securing data and apps in the cloud, as well as for configuration. In a ‘shared destiny’ model, the cloud provider plays a more proactive role to ensure that their customers have the best possible security posture.

Taking a ‘shared destiny’ approach allows businesses to be more proactive in securing data, using approaches such as multi-factor authentication, secure programmatic access and more comprehensive cloud monitoring services. Choosing a modern, AI-driven data platform offers the best security foundations here, offering security controls across cloud service providers and the entire data ecosystem. 
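
One concrete way to put ‘secure programmatic access’ into practice is key-pair authentication, which keeps passwords out of code entirely. The sketch below assumes the key-pair flow supported by the snowflake-connector-python driver; the account identifier, service user, warehouse and key path are all placeholders to adapt to your environment:

from cryptography.hazmat.primitives import serialization
import snowflake.connector

# Load the service user's private key (the matching public key is registered server-side).
with open("/secrets/svc_reporting_rsa_key.p8", "rb") as f:  # placeholder path
    private_key = serialization.load_pem_private_key(f.read(), password=None)

key_bytes = private_key.private_bytes(
    encoding=serialization.Encoding.DER,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.NoEncryption(),
)

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # placeholder account identifier
    user="SVC_REPORTING",        # placeholder service user
    private_key=key_bytes,
    warehouse="REPORTING_WH",    # placeholder warehouse
)
print(conn.cursor().execute("select current_user()").fetchone())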

A Pathway to Growth

In today’s world, the bigger risk is standing still. Nothing changes if nothing changes.

If organisations are holding back on innovation due to technological limitation, then the time to migrate is clear. There is no need to face an end to possibilities when the path towards success lies within reach, offering an opportunity to bring businesses up to date with modern requirements and pave the way for the adoption of technologies such as AI.

However, as we’ve seen, it’s not just a case of plug and play. Organisations must ensure a flexible, data-driven approach to migration, while keeping security front of mind via a ‘shared destiny’ approach. To deliver this, the right choice of a modern, flexible data platform will ensure the whole organisation can work together effectively and deliver a path to future innovation and growth. 

Learn more at snowflake.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Vertiv expects powering up for AI, Digital Twins and Adaptive Liquid Cooling to shape future Data Centre Design and Operations

Data Centre innovation is continuing to be shaped by macro forces and technology trends related to AI, according to a report from Vertiv, a global leader in critical digital infrastructure. The Vertiv™ Frontiers report, which draws on expertise from across the organisation, details the technology trends driving current and future innovation, from powering up for AI, to digital twins, to adaptive liquid cooling.

“The data centre industry is continuing to rapidly evolve how it designs, builds, operates and services data centres, in response to the density and speed of deployment demands of AI factories,” said Vertiv chief product and technology officer, Scott Armul. “We see cross-technology forces, including extreme densification, driving transformative trends such as higher voltage DC power architectures and advanced liquid cooling that are important to deliver the gigawatt scaling that is critical for AI innovation. On-site energy generation and digital twin technology are also expected to help to advance the scale and speed of AI adoption.”

The Vertiv Frontiers report builds on and expands Vertiv’s previous annual Data Centre Trends predictions. The report identifies macro forces driving data centre innovation:

  • Extreme densification – accelerated by AI and HPC workloads
  • Gigawatt scaling at speed – data centres are now being deployed rapidly and at unprecedented scale
  • Data centre as a unit of compute – the AI era requires facilities to be built and operated as a single system
  • Silicon diversification – data centre infrastructure must adapt to an increasing range of chips and compute

The report details how these macro forces have in turn shaped five key trends impacting specific areas of the data centre landscape.

1. Powering up for AI

Most current data centres still rely on hybrid AC/DC power distribution from the grid to the IT racks, which includes three to four conversion stages and some inefficiencies. This existing approach is under strain as power densities increase, largely driven by AI workloads. The shift to higher voltage DC architectures enables significant reductions in current, size of conductors, and number of conversion stages while centralising power conversion at the room level. Hybrid AC and DC systems are pervasive, but as full DC standards and equipment mature, higher voltage DC is likely to become more prevalent as rack densities increase. On-site generation, and microgrids, will also drive adoption of higher voltage DC.
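
The physics behind that claim is simple: for a fixed rack power, current falls as voltage rises (I = P / V), which is what allows smaller conductors and fewer losses. A quick illustrative calculation, using a hypothetical 120kW AI rack:

# For a fixed rack power, current scales as I = P / V, so higher distribution
# voltage means smaller conductors and lower resistive (I^2 * R) losses.
rack_power_kw = 120            # hypothetical AI rack
for volts in (48, 400, 800):   # representative DC distribution voltages
    amps = rack_power_kw * 1000 / volts
    print(f"{volts} V DC -> {amps:,.0f} A per rack")
# Output: 48 V -> 2,500 A; 400 V -> 300 A; 800 V -> 150 A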

2. Distributed AI

The billions of dollars invested into AI data centres to support large language models (LLMs) to date have been aimed at supporting widespread adoption of AI tools by consumers and businesses. Vertiv believes AI is becoming increasingly critical to businesses but how, and from where, those inference services are delivered will depend on the specific requirements and conditions of the organisation. While this will impact businesses of all types, highly regulated industries, such as finance, defence, and healthcare, may need to maintain private or hybrid AI environments via on-premise data centres, due to data residency, security, or latency requirements. Flexible, scalable high-density power and liquid cooling systems could enable capacity through new builds or retrofitting of existing facilities.

3. Energy autonomy accelerates

Short-term on-site energy generation capacity has been essential for most standalone data centres for decades, to support resiliency. However, widespread power availability challenges are creating conditions to adopt extended energy autonomy, especially for AI data centres. Investment in on-site power generation, via natural gas turbines and other technologies, does have several intrinsic benefits but is primarily driven by power availability challenges. Technology strategies such as Bring Your Own Power (and Cooling) are likely to be part of ongoing energy autonomy plans.

4. Digital twin-driven design and operations

With increasingly dense AI workloads and more powerful GPUs comes a demand to deploy these complex AI factories with speed. Using AI-based tools, data centres can be mapped and specified virtually, via digital twins, and the IT and critical digital infrastructure can be integrated, often as prefabricated modular designs, and deployed as units of compute, reducing time-to-token by up to 50%. This approach will be important to efficiently achieving the gigawatt-scale buildouts required for future AI advancements.

5. Adaptive, resilient liquid cooling

AI workloads and infrastructure have accelerated the adoption of liquid cooling. But conversely, AI can also be used to further refine and optimise liquid cooling solutions. Liquid cooling has become mission-critical for a growing number of operators but AI could provide ways to further enhance its capabilities. AI, in conjunction with additional monitoring and control systems, has the potential to make liquid cooling systems smarter and even more robust by predicting potential failures and effectively managing fluid and components. This trend should lead to increasing reliability and uptime for high value hardware and associated data/workloads.

Vertiv does business in more than 130 countries, delivering critical digital infrastructure solutions to data centres, communication networks, and commercial and industrial facilities worldwide. The company’s comprehensive portfolio spans power management, thermal management, and IT infrastructure solutions and services – from the cloud to the network edge. This integrated approach enables continuous operations, optimal performance, and scalable growth for customers navigating an increasingly complex digital landscape.

Find out more at Vertiv.com.

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud

Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly, it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Those conditions aren’t accommodated by traditional data hall layouts, airflow assumptions or power provisioning logic. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit (GPU)-heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
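
A minimal sketch of that idea, with entirely hypothetical names, shows how a maintenance window can grant access that simply lapses when the window closes:

from datetime import datetime, timedelta, timezone

grants = {}  # (person, rack) -> expiry time of the temporary access grant

def open_maintenance_window(person, rack, minutes=60):
    grants[(person, rack)] = datetime.now(timezone.utc) + timedelta(minutes=minutes)

def has_access(person, rack):
    expiry = grants.get((person, rack))
    return expiry is not None and datetime.now(timezone.utc) < expiry

open_maintenance_window("field_engineer_01", "rack_A12", minutes=90)
print(has_access("field_engineer_01", "rack_A12"))  # True while the window is open
print(has_access("field_engineer_01", "rack_B03"))  # False: no grant, access denied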

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud


Welcome to the latest issue of Interface magazine!

Click here to read the latest edition!

USDA: A Fresh Perspective on Digital Service

This month’s cover story focuses on the digital transformation journey continuing at the United States Department of Agriculture (USDA). In conversation with Fátima Terry, USDA’s former Digital Service Deputy Director, we revisit the sterling work being carried out and find out how technology is being humanised to deliver value to the American people this organisation serves.

“One of the things we did was partner with multiple USDA teams that focused on customer experience and digital service delivery for their programs,” she explains. “We also partnered with other federal-wide agencies and departments to move forward and evaluate the progress of digital transformation by cross-pollinating success models to everyone connected.”

Ayoba: A Super-App for Africa

Ayoba, part of the MTN telco group, is a super-app platform built in Africa, for Africa. Esat Belhan, Chief Technology & Product Officer, reveals how it is bringing more people to digital so they can be tech-savvy and educated on digital capabilities…

“In order to do that, one thing you could do is give away free data, but that data could be easily wasted on another data-heavy app, like TikTok, in just a couple of hours. So, the real solution is that the valuable and insightful content Ayoba provides should be provided for free, and that we provide instant messaging and short video content, to keep people using our platform for their communication and entertainment needs.”

Kraft Kennedy: Supporting MSPs with People and Processes

Nett Lynch, CISO at Kraft Kennedy, explains how the company’s new division, Legion, solves cyber pain-points for MSPs with a collaborative, business-centred approach.

“A lot of MSPs struggle with client strategy, they’re talking tech instead of business. We’re nerds – we love the tech, we love the features. But we need to admit clients aren’t focused on those things. They don’t necessarily care how or why it works. They just want it to work and align to their business goals.”

And read on to hear from FICO’s CIO on using AI to transform technical operations; learn from KnowBe4 how AI Agents will be a game changer for tackling cybercrime; and discover how data centres are meeting the demands of the AI boom with Vertiv.

Click here to read the latest edition!

  • Data & AI
  • Digital Strategy
  • Infrastructure & Cloud
  • People & Culture

Andy Swift, Cyber Security Assurance Technical Director at Six Degrees, on defending against malware-free attacks by strengthening the security tools organisations already have

According to AV-TEST, the independent IT security institute, every day sees at least 450,000 new malware variants added to its database. In June this year, for example, cybercriminals are thought to have used malware to steal over 16 billion login credentials across various major platforms, in what is believed to be the largest breach of its kind in history. For security teams, this represents a relentless challenge that demands constant attention and consumes significant resources.

Malware-Free Attacks

As if that wasn’t enough, malware-free attacks are increasingly favoured by cybercriminals as a way to circumvent organisational security. Typically using legitimate programs and tools, these stealth attacks are particularly complex to detect. And they are invisible to most automated security protection options that are available to buy.

With no obvious malware signatures to detect, automated defences are often powerless to respond. And without robust security foundations, even advanced detection tools offer limited protection once an attacker gains a foothold. When that happens, the consequences can be significant.

At the heart of the matter are the limitations of many traditional security tools, which are simply not designed to stop what they cannot see. Malware-free attacks do not rely on external payloads or binaries with known malicious signatures. This renders many automated detection systems, including standard antivirus solutions, effectively useless. As a result, the burden falls elsewhere.

For most organisations, that means having the right expertise in place to recognise unusual behaviour, supported by technologies that can identify behavioural anomalies quickly. Endpoint detection and response (EDR) platforms offer some of these capabilities. But even the most advanced solutions rely on proper configuration and human oversight to be effective. In an ideal world, every business would have round-the-clock monitoring in place, but in reality, very few do.
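
As a toy illustration of the kind of behavioural logic such expertise encodes – not a substitute for a properly configured EDR platform, and with purely illustrative process names – a detection rule might flag legitimate admin tools launched from user-facing applications:

# Flag legitimate admin tools spawned by user-facing applications, a pattern
# common in malware-free ("living off the land") intrusions.
SUSPICIOUS_CHILDREN = {"powershell.exe", "certutil.exe", "mshta.exe", "wscript.exe"}
USER_FACING_PARENTS = {"winword.exe", "excel.exe", "outlook.exe", "chrome.exe"}

def flag_process_event(parent: str, child: str, cmdline: str) -> bool:
    if parent.lower() in USER_FACING_PARENTS and child.lower() in SUSPICIOUS_CHILDREN:
        return True
    if "-enc" in cmdline.lower() or "downloadstring" in cmdline.lower():
        return True  # encoded or download-cradle command lines warrant review
    return False

events = [
    ("winword.exe", "powershell.exe", "powershell -enc SQBFAFgA..."),
    ("explorer.exe", "notepad.exe", "notepad.exe notes.txt"),
]
for parent, child, cmd in events:
    verdict = "suspicious" if flag_process_event(parent, child, cmd) else "ok"
    print(f"{parent} -> {child}: {verdict}")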

Challenging Assumptions Around Risk

So, how can organisations fill the gap? When assessing how to protect against malware-free attacks, many organisations begin with the assumption that they will need to buy new tools or licenses. This can form part of a rounded solution. However, leading with this mindset often overlooks a more fundamental and cost-effective question: What can be improved with the tools already in place?

Reviewing existing capabilities should be the first step. For example, most environments already have some level of EDR, behavioural monitoring or identity protection deployed. Yet these are often underutilised or misconfigured. This can result from a lack of understanding around tool capabilities (and limitations), paying for the wrong level of license coverage, and failing to ensure configurations support behavioural analysis rather than just malware scanning. In many cases, even minor adjustments can significantly increase effectiveness without any additional spend.

Cost vs Risk

Organisations should also reconsider how they approach the question of investment. The cost vs risk conversation needs to shift from what they should buy to what they should fix. Even the most expensive detection tools can be rendered ineffective if attackers can exploit basic oversights such as poor configuration, excessive access rights or the absence of multi-factor authentication. In contrast, identifying and addressing these gaps in existing systems is not only more cost-effective but also more impactful in stopping attacks before they gain momentum.

This kind of review process is also an opportunity to identify gaps and prioritise actions that reduce risk without escalating costs. For example, many organisations find that network segmentation, strict privilege controls and enforcing least-access policies can help prevent lateral movement and minimise credential misuse – two of the most common techniques used in malware-free attacks. These capabilities are security fundamentals that often determine whether an attack is stopped early or allowed to spread.

In this context, a best practice approach matters more than ever. Not as a one-off initiative, but as a continuous effort to close the windows of opportunity that attackers rely on. This includes reducing privilege levels, adopting MFA by default, limiting binary access and educating users on social engineering techniques – all cost-effective steps that can limit the opportunity for malware-free attacks to take hold. These are not headline-grabbing technologies, but they remain the strongest defence against attacks that thrive on poor hygiene and overlooked gaps.

So, rather than investing in yet another layer of detection, organisations should focus on strengthening what they already have. This approach not only helps avoid unnecessary expense but also delivers a stronger, more sustainable defence posture in an environment where threat actors continue to be extremely effective.

  • Cybersecurity
  • Cybersecurity in FinTech
  • Infrastructure & Cloud

TechEx Europe – Powering the Future of Enterprise Technology at Amsterdam’s RAI Arena, September 24-25

TechEx Europe unites five leading enterprise technology events — AI & Big Data, Cyber Security, Data Centres, Digital Transformation and IoT — into one powerful experience designed for organisations driving change. Five events, two days, one ticket – register for your pass here.

From scaling infrastructure to unlocking new efficiencies, this is where decision-makers and their teams come to connect, explore real-world use cases, and discover the technologies that will shape their next phase of growth.

AI & Big Data Expo

The AI & Big Data Expo is the premier event showcasing Generative AI, Enterprise AI, Machine Learning, Security, Ethical AI, Deep Learning, Data Ecosystems, and NLP.


Cybersecurity & Cloud Expo

The Cyber Security & Cloud Expo is the premier event showcasing the latest in Application and Cloud Security, Hybrid Cloud, Data Protection, Identity and Access Management, Network and Infrastructure Defence, Risk and Compliance, Threat Intelligence, DevSecOps Integration, and more. Join industry leaders to explore strategies, tools, and innovations shaping the future of secure, connected enterprises.


IoT Tech Expo

IoT Tech Expo is the leading event for IoT, Digital Twins & Enterprise Transformation, IoT Security, IoT Connectivity & Connected Devices, Smart Infrastructures & Automation, Data & Analytics and Edge Platforms.


Digital Transformation

The Digital Transformation Expo is the leading event for Transformation Infrastructure, Hybrid Cloud, The Future of Work, Employee Experience, Automation, and Sustainability.


Data Center Expo

The Data Centre Expo and Conference is the premier event tackling key challenges in data centre innovation. It highlights AI’s Impact, Energy Efficiency, Future-Proofing, Infrastructure & Operations, and Security & Resilience, showcasing advancements shaping the future of data centres.


Book your place at TechEx Europe 2025 now!

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Event Newsroom
  • Events
  • Infrastructure & Cloud


Accenture is helping SSEN Transmission manage hundreds of infrastructure projects vital to achieving the UK’s Net Zero ambition. Effective delivery required addressing fragmented data and disconnected tools that can slow the flow of information between systems. SSEN Transmission sought a partner to help reshape its approach for data-driven execution on capital projects.

Meeting the Digital Challenge with Accenture

SSEN Transmission partnered with Accenture to embrace automation and digitisation in response to increasing project demands, a challenge reflected across the wider Capital Projects sector. Through the adoption of BIM (Building Information Modelling) and the implementation of Integrated Project Management (IPM), which was developed with Oracle and Microsoft, this collaboration laid the groundwork for more connected ways of working and continues to promote transformation across the organisation.

Key Benefits Delivered

Accenture supported SSEN Transmission with IPM and BIM customised to meet specific needs and achieve key goals:

  • Digitise processes for a single unified environment
  • Unify data for a standardised and trusted source of truth
  • Create a scalable platform for delivering capital projects

“With a unified real-time view of project data, SSEN Transmission has improved efficiency and strengthened collaboration across internal teams and with external partners. This allows for more time focused on higher value insight-led work, supporting better outcomes, faster decisions and much more agile delivery.”

Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI

Building for the Future

More than a solutions provider, Accenture helps with strategy and is supporting SSEN Transmission’s continued focus on refining best practice for smooth project delivery. The partnership is helping to evolve ways of working and strengthening the digital foundation for future readiness.

“Our collaboration is built on a strong digital foundation that can scale with SSEN Transmission’s growing needs. By unifying systems, data, and process, we are enabling the faster adoption of new capabilities and supporting the shift towards a fully data-driven capital project delivery.”

Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure

Accenture: A Partner for the Journey

Transformation is a journey that begins with the right foundation across people, data and process. It also requires a digital partner that brings together the best of industry experience, process excellence and technology to:

  • Develop a clear, actionable strategy for digital and data transformation
  • Embed industry best practices to optimise processes and drive continuous improvement
  • Enable smarter, more consistent delivery aligned to a long-term vision, from strategy through to execution

And that’s where Accenture makes its mark, helping clients navigate the journey with confidence.

Learn more about how Accenture is supporting SSEN Transmission on its digitisation journey with Huda As’ad, Managing Director, Capital Projects & Infrastructure, UKI and Nithin Vijay, Managing Director, Industry X – Capital Projects & Infrastructure.

  • Digital Strategy
  • Infrastructure & Cloud
  • Sustainability Technology

Tech Show London is coming to Excel March 12-13. Register for your free ticket now!

Unlock unparalleled value with a single ticket that gets you free access to five industry-leading technology shows. Welcome to Cloud & AI Infrastructure, DevOps Live, Cloud & Cyber Security Expo, Big Data & AI World, and Data Centre World.

Tech Show London has it all. Don’t miss this immersive journey into the latest trends and innovations.

Discover tomorrow’s tech today

Unleash Potential, Embrace the Future. Hear from the greatest tech minds, all in one place.

Dive into a world where cutting-edge ideas shape your tomorrow. Tech Show London is the epicentre of technology innovation in London and beyond, hosting the brightest minds in technology, AI, cyber security, DevOps, and cloud all under one roof.

The Mainstage Theatre is not just a stage; it’s a launchpad for innovative ideas. Witness a stellar lineup featuring world-renowned experts from across the tech stack, influential C-level executives, key government figures, and the vanguards of AI and cybersecurity. All ready to share ideas set to rock the industry.

GLOBAL INSPIRATION, LOCAL IMPACT

Seize the opportunity to be inspired by global visionaries. With speakers from the UK, USA and beyond, you’ll take away transformative concepts and actionable strategies from technology insiders, ensuring your business stays ahead in an ever-evolving technology landscape.

Where the future of technology takes the stage

Secure your competitive edge at Tech Show London, the UK’s award-winning convergence of the industry’s brightest tech minds.

On 12-13 March 2025, gain vital foresight into the disruptive technologies reshaping your market, and position your organisation at the forefront of technology’s next frontier.

If you’re defining your business’s tech roadmap, register for your free ticket to join us at Excel London.

Register for FREE

Register for your Ticket

  • Cybersecurity
  • Data & AI
  • Digital Strategy
  • Event Newsroom
  • Infrastructure & Cloud

Cybersecurity leader Shinesa Cambric on Microsoft’s innovation journey to identify, detect, protect, and respond to emerging threats against identity and access

This month’s cover story highlights a cybersecurity program protecting billions of users.

Welcome to the latest issue of Interface magazine!

Interface showcases leaders at the forefront of innovation with digital technologies transforming myriad industries.

Read the latest issue here!

Microsoft: Innovation in Cybersecurity

Shinesa Cambric is on a mission to drive innovation for cybersecurity at Microsoft. Moreover, by embracing diversity and opening all channels towards collaboration, her team tackles anti-abuse and delivers fraud-defence. Continuous Improvement doesn’t just play into her role, it defines it…

“In the fraud and abuse space, attackers are constantly trying to identify ways to look like a legitimate user,” warns Shinesa. “And this means my team, and our partners, have to continuously adapt. We identify new patterns and behaviours to detect fraudsters. At the same time, we must do it in such a way we don’t impact our truly ‘good’ and legitimate users. Microsoft is a global consumer business and any time you add friction or an unpleasant experience for a consumer, you risk losing them, their business and potentially their trust. My team’s work sits on the very edge of the account sign up and sign in process. We are essentially the first touch within the customer funnel for Microsoft – a multi-billion dollar company.”

ABB: Digital Technologies contributing towards Net Zero

Nigel Greatorex, Global Industry Manager for Carbon Capture and Storage (CCS) at ABB Energy Industries, explains how digital technologies can play a critical role in the transition to a low carbon world. He highlights the role of CCS in enabling global emissions reductions and how challenges can be overcome through digitalisation…

“It is widely recognised decarbonisation is essential to achieving net zero emissions by 2050. Therefore, it’s not surprising that emerging decarbonisation technology is becoming an increasingly important, and rapidly growing market.”

CSI: How can your IT estate improve its sustainability?

Andy Dunn, Chief Revenue Officer at IT solutions specialist CSI, reveals how digital technologies can contribute to ESG obligations: “Sustainability is now seen as a strategic business imperative, so much so that 74% of companies consider Environmental, Social and Governance (ESG) factors to be very important to the value of their company. Additionally, we know almost three in four organisations have set a net zero goal. With an average target date of 2044, 50% of organisations are seeking more energy efficient products and services.”

https://www.youtube.com/watch?v=tsDaZiSO1ho

“Optimising energy use and consolidating servers and storage infrastructure form a strong basis for shaping a more environmentally friendly and efficient IT estate. It no longer needs to be the Achilles Heel of an ESG policy.”

Mia Platform: Sustainable Cloud Computing

Davide Bianchi, Senior Technical Lead at Mia Platform, explores the silver lining of sustainable cloud computing. He reveals how it can help us reduce our digital carbon footprint with collaboration, efficient use of applications, containerisation of apps, microservices and green partnerships.

“We’re already on an important technological path toward ubiquitous cloud computing. Correspondingly, this brings incredible long-term benefits too. These include greater scalability, improved data storage, and quicker application deployment, to name a few.”

Also in this issue, we hear from Doug Laney, Innovation Fellow at West Monroe and author of Infonomics and Data Juice, on how companies can measure, manage and monetise to realise the potential of their data. And Deputy CIO Melvin Brown discusses the people-centric approach to IT supporting America’s civil service at The Office of Personnel Management (OPM).

Enjoy the issue!

Dan Brightmore, Editor

  • Infrastructure & Cloud