Jon Abbott, Technologies Director of Global Strategic Clients at Vertiv, asks how we can build a generation of data centres for the AI age

The promise of artificial intelligence (AI) is enlightenment. The pressure it places on infrastructure is far less elegant.

Across every layer of the data centre stack, AI is exposing structural limits – from cooling thresholds and power capacity to build timelines and failure modes. What many operators are now discovering is that legacy models, even those only a few years old, are struggling to accommodate what AI-scale workloads demand.

This isn’t simply a matter of scale – it is a shift in shape. AI doesn’t distribute evenly; it lands hard, in dense blocks of compute that concentrate energy, heat and physical weight into single systems or racks. Traditional data hall layouts, airflow assumptions and power provisioning logic were never built for those conditions. The once-exceptional densities of 30kW or 40kW per rack are quickly becoming the baseline for graphics processing unit (GPU)-heavy deployments.

The consequences are significant. Facilities must now support greater thermal precision, faster provisioning and closer coordination across design and operations. And they must do so while maintaining resilience, efficiency and security.

Design Under Pressure

The architecture of the modern data centre is being rewritten in response to three intersecting forces. First, there is density – AI accelerators demand compact, high-power configurations that increase structural and thermal load on individual cabinets. Second, there is volatility – AI workloads spike unpredictably, requiring cooling and power systems that can track and respond in real time. Third, there is urgency – AI development cycles move fast, often leaving little room for phased infrastructure expansion.

In this environment, assumptions that once underpinned data centre design begin to erode. Air-only cooling no longer reaches critical components effectively, uninterruptible power supply (UPS) capacity must scale beyond linear load, and procurement lead times no longer match project delivery windows.

To adapt, operators are adopting strategies that prioritise speed, integration and visibility. Modular builds and factory-integrated systems are gaining traction – not for convenience, but for the reliability that controlled environments can offer. In parallel, greater emphasis is being placed on how cooling and power are architected together, rather than as separate functions.

Exploring the Physical Gap

There is a growing disconnect between the digital ambition of AI-led organisations and the physical readiness of their facilities. A rack might be specified to run the latest AI training cluster. The space around it, however, may not support the necessary airflow, load distribution or cable density. Minor mismatches in layout or containment can result in hot spots, inefficiencies or equipment degradation.

Operators are now approaching physical design through a different lens. They are evaluating structural tolerances, rebalancing containment zones, and planning for both current and future cooling scenarios. Liquid cooling, once a niche consideration, is becoming a near-term requirement. In many cases, it is being deployed alongside existing air systems to create hybrid environments that can handle peak loads without overhauling entire facilities.

What this requires is careful sequencing. Introducing liquid means introducing new infrastructure: secondary loops, pump systems, monitoring, maintenance. These elements must be designed with the same rigour as the electrical backbone. They must also be integrated into commissioning and telemetry from day one.
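Integrating loop telemetry from day one can be as simple as defining nominal operating envelopes for each secondary loop and alarming on deviation. The sketch below is illustrative only – the sensor set, threshold values and alarm wording are assumptions, not any vendor's actual specification:

```python
from dataclasses import dataclass

@dataclass
class LoopReading:
    """One telemetry sample from a secondary coolant loop (hypothetical sensor set)."""
    supply_temp_c: float
    return_temp_c: float
    flow_lpm: float  # litres per minute

def loop_alarms(r: LoopReading, max_supply_c=32.0, min_flow_lpm=40.0, max_delta_c=12.0):
    """Return a list of alarm strings; an empty list means the loop is nominal.

    Thresholds are placeholder values for illustration, not design guidance.
    """
    alarms = []
    if r.supply_temp_c > max_supply_c:
        alarms.append("supply temperature high")
    if r.flow_lpm < min_flow_lpm:
        alarms.append("flow rate low: check pumps")
    if r.return_temp_c - r.supply_temp_c > max_delta_c:
        alarms.append("delta-T high: heat removal lagging load")
    return alarms
```

In practice these checks would feed the same commissioning and monitoring stack as the electrical plant, so a pump fault surfaces in the same place as a UPS fault.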

Risk in the Seams

The more complex the system, the more attention must be paid to the seams. AI infrastructure often relies on a patchwork of new and existing technologies – from cooling and power to management software and physical access control. When these systems are not properly aligned, risk accumulates quietly.

Hybrid cooling loops that lack thermal synchronisation can create blind spots. Overlapping monitoring systems may provide fragmented data, hiding early signs of imbalance. Delays in commissioning or last-minute changes in hardware specification can introduce vulnerabilities that remain undetected until something fails.

Avoiding these scenarios requires joined-up design. From early-stage planning through to testing and operation, infrastructure must be treated as a whole. That includes the physical plant, the digital control layer and the operational processes that bind them.

Physical Security Under AI Conditions

As infrastructure becomes more specialised and high-value, the importance of physical security rises. AI racks often contain not only critical data but hardware that is financially and strategically valuable. Facilities are responding with enhanced perimeter control, real-time surveillance, and tighter access segmentation at the rack and room level.

More organisations are adopting role-based access tied to operational state. Maintenance windows, for example, may trigger temporary access privileges that expire after use. Integrated access and monitoring logs allow operators to correlate physical movement with system behaviour, helping to identify unauthorised activity or unexpected patterns.
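The idea of access tied to operational state can be sketched in a few lines. This is a minimal illustration of an expiring maintenance grant, assuming a hypothetical role and zone naming scheme; real deployments would sit behind a badge or identity system:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class MaintenanceGrant:
    """A temporary access privilege tied to a maintenance window."""
    role: str
    zone: str
    expires_at: datetime

    def is_valid(self, now=None) -> bool:
        # The grant is only honoured while the window is open
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

def open_maintenance_window(role: str, zone: str, minutes: int = 60) -> MaintenanceGrant:
    # Privileges expire automatically when the window closes
    expiry = datetime.now(timezone.utc) + timedelta(minutes=minutes)
    return MaintenanceGrant(role, zone, expiry)

grant = open_maintenance_window("cooling-engineer", "rack-row-7", minutes=30)
```

Because the grant carries its own expiry, there is no standing privilege to revoke after the work is done – the "expire after use" behaviour described above falls out of the data model.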

In environments where automation and remote management are becoming standard, physical security must be designed to support low-touch operations, with intelligent systems able to flag anomalies and initiate response workflows without constant human oversight.

Infrastructure as an Adaptive System

The direction of travel is clear. Infrastructure must be able to evolve as quickly as the workloads it supports. This means designing for flexibility and for lifecycle. It means understanding where capacity is needed today, and how that might shift in six months. It means choosing platforms that support interoperability, rather than locking into closed systems.

The goal is not simply to survive the shift to AI-scale compute. It is to build a foundation that can keep up with whatever comes next – whether that is a new training model, a change in energy market conditions, or a new set of regulatory constraints.

Discover more at vertiv.com


Jonathan Brander, COO at Upvest, on best practice for trading platform infrastructure

In the early hours of market turbulence, when retail investors are scrambling to respond, it’s not volatility that fails them: it’s infrastructure. In the past, we’ve repeatedly seen investing technologies buckle under pressure during moments of peak market stress. 

During times of high demand, many platforms struggle to maintain uptime. In recent weeks, as Trump’s tariff announcements saw retail trading volumes surge, some of the world’s biggest trading platforms went dark. These outages are not outliers: they are predictable stress tests. Market volatility correlates strongly with spikes in trading volume. A study by the European Central Bank found that liquidity shocks consistently drive increases in trading activity, especially in frequently traded assets. Platforms should expect, and be designed for, these surges.

Yet time and again, outages occur at precisely the moments when retail investors and advisers need control. In these moments, investors don’t merely lose access, they lose confidence.

Trading Platform Infrastructure

A 2024 poll found that 30% of UK banking customers would consider switching providers following a technology failure. Among 25-34 year olds, this figure jumps to 57%. For trading platforms (and their technology providers), trust is hard-won and easily lost. In a financial market characterised by risk, investment infrastructure resilience is no longer a “nice-to-have”. It is a strategic necessity.

According to McKinsey, global assets under management in private markets grew to $13.1 trillion in 2023. In the UK, over a third (39%) of adults are actively investing and the number is growing, thanks in part to government-led market reforms. As trading volumes increase, retail investors need infrastructure that doesn’t flinch under pressure. So what does this look like in practice?

First, elasticity is essential. Systems must be able to scale to meet demand spikes. When trading activity spiked following Trump’s tariff announcement, Upvest experienced the highest trading volumes in our history. Our platform scaled exactly as it was designed to do, enabling millions of Europeans to seamlessly trade and invest in thousands of instruments with zero downtime. At times of volatility, “stability as a service” emerges as a key competitive differentiator. 
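Elastic scaling ultimately reduces to a provisioning rule: capacity must track demand plus headroom for the next spike. The sketch below is a generic rule of thumb, not Upvest's actual scaling logic, and the headroom and capacity figures are assumed for illustration:

```python
import math

def required_workers(current_rps: float, capacity_per_worker: float,
                     headroom: float = 0.3, min_workers: int = 2) -> int:
    """Horizontal-scaling rule of thumb: provision enough workers to absorb
    the current request rate plus a safety margin for sudden spikes.

    All parameters here are illustrative assumptions.
    """
    target = current_rps * (1 + headroom)
    return max(min_workers, math.ceil(target / capacity_per_worker))

# A 4x volume spike scales capacity proportionally rather than failing:
print(required_workers(1000, 100))   # 13
print(required_workers(4000, 100))   # 52
```

The point of the floor (`min_workers`) is that elasticity cuts both ways: scaling to zero in quiet periods leaves nothing warm when the surge arrives.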

Second, build for failure. The leading question in our conversations with clients is no longer “can you add this feature?”, it’s “can you guarantee uptime under pressure?” Financial institutions need to know that trading can continue in volatile conditions. Infrastructure providers must build with this in mind and leverage modular systems – where trading, settlement, and custody run independently – to reduce the risk that a single point of failure cascades across an entire platform. Decentralised services improve incident isolation and, in a digital-first financial ecosystem, reliable infrastructure that remains operational even when pressure peaks is the foundation of investor empowerment.

Observability is also key. Real-time monitoring allows operations and tech teams to anticipate issues before they become outages. This means constantly tracking latency, error rates, and system health, as well as regularly simulating and stress-testing for high volume scenarios to ensure systems can perform under extreme load. These synthetic tests mimic real-world event spikes and ensure you can deliver under pressure.
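The monitoring described above can be reduced to a sliding window over recent requests, flagging degradation before it becomes an outage. A minimal sketch, with threshold values chosen purely for illustration:

```python
from collections import deque

class HealthMonitor:
    """Tracks recent request latencies and errors over a sliding window.

    Window size and thresholds below are illustrative assumptions.
    """
    def __init__(self, window: int = 100,
                 latency_ms_threshold: float = 250.0,
                 error_rate_threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # (latency_ms, ok) pairs
        self.latency_ms_threshold = latency_ms_threshold
        self.error_rate_threshold = error_rate_threshold

    def record(self, latency_ms: float, ok: bool) -> None:
        self.samples.append((latency_ms, ok))

    def status(self) -> str:
        if not self.samples:
            return "healthy"
        avg_latency = sum(l for l, _ in self.samples) / len(self.samples)
        error_rate = sum(1 for _, ok in self.samples if not ok) / len(self.samples)
        if error_rate > self.error_rate_threshold or avg_latency > self.latency_ms_threshold:
            return "degraded"
        return "healthy"
```

A synthetic stress test then amounts to replaying a recorded volume spike through this monitor and asserting the platform stays out of the "degraded" state.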

Finally, communicate transparently. When issues arise, investors deserve clear metrics on uptime and response windows. Public dashboards and incident post-mortems are no longer optional; they are foundational to trust. At Upvest, for example, our API status page is always available online so clients can see whether we’re experiencing any issues.

Future Resilience

These steps are no longer merely operational best practice: they are a necessity. The investment industry must move beyond treating volatility as an edge case and start building resilience into platforms as a priority. Retail investors don’t judge their investment providers during periods of calm; they judge them in crisis. When the market wobbles, infrastructure is the differentiator. That is when confidence is earned and financial empowerment begins.


UtterBerry, a tech giant whose innovations have been used on some of the largest infrastructure projects in the world, is bringing some of its operations to Leeds, Yorkshire, creating 800 jobs – as reported by the Yorkshire Post.

The business primarily produces sensors that monitor the movement of infrastructure – for example, bridges and tunnels – in real time. This allows those working on the infrastructure to be warned in advance if anything is wrong, preventing potential accidents.

The new Leeds hub will also design and manufacture contactless COVID-19 symptom scanners. UtterBerry is aiming to roll these out across the globe.

Heba Bevan, founder and CEO of UtterBerry, is keen to help those who lost their jobs during the pandemic find meaningful work again, and to attract more women into a typically male-dominated industry.

“What attracted me to Leeds was I knew there was a huge amount of talent around Yorkshire because you have got amazing universities,” she said.

“There is a huge pool of undergraduate and graduate talent.

“Engineers want to do good and provide sustainable developments. The pandemic showed us just how much we are lacking in manufacturing.”

Chancellor of the Exchequer, Rishi Sunak, said that the investment was “fantastic news for Leeds”.