Submer https://submer.com/ Datacenters That Make Sense Thu, 19 Mar 2026 08:32:11 +0000 en-US A game-changing inflection point – introducing Rapid Edge AI Infrastructure https://submer.com/blog/rapid-edge-ai-infrastructure-with-zededa/ Wed, 18 Mar 2026 09:41:44 +0000 https://submer.com/?p=27295 Submer and ZEDEDA introduce rapid edge AI infrastructure that enables real-time inference anywhere. Combining modular liquid-cooled hardware with intelligent orchestration software, organizations can deploy high-density AI infrastructure in days, even in remote and extreme environments.

The post A game-changing inflection point – introducing Rapid Edge AI Infrastructure appeared first on Submer.

The announcement of a strategic partnership between Submer and ZEDEDA marks an inflection point. The news will be game-changing for any business, organization or government that generates high volumes of data but is unable to draw real-time inference from it because the datacenter is too far away.

The inability to process AI at the edge will soon be a thing of the past, as this strategic partnership sees the creation of a new joint solution architecture combining Submer hardware and infrastructure with the ZEDEDA Edge Intelligence Platform. The partnership will enable rapid edge AI infrastructure anywhere, at any scale, operating where traditional datacenters cannot – regardless of environment, and even in extreme temperatures of up to +45°C/113°F.

Not only can the edge AI deployments be located anywhere they are needed, but the Submer-ZEDEDA collaboration is charting new territory: integrated edge AI appliances able to run real-time inference within days, not months. That speed from deployment to operational intelligence is a whole new ball game.

Submer & ZEDEDA: rapid field-deployable Edge AI

The first wave of AI was centralized, with all data returning to hyperscale clouds for processing. But the world’s most critical operations generate enormous volumes of data far from any datacenter: it comes from factory floors, offshore platforms, telecommunications networks and remote energy sites, to name a few examples.

There is a great need to make intelligence possible locally, in the physical world, but this is a huge infrastructure challenge. You can’t simply shrink a datacenter and ship it to a mine site. You need purpose-built infrastructure that can handle the thermal loads of GPU-dense compute in environments that would normally be considered inhospitable.

Hardware alone isn’t enough, and that’s why the Submer-ZEDEDA partnership exists. Submer’s liquid-cooled modular systems make it physically possible to run high-density AI inference anywhere in the world, while ZEDEDA brings software-defined resilience and intelligent orchestration. This is critical because, in production, high availability has traditionally meant expensive hardware redundancy at every site. ZEDEDA’s orchestration layer shifts that resilience into software, detecting node failures and redistributing workloads automatically across the cluster.

ZEDEDA’s role in the partnership dramatically reduces the cost and complexity of going from one pilot site to fifty production sites. It’s the difference between artisanal deployment and industrial-scale operations.
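The failover behaviour described above can be sketched in a few lines. This is an illustrative toy model, not ZEDEDA’s actual API (the `EdgeCluster` class and its method names are hypothetical): workloads on a failed node are drained and rescheduled onto the least-loaded healthy nodes, so resilience lives in the orchestration software rather than in standby hardware at each site.

```python
class EdgeCluster:
    """Toy model of software-defined resilience: workloads on a failed
    node are rescheduled onto the least-loaded healthy nodes, instead
    of relying on standby hardware at every site. Hypothetical sketch."""

    def __init__(self, nodes):
        # node name -> list of workload ids currently placed on it
        self.placement = {name: [] for name in nodes}

    def schedule(self, workload):
        # place the workload on the least-loaded healthy node
        node = min(self.placement, key=lambda n: len(self.placement[n]))
        self.placement[node].append(workload)
        return node

    def mark_failed(self, node):
        # drain the failed node and redistribute its workloads
        orphaned = self.placement.pop(node)
        for w in orphaned:
            self.schedule(w)
        return orphaned


cluster = EdgeCluster(["pod-a", "pod-b", "pod-c"])
for i in range(6):
    cluster.schedule(f"inference-{i}")

# "pod-a" goes down; its workloads are rescheduled automatically
cluster.mark_failed("pod-a")
assert sum(len(v) for v in cluster.placement.values()) == 6
```

The point of the sketch is the economics: every site runs only the nodes it needs, and the cluster absorbs a failure by rebalancing rather than by keeping idle redundant hardware warm.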

Scale from edge to megawatt

The scalability of edge AI infrastructure can be a hurdle, but Submer’s modular, pre-validated infrastructure is designed so that an organization can deploy high-density GPU inference at a new site without redesigning the infrastructure each time. It’s repeatable, it’s tested and it scales.

The modular form factors are:

  • Pods for compact edge – up to 8 GPUs per server, for on-premise industrial and 5G telecom sites. Delivers real-time vision, predictive maintenance and local inference.
  • Packs – a ruggedized micro-DC with up to 168 GPUs, for energy, mining, ports and manufacturing environments. Delivers industrial automation, agentic AI and multi-model inference.
  • Containers for megawatt scale – up to 800 GPUs in 10-, 20- or 40-foot configurations, for sovereign AI and GPUaaS operators. Delivers large-scale inference, sovereign AI and GPU-as-a-Service.

Under the hood

Submer and ZEDEDA are leaders in their respective fields, bringing their expertise together to create a simplified architecture that is a step change in edge AI infrastructure.

Submer contributes liquid-cooled, modular infrastructure – power and thermal systems including UPS, PDUs, chillers and power modules – as well as hardware integration, with pre-validated server configurations and improved GPU utilization.

ZEDEDA’s Edge Intelligence Platform provides orchestration to deploy and manage AI applications at scale, with automatic workload redistribution; software-defined resilience, delivering orchestration-level failover without hardware redundancy and lowering total cost of ownership; and zero-trust security for secure provisioning, attestation and encryption. All ZEDEDA intelligence appliances are pre-integrated, validated and ready to deploy, allowing edge nodes to be rapidly distributed anywhere with – crucially – centralized orchestration across them all.

The combined advantages are impressive:

  • 100kW+ per rack density thanks to Submer’s liquid cooling technology
  • Zero water consumption as Submer’s cooling eliminates water use entirely
  • 40% lower CO2 compared to traditional air-cooled facilities
  • <1.03 PUE certified power efficiency from liquid cooling
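For context on that last figure: PUE (Power Usage Effectiveness) is total facility power divided by the power consumed by the IT equipment itself, so a PUE of 1.03 means only about 3% of a facility’s power goes to overhead such as cooling and power distribution. A minimal sketch with illustrative numbers (not measured Submer data):

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """PUE = total facility power / IT equipment power.
    1.0 is the theoretical ideal: every watt reaches the IT gear."""
    return total_facility_kw / it_kw

# Illustrative numbers only: a 100 kW IT load with 3 kW of
# cooling and power-distribution overhead.
print(round(pue(103.0, 100.0), 2))  # 1.03

# The same IT load in an air-cooled facility at a PUE of 1.5
# would draw 150 kW in total, a 47 kW difference.
print(100.0 * 1.5 - 103.0)  # 47.0
```

The 1.5 comparison figure is an assumption for illustration; industry surveys typically place average air-cooled facilities in that region.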

A new opportunity

Intelligence at the edge can unlock massive data-driven opportunities across industries, organizations and governments, but it is only as good as the infrastructure that powers it. Without purpose-built, deployable and sustainable compute infrastructure, none of it reaches the real world. We are proud to be working with ZEDEDA to help bring these opportunities to reality.

ZEDEDA and Submer partner to deliver rapid field-deployable integrated Edge AI infrastructure anywhere https://submer.com/blog/press/partnering-with-zededa-to-deliver-rapid-field-deployable-integrated-edge-ai-infrastructure-anywhere/ Tue, 17 Mar 2026 12:23:02 +0000 https://submer.com/?p=27285 ZEDEDA and Submer launch modular edge AI infrastructure combining liquid cooling and edge intelligence to deploy high-density GPU inference anywhere.

The post ZEDEDA and Submer partner to deliver rapid field-deployable integrated Edge AI infrastructure anywhere appeared first on Submer.

Barcelona, Spain – March 17, 2026 – ZEDEDA, the leader in edge intelligence, and Submer, the market-leading end-to-end AI infrastructure company, today announced a strategic partnership to deliver rapidly manufacturable, modular, liquid-cooled edge AI infrastructure for high-density GPU inference in locations where traditional data centers are unavailable or impractical.

The joint solution combines Submer’s full-stack AI infrastructure platform – spanning design, liquid-cooled compute infrastructure, and deployment, and supporting ultra-high-density racks exceeding 100kW – with ZEDEDA’s edge intelligence software platform, enabling customers to create, secure, and operate edge AI anywhere in the world, and at any scale.

As AI workloads increasingly move from centralized cloud infrastructure to industrial and operational environments, organizations require high-density compute infrastructure that can be rapidly deployed outside traditional datacenter facilities. Enterprises, service providers, and nations can now deploy fully integrated and validated high-density GPU inference infrastructure anywhere intelligence is needed — on factory floors, at energy sites, across telco aggregation points, and in sovereign environments — without the constraints, cost, or lead times of traditional AI datacenters.

Said Ouissal, CEO and founder of ZEDEDA, said:

“As intelligence moves from the cloud into the physical world, the ability to run AI anywhere – in a remote factory, an offshore platform, or telecommunications networks – is a fundamental requirement. The world’s most critical operations generate enormous volumes of data far from any data center, and until now, the infrastructure to act on that data intelligently simply couldn’t follow. Our collaboration with Submer makes that possible now. ZEDEDA’s Edge Intelligence Platform ensures high-performance AI workloads at the edge are managed, secure, and scalable, and Submer’s liquid cooling technology enables the high-density compute those workloads demand, even in the harshest global environments. Together, we are unlocking AI for the industries that need it most.”

The companies plan to offer three modular form factors initially:

  • Pods: Compact edge deployments supporting up to 8 GPUs or edge AI inference cards per server for on-premise industrial and 5G telecom sites.
  • Packs: Ruggedized micro-data center configurations supporting up to 168 GPUs for energy, mining, ports and manufacturing environments.
  • Containers: Megawatt-scale, liquid-cooled solutions supporting up to 800 GPUs in 10-, 20-, or 40-foot configurations for sovereign AI and GPU-as-a-service operators, as well as locations where cloud or network access is impractical.

The modular form factors are designed to support a range of AI workloads at the edge, including real-time computer vision, predictive maintenance, industrial automation and emerging agentic AI applications that require local inference and decision-making.

Submer will provide modular containerized infrastructure with immersion and direct-to-chip cooling designed for high-density GPU deployments. ZEDEDA’s Edge Intelligence Platform will provide complete edge AI lifecycle orchestration, enabling customers to create, secure, and operate edge AI at scale. The solutions will offer a selection of pre-validated hardware and GPU partners, along with the option for customers to bring their own hardware systems.

A core architectural principle of the joint solution is software-defined resilience. Instead of relying solely on hardware redundancy, ZEDEDA’s infrastructure orchestration layer detects node failures and redistributes workloads at the cluster level to maintain service targets. This approach simplifies the system architecture, improves GPU utilization and lowers the total cost of ownership.  Submer’s liquid cooling technology significantly reduces cooling energy requirements compared to traditional air-cooled infrastructure while also eliminating water consumption and supporting more sustainable AI infrastructure deployments. This allows for deployment anywhere in the world, regardless of the environment.

Patrick Smets, CEO of Submer commented:

“AI is rapidly moving from centralized cloud environments into real-world operations, from industrial sites to telecom networks and remote energy infrastructure. Delivering that intelligence requires purpose-built AI infrastructure that operates efficiently in environments where traditional data centers simply cannot exist. By combining Submer’s liquid-cooled high-density AI infrastructure with ZEDEDA’s edge intelligence platform, we’re enabling organizations to deploy scalable, resilient AI infrastructure anywhere it is needed.”

The companies are engaging initial industrial and telecommunications customers and expect pilot deployments later this year.

For more information, get in touch with us here.

Submer appoints former BSNL Chairman Anupam Shrivastava as Principal Advisor on Government & Green AI Initiatives in India https://submer.com/blog/press/former-bsnl-chairman-anupam-shrivastava-as-principal-advisor-on-government-green-ai-initiatives-in-india/ Thu, 12 Mar 2026 09:02:54 +0000 https://submer.com/?p=27274 Submer has appointed former BSNL Chairman Anupam Shrivastava as Principal Advisor on Government & Green AI Initiatives in India, advising on the deployment of sustainable AI infrastructure and advanced liquid-cooling solutions across the Indian datacenter market.

The post Submer appoints former BSNL Chairman Anupam Shrivastava as Principal Advisor on Government & Green AI Initiatives in India appeared first on Submer.

Barcelona, Spain – March 12, 2026 – Submer, a market leader in AI datacenter infrastructure, is announcing the appointment of Anupam Shrivastava as a Principal Advisor for Government & Green AI Initiatives in India. His role will be to advise on the deployment of sustainable AI infrastructure and advanced liquid-cooling solutions for the Indian datacenter market, with a special focus on Madhya Pradesh, Maharashtra and Andhra Pradesh, alongside Central Government initiatives in New Delhi.

As demand for AI infrastructure accelerates globally, countries are increasingly focused on deploying sovereign AI capacity that is both high-performance and energy-efficient. Submer delivers the full-stack of AI datacenter infrastructure, from advanced liquid cooling and high-density compute architecture to design, deployment and operational expertise – enabling organizations to scale AI workloads sustainably. In India, where rapid digital expansion and government-led initiatives are driving new datacenter investment, sustainable infrastructure will be critical to supporting the country’s growing AI ecosystem.

As a government relations, policy and regulatory compliance specialist, Anupam will facilitate high-level dialogues with government bodies to align liquid cooling technology with India’s ‘Green IT’ and energy efficiency mandates. He will also steer business development strategies to integrate eco-friendly, high-density computing solutions within national digital infrastructure projects.

Dev Tyagi, President of UKI, India and Asia at Submer said:

“We are honoured to have Anupam take on the role of Principal Advisor for Government and Green AI Initiatives in India. His experience and deep expertise in engaging with government stakeholders and business leaders make him invaluable when it comes to embracing strategic insights that support Submer’s continued growth across India.”

Before joining Submer in an advisory role, Mr Shrivastava was Chairman and Managing Director of Bharat Sanchar Nigam Limited (BSNL) from 2015 to 2019. During his tenure, he directed national-scale projects exceeding ₹100,000 crore (approx. USD 14 billion), including BharatNet; the Network for Spectrum (NFS) for the Ministry of Defence; and the Left Wing Extremism (LWE) connectivity project for internal security. In 2017, he received Prime Minister Shri Narendra Modi Ji’s public commendation, during his Independence Day speech, for BSNL’s turnaround after the company returned to operational profitability in three consecutive fiscal years (2015–2017) despite intense market disruption.

Anupam commented:

“India is accelerating toward a massive AI-driven digital future and the foundation of this growth must be sustainable datacenter infrastructure. I am thrilled to join Submer at this pivotal moment. Submer’s zero-water consumption and energy-efficient liquid cooling technology is a critical enabler for India’s Sovereign AI infrastructure. I look forward to working closely with government and enterprise leaders to build a truly green, high-density digital ecosystem for the nation.”

With Mr Shrivastava’s appointment, Submer strengthens its engagement with public-sector stakeholders and reinforces its commitment to building the infrastructure foundation required for India’s next generation of AI-driven innovation.

For more information, get in touch with us here.

AI on the edge https://submer.com/blog/ai-on-the-edge/ Fri, 06 Mar 2026 09:20:52 +0000 https://submer.com/?p=27258 Edge AI infrastructure is reshaping how compute is deployed. Core-to-edge architectures enable low-latency applications, new telecom revenue models, and the foundations for sovereign AI.

The post AI on the edge appeared first on Submer.

A few weeks ago, we highlighted several key datacenter trends for 2026. With Mobile World Congress 2026 just finished – where edge compute took center stage – it’s worth taking a closer look at one of those trends: the era of edge AI infrastructure.

Processing data closer to the source dramatically reduces latency, creating faster and more seamless digital experiences. While that advantage is valuable in almost any digital environment, for some applications, low latency is not simply beneficial; it is essential.

Why edge AI infrastructure matters

Cloud gaming is a clear example. Instead of running a game locally, players stream the experience from a remote GPU server. Any noticeable delay between player input and game response can disrupt gameplay, making latency one of the most critical performance factors.

Running those GPU workloads on servers located as close as possible to the user dramatically improves performance. With the recent acquisition of Radian Arc, Submer is already seeing the impact of edge-based GPU infrastructure embedded within telecom networks, enabling cloud gaming services for telcos and their customers.

Edge AI across industries

Distributed edge AI infrastructure allows organizations to process and analyze data closer to where it is generated, enabling faster insights and responses. Edge AI is becoming increasingly important across industries. There are myriad use cases.

• Automotive systems using AI for real-time navigation and driver monitoring
• Industrial IoT enabling predictive maintenance and production quality control
• Healthcare wearables providing immediate analysis of patient data
• Smart cities using connected sensors to manage traffic and security
• Retail and agriculture applications driven by real-time data from sensors and devices

Core-to-edge AI architecture

When it comes to AI infrastructure, the most effective model is emerging as a core-to-edge architecture, with large cloud datacenters supported by networks of edge compute nodes.

In this model, large-scale AI datacenters provide centralized compute power, while distributed edge nodes deliver real-time inference and localized processing. Those edge nodes give telecom companies a huge opportunity to expand their offering, moving beyond connecting users to cloud compute and instead providing the compute itself.

A new opportunity for telecom operators

Telcos can implement edge compute nodes throughout their networks, delivering localised GPU-as-a-Service solutions that provide low-latency data processing to local users. That GPU bandwidth can be utilised and monetised as required, whether for cloud gaming subscriptions or AI inferencing workloads. But that localised GPU compute also opens the door to a very important opportunity – AI sovereignty.

Countries and territories all over the globe are beginning to worry about their reliance on foreign entities to provide the AI and cloud infrastructure that their citizens need. Ensuring that the full AI stack that a country or territory relies on is wholly owned and operated within that territory is key to AI resilience.

But that’s only half of the issue. Data regulations like the EU’s GDPR insist that personal data is stored and processed locally, with tight controls on access. However, legislation such as the US Cloud Act empowers US law enforcement to compel US cloud companies to provide data – including personal data – regardless of where that data is stored and processed. The result could be a legal conflict between territorial regulation and legislation.

EU countries that build out full-stack AI solutions, however, can ensure AI resilience, while also avoiding any conflicts of data regulation that could arise from engaging with foreign cloud providers.

Telecom companies can deploy edge compute nodes at scale, providing a relatively simple solution to the sovereign AI challenge, while also laying the foundation for valuable new revenue models.

Submer & inferX: enabling core-to-edge AI infrastructure

Submer and its AI cloud and edge company, inferX, are helping enable the transition toward distributed AI environments.

Submer delivers full-stack AI functionality to telcos, providing new customer solutions and revenue streams, as well as a robust sovereign AI roadmap that gives nations the resilience and regulatory protection they require. We’ve built an ecosystem designed to deliver the core-to-edge infrastructure to power full-stack sovereign AI compute solutions, from large-scale high-density datacenters at the core to low-latency AI factories at the edge.

Our design and build capabilities enable us to deploy large-scale AI datacenters at speed, while our NVIDIA Cloud Partner, inferX, is positioned to leverage that infrastructure to deliver AI-as-a-Service solutions at scale. And our recent acquisition of Radian Arc provides an established footprint across the telco landscape that’s already driving strong revenue through cloud gaming.

The age of AI is accelerating, and the need to own and control that AI infrastructure has never been more important. At Submer, we understand the need for sovereign AI solutions at both national and enterprise levels, and we can deliver the core-to-edge infrastructure to power it.

Submer strengthens UK presence with new partners to accelerate liquid cooling adoption https://submer.com/blog/press/strengthening-uk-presence-with-new-partners-to-accelerate-liquid-cooling-adoption/ Wed, 04 Mar 2026 11:29:44 +0000 https://submer.com/?p=27239 Submer has announced new UK partnerships with Hammer Distribution and Boston Ltd. to expand access to liquid cooling and full-stack AI datacenter infrastructure across the UK and Europe.

The post Submer strengthens UK presence with new partners to accelerate liquid cooling adoption appeared first on Submer.

Barcelona, Spain – March 4, 2026 – Submer, the market-leading end-to-end AI infrastructure company, has today announced new UK partnerships as part of a broader expansion strategy, aligned with the Government’s focus on sovereign, sustainable digital infrastructure.

Submer enables organizations to scale AI growth by delivering full-stack AI infrastructure services, spanning liquid cooling solutions, monitoring software, datacentre design and build, and GPU cloud services. This integrated approach provides UK organisations with a single accountable partner for AI growth. To strengthen local availability, Submer has signed agreements with Hammer Distribution, an enterprise IT distributor, and Boston Ltd., a provider of high-performance computing and AI infrastructure solutions. The collaborations expand UK and European access to Submer’s liquid cooling portfolio alongside associated design, deployment and lifecycle support services.

Manpreet Bath, Vice President for Commercial Engagement at Submer, said:

“The UK is one of Europe’s most strategically important AI infrastructure markets. Strengthening our local partnerships ensures organisations can deploy energy-efficient, high-density environments while maintaining control over where their data is processed and governed.”

With datacentres now classified as Critical National Infrastructure and capacity forecasted to nearly double by 2028, demand for high-density, energy-efficient compute infrastructure is accelerating across sectors including financial services, healthcare and higher education. The UK datacenter market, Europe’s largest, has attracted more than £40bn in investment since 2023, according to Oxford Economics.

Adam Blackwell, Director of AI, Server, and Advanced Technology at Hammer, said:

“As AI workloads accelerate across Europe, the channel faces growing demand for high-density, energy-efficient infrastructure. Traditional air cooling is no longer sufficient, making liquid cooling increasingly relevant beyond hyperscale. Hammer takes a consultative, ecosystem-led approach to AI infrastructure. Through our partnership with Submer, now part of Hammer’s AI WORKS program launching in early April, we strengthen local enablement and deployment capability, empowering European partners to deliver scalable, sustainable AI-ready solutions.”

Manoj Nayee, Managing Director at Boston Limited, commented:

“AI and HPC workloads are driving a transformation in infrastructure design throughout the UK. With increasing power densities and heightened performance demands, conventional cooling methods can no longer keep pace. Liquid cooling is swiftly becoming a necessity for providing efficient, scalable, and sustainable high-performance environments. By leveraging Boston’s extensive experience in AI and HPC system integration alongside Submer’s AI datacenter expertise and solutions, we empower organisations to implement sovereign, AI-ready infrastructure that satisfies the UK’s growing needs while minimising energy consumption.”

The expansion supports the UK’s data sovereignty objectives, ensuring strategic and sensitive data generated in the UK can be stored, processed and governed domestically. Datacenter operators can also reduce energy consumption, lower operational costs and meet sustainability targets through solutions designed for zero direct water consumption.

Enabling the UK’s sovereign AI infrastructure growth

As AI adoption accelerates, infrastructure decisions are increasingly shaped by energy constraints, regulatory requirements and sovereignty considerations. Submer’s UK expansion reinforces its commitment to supporting public and private sector organisations building scalable AI capacity within the UK. The partnerships form part of Submer’s wider UK investment, including expansion of the local team, with technical and commercial positions currently open to support customer engagement and in-market expertise.

For more information, get in touch with us here.

Radian Arc, VNPT and Blacknut launch GPU infrastructure in Vietnam, enabling cloud gaming and AI services https://submer.com/blog/press/radian-arc-vnpt-and-blacknut-launch-gpu-infrastructure-in-vietnam-enabling-cloud-gaming-and-ai-services/ Wed, 04 Mar 2026 10:14:41 +0000 https://submer.com/?p=27234 Radian Arc, VNPT and Blacknut launch GPU infrastructure in Vietnam, bringing cloud gaming to VNPT’s users today and laying the foundation for sovereign AI services.

The post Radian Arc, VNPT and Blacknut launch GPU infrastructure in Vietnam, enabling cloud gaming and AI services appeared first on Submer.

Barcelona, Spain – March 4, 2026 – Radian Arc, part of inferX, Submer’s AI cloud and GPU infrastructure platform, has partnered with VNPT and COMIT to launch cloud gaming powered by Radian Arc’s GPU Edge Platform and Blacknut’s global cloud gaming service.

This deployment expands Radian Arc and Blacknut’s global cloud gaming partnership into the Vietnam market with VNPT and lays the foundation for future AI-native services and sovereign infrastructure together with COMIT. The deployment represents a commercial proof point of Radian Arc’s carrier-embedded GPU model, combining monetizable consumer services today with scalable AI infrastructure that can support sovereign AI workloads.

David Cook, CEO of Radian Arc, said:

“With VNPT’s market reach and Blacknut’s premium gaming catalog, we’re bringing the next generation of interactive entertainment and computing directly to Vietnam’s 5G users. Cloud gaming is often the first large-scale consumer application of edge GPU infrastructure, and it creates the foundation for broader AI and enterprise services built on the same sovereign platform.”

Olivier Avaro, CEO of Blacknut, said:

“At Blacknut, we are always thrilled to work hand in hand with telecom operators to bring cloud gaming to new audiences. Expanding into Vietnam with VNPT is an important step for us, and a real pleasure to collaborate with such a strong and forward-looking partner. This launch reflects our shared ambition to make high-quality gaming accessible to everyone, across devices and without barriers.”

Through this launch, VNPT users can instantly stream and play over 1000 premium PC and console-quality games from Blacknut’s curated catalog, directly on mobile devices, smart TVs, and PCs without the need for expensive hardware or downloads. The service is bundled with VNPT’s consumer data plans, offering customers seamless access to cloud gaming as part of their existing connectivity packages.

The partnership also lays the groundwork for Radian Arc’s AI Points of Presence (AI PoPs) across Vietnam as part of its AI Sovereign Infrastructure model. These localized GPU deployments are designed to provide secure, in-country AI inferencing capacity for governments, enterprises, and developers, ensuring compliance with national data residency requirements while delivering low-latency performance.

Mr. Nguyen Duc Hung, Director, VNPT, said:

“This collaboration represents a significant milestone for VNPT in expanding Vietnam’s digital media ecosystem. By combining VNPT’s strong digital media platform with Radian Arc’s GPU-powered edge and Blacknut’s extensive gaming catalog, we are empowering Vietnam’s consumers with cutting-edge cloud gaming experiences while paving the way for future innovation in AI and digital entertainment.”

Phạm Ngọc Tú, Director at COMIT, commented:

“As we evolve into a new-generation digital enterprise, COMIT is proud to be the integration partner bringing cloud gaming services powered by the world’s most advanced GPU infrastructure platform to Vietnam. This partnership exemplifies our new vision: delivering next-generation entertainment experiences to users today, while building a solid technological foundation for sovereign AI infrastructure and future cloud services.”

This launch forms part of Radian Arc’s broader Southeast Asia expansion strategy, following GPU edge deployments in India, Malaysia, Singapore, Thailand and Indonesia. As part of inferX and Submer, Radian Arc is building a unified core-to-edge AI platform across the region – combining carrier-edge GPU compute, AI cloud services, and infrastructure designed for sovereignty, performance, and energy efficiency.

For more information, get in touch with us here.

Radian Arc, a Submer company, and Datasamudra sign Memorandum of Understanding to deploy GPU-as-a-Service and AI solutions in Karnataka https://submer.com/blog/press/radian-arc-and-datasamudra-sign-memorandum-of-understanding-to-deploy-gpu-as-a-service-and-ai-solutions-in-karnataka/ Wed, 18 Feb 2026 04:30:00 +0000 https://submer.com/?p=27182 Radian Arc Limited, part of InferX, Submer’s AI cloud and GPU infrastructure platform, has signed a Memorandum of Understanding (MOU) with TeleIndia Datacentre Private Limited (Datasamudra), a leading provider of colocation, GPU, and cloud services in India.

The post Radian Arc, a Submer company, and Datasamudra sign Memorandum of Understanding to deploy GPU-as-a-Service and AI solutions in Karnataka appeared first on Submer.

Barcelona, Spain – February 18, 2026 – Radian Arc Limited, part of InferX, Submer’s AI cloud and GPU infrastructure platform, has signed a Memorandum of Understanding (MOU) with TeleIndia Datacentre Private Limited (Datasamudra), a leading provider of colocation, GPU, and cloud services in India. The collaboration aims to establish Radian Arc’s GPU-as-a-Service (GPUaaS) platform within Datasamudra’s infrastructure in Karnataka, enabling enterprises and government entities to access advanced AI-driven services and high-performance computing locally.

Through this MoU, Radian Arc will establish a GPU Point-of-Presence (POP) inside Datasamudra’s facility. The partnership will also focus on supporting enterprise and state government workloads, including AI and data analytics, sovereign compute, and public-sector digital services, while ensuring compliance with India’s data residency and sovereignty requirements.

Key highlights of the partnership:

  • Infrastructure Deployment: Radian Arc will deploy GPU-based edge infrastructure within Datasamudra’s facility at KIADB IT Park, Devanahalli, North Bangalore.
  • Government & Enterprise Engagement: Datasamudra will lead local engagement with enterprises and government departments to identify eligible workloads.
  • Joint Collaboration: Both companies will work together to position Datasamudra as the local connectivity, hosting, and government engagement partner.
  • Regulatory Alignment: The collaboration will ensure compliance with India’s data sovereignty and residency regulations.

David Cook, CEO at Radian Arc, commented:

“Our partnership with Datasamudra marks an important step in expanding our GPU platform in India. By combining Radian Arc’s technical expertise and Datasamudra’s local infrastructure and government engagement, we can deliver secure, scalable and transformative AI solutions to enterprise and public sector organisations across Karnataka.”

Mahanthesh KA, MD & CEO, said:

“We are excited to partner with Radian Arc to bring cutting-edge GPU and AI capabilities into our datacenters. This collaboration strengthens our ability to support enterprises and government departments with advanced technology solutions that drive innovation and digital transformation in the region.”

For more information, get in touch with us here.

The post Radian Arc, a Submer company, and Datasamudra sign Memorandum of Understanding to deploy GPU-as-a-Service and AI solutions in Karnataka appeared first on Submer.

]]>
Radian Arc, a Submer company, and GTPL Broadband sign Letter of Intent to deploy GPU-as-a-Service and AI solutions across India https://submer.com/blog/press/radian-arc-and-gtpl-broadband-sign-letter-of-intent-to-deploy-gpu-as-a-service-and-ai-solutions-across-india/ Tue, 17 Feb 2026 15:07:49 +0000 https://submer.com/?p=27178 Radian Arc Limited, now part of InferX, Submer’s AI cloud and GPU infrastructure platform, has signed a binding Letter of Intent (LOI) with GTPL Broadband Pvt. Ltd., a leading provider of telecommunication and cloud services in India.

The post Radian Arc, a Submer company, and GTPL Broadband sign Letter of Intent to deploy GPU-as-a-Service and AI solutions across India appeared first on Submer.

]]>
Barcelona, Spain – February 17, 2026 – Radian Arc Limited, now part of InferX, Submer’s AI cloud and GPU infrastructure platform, has signed a binding Letter of Intent (LOI) with GTPL Broadband Pvt. Ltd., a leading provider of telecommunication and cloud services in India. The partnership will see Radian Arc deploy its GPU-as-a-Service (GPUaaS) platform, along with Platform-as-a-Service (PaaS) and AI capabilities, within GTPL Broadband’s network, enabling businesses and consumers to access high-performance computing and AI-driven services locally.

Under the terms of the LOI, Radian Arc will deploy its GPU edge infrastructure within GTPL Broadband’s network, with the collaboration focused on providing innovative solutions to universities, governments and enterprises, enhancing their ability to leverage cutting-edge technology for growth and efficiency. The LOI marks the beginning of a three-year collaboration.

Siddharthsinh Vaghela, Head of Enterprise Business at GTPL Broadband Pvt. Ltd, said:

“This partnership aligns with our vision to provide world-class technology solutions to our customers. By integrating Radian Arc’s GPU edge infrastructure, now backed by Submer’s full-stack capabilities, into our network, we aim to deliver transformative AI and compute services that meet the evolving needs of enterprises and consumers across India.”

Key Highlights of the Partnership:

  • Full-Stack Backing: As part of Submer and InferX, the GTPL deployment benefits from Submer’s end-to-end AI infrastructure capabilities – including liquid cooling, datacenter design and build, IT services and a land and power portfolio spanning the UK, USA, India and the Middle East.
  • Infrastructure Deployment: GTPL Broadband will invest in GPU-based points-of-presence, including CPUs, switches and associated hardware.
  • Joint Sales Efforts: Both companies will collaborate to offer DaaS and GPUaaS services to universities, governments and enterprises.

David Cook, CEO at Radian Arc, commented:

“This partnership with GTPL Broadband demonstrates the strength of our platform and what becomes possible now that Radian Arc is part of Submer. As InferX, we can offer GTPL and their customers a truly differentiated proposition – sovereign GPU compute, deployed at the edge within their network, backed by a full-stack AI infrastructure platform with world-class cooling, datacenter and cloud capabilities. It’s a powerful combination, and India is an incredibly exciting market for us.”

For more information, get in touch with us here.

The post Radian Arc, a Submer company, and GTPL Broadband sign Letter of Intent to deploy GPU-as-a-Service and AI solutions across India appeared first on Submer.

]]>
Submer acquires Radian Arc to provide full-stack AI infrastructure, from core datacenters to edge compute https://submer.com/blog/press/radian-arc-acquisition-to-provide-full-stack-ai-infrastructure-from-core-datacenters-to-edge-compute/ Tue, 10 Feb 2026 07:05:14 +0000 https://submer.com/?p=27147 Submer acquires Radian Arc Operations Pty Ltd, the established provider of an infrastructure-as-a-service (IaaS) platform for running sovereign, telco-focused GPU cloud services.

The post Submer acquires Radian Arc to provide full-stack AI infrastructure, from core datacenters to edge compute appeared first on Submer.

]]>
Barcelona, Spain – February 10, 2026 – Submer, the market-leading AI infrastructure provider, has today announced that it will acquire Radian Arc Operations Pty Ltd, an established provider of an infrastructure-as-a-service (IaaS) platform for running sovereign, telco-focused GPU cloud services.

The acquisition completes Submer’s full-stack cloud offering, bringing together InferX, Submer’s NVIDIA Cloud Partner (NCP) platform launched earlier this year, with Radian Arc’s carrier-embedded GPU edge computing platform. With Radian Arc deployed across 70+ telecom and edge compute customers globally and thousands of GPUs in operation, Submer’s combined footprint spans North America, Europe, the UK, India, the Middle East, and Asia-Pacific.

Radian Arc’s platform is widely used by telecoms companies inside their networks to support cloud gaming and AI workloads, which demand both high performance and low latency. By embedding AI infrastructure within the operator’s network, the platform delivers true data sovereignty: data is processed in-country, within local infrastructure, and integrated with telco billing and data systems.

Patrick Smets, CEO at Submer, said:

“This acquisition of Radian Arc completes our full-stack cloud infrastructure. Bringing Radian Arc together with InferX, our AI operations and delivery platform, forms a dual-plane, sovereign, telco-focused cloud offering that is highly competitive in today’s AI datacenter market.”

David Cook, CEO at Radian Arc, commented:

“We have built our platform in close cooperation with our customers and partners, allowing us to develop a powerful model that demonstrably works at scale. By joining Submer’s established partner ecosystem, we are now in a position to accelerate delivery of sovereign AI infrastructure, with lower latency, to telecoms operators worldwide.”

The acquisition brings Submer a diversified, long-term customer base with real-world, monetizable use cases already in operation. Submer is building the AI factories of the future at speed and at scale, supporting core datacenters with edge compute to deliver a complete AI cloud solution. 

Smets added:

“Built on ten years of liquid cooling leadership, Submer has evolved into a full-stack AI datacenter provider, fully accountable from chip to operation. Joining forces with the Radian Arc team and their edge compute platform is an exciting next step, further strengthening our position as the single accountable partner for end-to-end AI infrastructure.”

Submer’s full stack incorporates:

  • Access to significant land and power pipelines – exceeding 5GW across the UK, USA, India, and the Middle East through partner consortiums, enabling rapid deployment of next-generation AI infrastructure
  • A complete AI cloud business unit – combining InferX (NCP), Radian Arc’s sovereign and telco-grade edge cloud capabilities, and AI inference platforms, enabling scalable and monetisable AI workloads through partner-enabled ecosystems
  • End-to-end design and build capabilities – delivering turnkey, modular AI datacenters and supporting large-scale enterprise and hyperscale-class deployments across Europe and the United States
  • Deep IT and liquid cooling expertise – spanning in-house system design, installation, advanced liquid cooling and integrated AI compute platforms with networking and storage, delivered directly or through strategic partners

For more information, get in touch with us here.

The post Submer acquires Radian Arc to provide full-stack AI infrastructure, from core datacenters to edge compute appeared first on Submer.

]]>
Submer and Anant Raj partner to accelerate sovereign, AI-ready infrastructure across India https://submer.com/blog/press/partnering-with-anant-raj-part-to-accelerate-sovereign-ai-ready-infrastructure-across-india/ Sun, 08 Feb 2026 07:02:58 +0000 https://submer.com/?p=27150 Submer has announced a strategic collaboration with Anant Raj Cloud to develop fully operational AI-ready datacenters across India.

The post Submer and Anant Raj partner to accelerate sovereign, AI-ready infrastructure across India appeared first on Submer.

]]>
Barcelona, Spain – February 8, 2026 – Submer, the full-stack AI infrastructure provider, is today announcing a strategic collaboration with Anant Raj Cloud, a subsidiary of Anant Raj Limited, to develop fully operational AI-ready datacenters across India. The collaboration enables the rapid deployment of high-density, energy-efficient computing platforms designed to support sovereign and enterprise AI workloads at scale.

This partnership serves as a primary example of the EU-India Trade Deal in action, bringing world-class liquid-cooling technology from Spain and combining it with the robust infrastructure and operational support of India. Crucially, the collaboration establishes a single, accountable source for AI datacenter design, build and scalable operation. Combined with Submer’s neocloud and inference platform, InferX, the partnership enables customers to access AI-ready infrastructure and compute through a unified, end-to-end model aligned with India’s sovereignty requirements.

Anant Raj plays a foundational role in India’s AI ecosystem by developing the physical infrastructure on which AI runs. With campuses in Manesar and Panchkula, Haryana, the company is expanding from traditional colocation and cloud services into utility-grade AI infrastructure designed for high-density, GPU-intensive workloads.

The announcement comes at a transformative moment for the industry, with the Union Budget 2026-27 laying a strong foundation for the AI datacenter and semiconductor ecosystem. These initiatives are set to significantly boost global cloud and AI datacenter investments in India.

Patrick Smets, CEO at Submer, said:

“India is at a pivotal moment in its digital transformation. By combining Submer’s modular datacenter infrastructure, liquid cooling technologies and prefabricated MEP systems with Anant Raj’s existing Data Center Infrastructure & Cloud Services and campus development capabilities, we bring high-performance AI compute online fast while significantly reducing environmental impact.”

Dev Tyagi, President of UKI, India, and Asia at Submer, said:

“This partnership supports far higher computing capacity in the same physical footprint and establishes a blueprint for industrialized, AI-application-ready datacenters that can be replicated across India. This creates a sovereign and sustainable path for AI adoption, with speed and scale.”

Mr. Amit Sarin, Managing Director, Anant Raj Limited, said:

“We are proud to lead AI adoption in India. Partnering with Submer and InferX to deliver sustainable AI datacenter and cloud services at speed provides the perfect solution to support our economic growth. This collaboration expands access to high-performance computing while advancing India’s AI sovereignty goals and nurturing a scalable, homegrown ecosystem. Our investments are firmly anchored in the Hon’ble Prime Minister’s vision of Digital India and Atmanirbhar Bharat.”

The Memorandum of Understanding is signed and will be exchanged at Anant Raj’s booth at the India AI Impact Summit 2026, reinforcing the company’s position as a key enabler of India’s AI ecosystem. By showcasing sustainable, AI-ready infrastructure at the summit, Submer and Anant Raj are supporting the transition from vision to action while advancing the summit’s “Planet” and “Progress” principles – ensuring AI growth is both environmentally responsible and broadly accessible.

For more information, get in touch with us here.

The post Submer and Anant Raj partner to accelerate sovereign, AI-ready infrastructure across India appeared first on Submer.

]]>
AI Datacenter trends shaping 2026 https://submer.com/blog/ai-datacenter-trends-shaping-2026/ Fri, 06 Feb 2026 15:02:33 +0000 https://submer.com/?p=27120 What are the predictions for datacenters during 2026? How is AI infrastructure shaping the industry forward? Let's explore the key AI datacenter trends to watch in 2026.

The post AI Datacenter trends shaping 2026 appeared first on Submer.

]]>
The past few years have been dominated by AI – the power of AI, the need for AI, the future of AI. While many of us have started to recognize and tune out the hyperbole, that hasn’t diminished the fact that AI is, and will continue to be, a hugely important topic, tool and technology.

But as businesses and the world at large race to embrace, develop and implement their AI strategies, the need for high-performance compute at scale has never been higher. This year will be a race with no finish line in sight – a race to deliver the infrastructure that the world needs to keep pushing the AI envelope.

Key AI datacenter trends to watch in 2026

So, what does that mean for datacenters and their value chain? There are some key trends that will both dictate direction and drive the industry forward. These are the key AI datacenter trends shaping 2026.

The need for speed, through modular design

The technology industry has always moved fast, but the need and unprecedented demand for AI has resulted in a similarly unprecedented demand for datacenter infrastructure to deliver it. This year we’ll be seeing the time from design to deployment for new datacenters shorten considerably.

Several key factors will contribute to this expedited deployment. Standardised reference designs for AI datacenters will significantly reduce planning stages, while modular construction will allow for faster build times and scalability. Also, a single-source datacenter infrastructure partner like Submer, with accountability over the entire supply chain, can better guarantee timelines and reduce risk across the entire process.

One of our goals for 2026 is to deploy AI datacenters faster than ever and keep our customers ahead in the AI race.

Brownfield datacenters gain momentum in the AI era

A greenfield datacenter build will benefit from complete flexibility of design and construction, along with the potential to bake in headroom for expansion and scaling. But ensuring that a greenfield site has access to the necessary utility and network infrastructure – power, water, fiber, etc. – is a major consideration. There’s also the challenge of obtaining the required permits and dealing with any local objections to the proposed build.

By contrast, a brownfield datacenter build will inherit the existing infrastructure, already have the required permits, and will not be subject to local objections. While the scope of flexibility and scale may not be as extensive, a brownfield build can deliver something even more valuable – deployment speed. Moreover, many existing locations run inefficiently, with high PUEs and large overhead costs. Retrofitting such datacenters comes with the added benefit of unlocking extra room for compute. In 2026, when the race to deliver more AI compute is paramount, going brownfield can help deliver that need for speed.
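The retrofit benefit described above is simple arithmetic: within a fixed grid connection, every point of PUE improvement frees power for IT load. A minimal sketch, where the 10 MW site and the PUE figures of 1.8 and 1.1 are illustrative assumptions, not Submer data:

```python
# PUE = total facility power / IT equipment power, so for a fixed
# facility envelope, IT power available = facility power / PUE.

def it_power_available(facility_power_mw: float, pue: float) -> float:
    """IT load that fits within a fixed facility power budget."""
    return facility_power_mw / pue

site_mw = 10.0                                   # assumed grid connection
before = it_power_available(site_mw, pue=1.8)    # assumed legacy air-cooled site
after = it_power_available(site_mw, pue=1.1)     # assumed liquid-cooled retrofit

print(f"IT load before retrofit: {before:.2f} MW")          # 5.56 MW
print(f"IT load after retrofit:  {after:.2f} MW")           # 9.09 MW
print(f"Extra compute unlocked:  {after - before:.2f} MW")  # 3.54 MW
```

On these assumed numbers, the same site supports roughly 64% more IT load after the retrofit, before any new construction.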

The productization of datacenters

Traditionally, datacenters have been seen as unique projects, requiring significant planning, development, build and deployment cycles. The custom nature of datacenters contributed to the scale and complexity of the project, and the deployment time associated with it. However, a distinct change is happening in our industry – a move where datacenters are no longer classified as projects but are instead seen as products.

This productization of datacenters will be driven by defined reference designs and modular construction, allowing standardised datacenter products to be rolled out significantly faster and more cost-effectively.

Liquid cooling, an essential for high-density AI infrastructure

Heat has always been a challenge when designing and deploying datacenters, but ever-more powerful high-density compute required by AI workloads demands a level of cooling far beyond the traditional air-cooled solutions.

In 2026, liquid cooling is no longer optional; it’s imperative. High-density racks demand liquid cooling solutions to ensure optimal performance and resilience. Today, any datacenter partner must have experience and expertise in liquid cooling – both Direct Liquid Cooling and immersion – to build out effective AI infrastructure.

With its unrivalled experience in liquid cooling, Submer is ideally placed to develop the datacenter infrastructure that the age of AI demands.

The era of Edge

The age of AI has brought with it the need for large-scale, high-performance datacenters, or AI factories. But while these huge compute palaces are integral to delivering AI and HPC workloads, there is also a clear need for more localised, low-latency data processing.

We’ll undoubtedly be seeing more need for, and investment in, the Edge – bringing compute closer to where the data is being captured and removing the latency that would otherwise be introduced by relying on a distant cloud datacenter. Whether it’s the explosion of IoT devices and sensors or the implementation of autonomous vehicles, the need to gather and process data with as little delay as possible will drive more compute to the edge, creating an edge-to-core datacenter model.

While it’s the massive AI factories grabbing all the headlines and exposure right now, Edge compute will become a major focus in 2026.

Power strategy and AI growth zones reshape datacenter planning

While brownfield datacenter builds can circumvent some of the early-stage challenges, such as utility infrastructure and permits, there will still be a need for new greenfield sites. And when you’re working with a blank canvas, ensuring there is adequate power for the planned build is crucial.

With governments pushing for both AI investment and AI sovereignty, we could see incentives to encourage the construction of AI factories in certain territories. The UK, for example, has already announced a plan for AI Growth Zones, ensuring adequate power and fast-tracked planning for datacenter builds in these areas. We can also expect to see more use of alternative power to facilitate datacenter needs – solar, wind, hydro, etc. Not only can alternative energy make an otherwise unsuitable site viable, it can also advance the sustainability goals of the build.

Supercharged silicon, driving higher density AI compute

Even with the slowdown of Moore’s Law, it’s impossible to deny the exponential increase in compute power over time, and that performance will only continue to increase in 2026. As fabrication processes continue to shrink, the core density and performance of AI superchips will keep rising.

Put simply, the more transistors that can be packed into each silicon chip (manufactured from a single silicon wafer), the less physical space will be required to deliver a defined level of performance. Increased performance per chip equates to more performance per rack, and exponentially increased performance across an entire datacenter.

For anyone who has worked in high performance computing for a while, the speed of progress and increased performance delivered by each new generation of chip isn’t surprising. But with workloads becoming ever-more complex, the need for more powerful, higher density compute has never been greater.

2026 has only just begun, but it’s already clear that some of the AI datacenter trends above will shape how datacenters evolve over the coming months and beyond. One thing is certain: the need for datacenters and cloud infrastructure is only going to increase, and the efficiency and sustainability of that infrastructure must keep pace.

The post AI Datacenter trends shaping 2026 appeared first on Submer.

]]>
Driving the Age of Intelligence. See it unfold in Hawaii https://submer.com/blog/events/driving-the-age-of-intelligence-see-it-unfold-in-hawaii/ Fri, 02 Jan 2026 09:02:33 +0000 https://submer.com/?p=27094 With CleanSpark, we’re building an integrated infrastructure platform combining sustainable power, modular liquid-cooled data centers, and high-performance AI environments. Meet us at PTC Hawaii to see how our partnership can benefit you.

The post Driving the Age of Intelligence. See it unfold in Hawaii appeared first on Submer.

]]>
The AI era is here, and it demands a fundamental change in the delivery of scaled global compute. The demand for power is insatiable, density is accelerating, and performance requirements are changing. In 2026, the question isn’t if you’ll scale intelligent compute; it’s how you do it responsibly, efficiently, and profitably. At PTC’26 in Honolulu (18–21 January 2026), two trusted partners, CleanSpark and Submer, are unveiling an integrated ecosystem that spans power and infrastructure. If you’re planning next-gen capacity, this is your launch pad.

Power at scale: CleanSpark’s gigawatt advantage 

The foundation of any AI datacenter strategy is reliable power and land. CleanSpark has a portfolio of more than 1 GW in contracted power and delivered infrastructure, and over 2 GW in the immediate pipeline, combining land, power, and delivery to develop AI-ready campuses faster and more efficiently than traditional models. It’s a platform specifically oriented to meet the surge in intelligent compute. The emerging AI factories need industrialized datacenter development to deliver on time and on budget.

Liquid-cooled datacenters: Submer’s proven ecosystem 

Beyond secured power, the next challenge is cooling. Traditional air cooling struggles at high density, sometimes consuming up to 40% of a facility’s total energy. Liquid cooling flips that paradigm. Submer’s solutions reduce cooling energy dramatically, lower fan power, and can reach PUE levels as low as ~1.03, with the added benefit of heat reuse. Just as critical: liquid cooling can be engineered for zero direct water consumption via dry cooling, adding geographic flexibility while driving space efficiency – perfect for the AI clusters that define today’s workloads.
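The two figures above tie together through the standard PUE definition: total facility energy divided by IT equipment energy. A minimal sketch of that arithmetic, using the 40% cooling share and ~1.03 PUE mentioned in the text, plus the simplifying assumption that all non-cooling energy goes to IT:

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT energy.

def pue_from_cooling_share(cooling_share: float) -> float:
    """PUE when cooling consumes `cooling_share` of total facility energy
    and all remaining energy goes to IT (other overheads ignored)."""
    return 1.0 / (1.0 - cooling_share)

def overhead_per_it_watt(pue: float) -> float:
    """Non-IT energy drawn per watt of IT load, for a given PUE."""
    return pue - 1.0

# Air cooling at a 40% cooling share implies a PUE of roughly 1.67...
print(f"Air-cooled PUE: ~{pue_from_cooling_share(0.40):.2f}")
# ...whereas a PUE of 1.03 means only ~0.03 W of overhead per IT watt.
print(f"Overhead at PUE 1.03: {overhead_per_it_watt(1.03):.2f} W per IT watt")
```

In other words, on these simplified assumptions the facility overhead drops from about two thirds of the IT load to a few percent of it.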

The future of AI infrastructure starts in Hawaii 

AI growth is colliding with physical and economic limits: power availability, grid constraints, cooling efficiency, and latency. Addressing these factors can be challenging. By combining CleanSpark’s power + land and Submer’s liquid cooling solutions and Design & Build services, operators gain one ecosystem to plan, deploy, scale, and profit, sustainably. 

This isn’t just future-proof; it’s next-generation performance, delivered today. Approvals and certifications for liquid-cooled infrastructure designs are advancing; deployment models are modular and prefabricated. Our ecosystem takes datacenter construction out of the field and accelerates it inside the factory. The outcome is a cleaner, denser, smarter datacenter that earns from day one.

Join us at PTC’26 

PTC’s annual conference is where digital infrastructure leaders converge – C-level executives, technologists, investors, and academics – to shape what’s next in ICT and data infrastructure. There’s no better stage to explore power-to-performance with the teams building it. Join us 18–21 January 2026 at the Hilton Hawaiian Village and step into the Age of Intelligence.

Book time in our PTC’26 suite to explore sites, designs, and AI economics tailored to your roadmap. Secure your time now. Our team will coordinate a meeting to discuss what matters most to you and turn your next megawatt into sustainable, monetized compute.

Looking forward to catching up!

The post Driving the Age of Intelligence. See it unfold in Hawaii appeared first on Submer.

]]>