RCRTech – News | Insights | Impact

Meta reveals aggressive roadmap for four new MTIA chips by 2027
https://rcrtech.com/semiconductor-news/meta-mtia-four-chips/
By Christian de Looper | Mon, 16 Mar 2026 23:17:28 +0000

Meta is accelerating its MTIA program

In sum – what we know:

  • An aggressive roadmap – Meta plans to release four new generations of MTIA chips within two years: MTIA 300, 400, 450, and 500.
  • The shift to inference – Later generations prioritize generative AI inference costs over training, unlike mainstream chips adapted from training hardware.
  • Quick release cadence – Meta has developed the capacity for a six-month release cycle, significantly faster than the industry standard of one to two years.

Meta has laid out a roadmap for its upcoming MTIA chips, with a hefty four new chips set to roll out over the next two years. While it’s been clear that Meta is working hard on in-house silicon, this represents a pretty significant gear-change, and Meta has clearly come a long way from the experimental efforts it announced in 2023.

Here’s a look at the new chips and what makes them unique. The real question going forward is whether this aggressive push actually delivers the efficiency and cost improvements the company is counting on — and what it does to the relationship with suppliers like Nvidia.

The new chips

The new chips span workflows, and arc from specialized to general-purpose. The first, called the MTIA 300, is already in production, and is purpose-built for ranking and recommendations training — the bread and butter of Meta’s ad and content systems. It’s the most narrowly scoped chip in the lineup and represents where Meta’s custom silicon efforts stand today.

Things get considerably more ambitious from there. The MTIA 400 is the first chip designed to handle all workloads, with generative AI inference as its primary target. The MTIA 450 follows the same all-workload philosophy but pushes further on gen AI inference optimization, with ranking, recommendations, and training as secondary priorities. The MTIA 500 closes out the roadmap with the same optimization focus as the 450, presumably bringing additional performance or efficiency improvements.

The 400, 450, and 500 series are slated to roll out by the end of 2027, with all four generations expected to be running in production by the end of that window. That is an extraordinary volume of silicon to bring online in such a compressed timeframe, and it’ll be worth watching closely whether Meta can sustain quality and reliability while moving that fast.

Technical architecture

Modularity is a defining principle of Meta’s efforts. New MTIA generations are engineered to slot into Meta’s existing rack infrastructure without forcing full system redesigns, which obviously makes sense given how quickly new chips are being announced. Beyond that rack-level modularity, the chips are also built on a chiplet architecture — the MTIA 300, for instance, comprises one compute chiplet, two network chiplets, and several HBM stacks. Each compute chiplet houses a grid of processing elements, or PEs, with some redundancy baked in to improve yield. Each PE packs two RISC-V vector cores, a dot product engine for matrix multiplication, a special function unit for activations, a reduction engine for inter-PE communication, and a DMA engine for moving data in and out of local scratch memory.
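The package composition described above can be sketched as a toy data model. This is purely illustrative: Meta has not published PE grid sizes, and the HBM stack count below is a placeholder for the article's "several".

```python
from dataclasses import dataclass

# Toy model of the chiplet composition described in the article — an
# illustration, not a hardware spec. HBM stack counts are placeholders.

@dataclass(frozen=True)
class ProcessingElement:
    # Functional units per PE, as listed in the article.
    vector_cores: int = 2              # RISC-V vector cores
    has_dot_product_engine: bool = True    # matrix multiplication
    has_special_function_unit: bool = True # activations
    has_reduction_engine: bool = True      # inter-PE communication
    has_dma_engine: bool = True            # scratch-memory transfers

@dataclass(frozen=True)
class MtiaPackage:
    name: str
    compute_chiplets: int
    network_chiplets: int
    hbm_stacks: int  # placeholder count — the article says only "several"

# MTIA 300: one compute chiplet, two network chiplets, several HBM stacks.
mtia300 = MtiaPackage("MTIA 300", compute_chiplets=1, network_chiplets=2, hbm_stacks=4)
# MTIA 400 doubles compute density by pairing two compute chiplets
# (network chiplet count here is assumed, not published).
mtia400 = MtiaPackage("MTIA 400", compute_chiplets=2, network_chiplets=2, hbm_stacks=4)
```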

The MTIA 400 doubles compute density by combining two compute chiplets and introduces support for enhanced MX8 and MX4 low-precision formats important for efficient gen AI inference. A rack of 72 MTIA 400 devices, connected via a switched backplane, forms a single scale-up domain. The MTIA 450 then doubles HBM bandwidth over the 400 — jumping from 9.2 TB/s to 18.4 TB/s — while boosting MX4 FLOPS by 75% and adding hardware acceleration to alleviate Softmax and FlashAttention bottlenecks. The MTIA 500 pushes the modular philosophy even further, adopting a 2×2 configuration of smaller compute chiplets surrounded by HBM stacks, two network chiplets, and an SoC chiplet providing PCIe connectivity to the host CPU and scale-out NICs. It bumps HBM bandwidth another 50% to 27.6 TB/s and offers up to 512 GB of HBM capacity.
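The MX8 and MX4 formats mentioned above are block-scaled ("microscaling") number formats: one shared scale per small block of elements, with each element stored at very low precision. Below is a heavily simplified sketch of that idea — not the actual OCP MX encoding, which uses 32-element blocks, a shared power-of-two scale, and FP4/FP8 element types; this version uses an 8-element block and a signed integer grid just to show why the shared scale makes low-bit inference cheap.

```python
import math

def quantize_mx_block(values, elem_bits=4, block_size=8):
    """Toy block-scaled ("MX-style") quantizer: one shared power-of-two
    scale per block, plus very low-precision elements. Simplified for
    illustration — not the real OCP Microscaling encoding."""
    levels = 2 ** (elem_bits - 1) - 1  # symmetric signed grid, e.g. ±7 for 4 bits
    out = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        amax = max(abs(v) for v in block) or 1.0
        # Smallest power-of-two scale that keeps the block in range.
        scale = 2.0 ** math.ceil(math.log2(amax / levels))
        q = [max(-levels, min(levels, round(v / scale))) for v in block]
        out.extend(v * scale for v in q)  # dequantize for comparison
    return out

vals = [0.1, -0.5, 0.25, 0.9, -1.2, 0.05, 0.7, -0.3]
deq = quantize_mx_block(vals)
```

Each element costs only 4 bits plus a small share of the per-block scale, which is where the inference-efficiency gains the article describes come from.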

From MTIA 300 to MTIA 500, Meta says HBM bandwidth increases by 4.5x and compute FLOPS jumps 25x when comparing MX4 performance across the lineup. For context, the MTIA 400 already hits 12 PFLOPs in MX4, and the MTIA 500 reaches 30 PFLOPs — alongside 5 PFLOPs in BF16. Those are numbers that put it in the neighborhood of leading commercial accelerators, though independent validation is still lacking.
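As a quick arithmetic check of those figures (the TB/s and PFLOPs numbers are quoted from the article; the MTIA 300 values are back-calculated from the stated ratios, not published specs):

```python
# Bandwidth figures quoted above, in TB/s.
mtia400_bw = 9.2
mtia450_bw = mtia400_bw * 2      # "doubles HBM bandwidth" -> 18.4
mtia500_bw = mtia450_bw * 1.5    # "another 50%" -> 27.6

# Back-calculated (implied, not published) MTIA 300 figures.
implied_mtia300_bw = mtia500_bw / 4.5   # from the quoted 4.5x increase
implied_mtia300_mx4 = 30 / 25           # PFLOPs, from the quoted 25x jump
```

The numbers are internally consistent: 9.2 TB/s doubled and then raised 50% gives the quoted 27.6 TB/s, and the 4.5x/25x ratios imply roughly 6.1 TB/s and 1.2 MX4 PFLOPs for the MTIA 300.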

On the software side, the chips are built on widely adopted standards, including PyTorch, vLLM, Triton, and Open Compute Project specifications. That’s a notable choice because it signals Meta isn’t trying to build a fully proprietary, walled-off ecosystem. Since PyTorch originated at Meta, the MTIA stack takes a PyTorch-native approach — developers can use torch.compile and torch.export to capture and optimize model graphs without any MTIA-specific rewrites, and models can run on both GPUs and MTIA simultaneously. Under the hood, MTIA-specific compilers built on Torch FX IR, TorchInductor, Triton, MLIR, and LLVM translate those graphs into optimized device code. Meta has also integrated vLLM support through a plugin architecture, replacing key operators like FlashAttention and fused LayerNorm with MTIA-specific kernels and inheriting features like prefill-decode disaggregation and continuous batching. The runtime, notably, uses a Rust-based user-space driver rather than a traditional in-kernel Linux driver, with firmware written in bare-metal Rust for low latency and built-in memory safety.
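The operator-replacement idea — swapping generic ops like LayerNorm for device-specific kernels through a plugin — can be illustrated with a minimal registry pattern. This is a conceptual sketch only, not Meta's or vLLM's actual plugin code; every name here is invented for illustration.

```python
# Hypothetical kernel registry: maps (op, backend) to an implementation,
# with a generic "reference" fallback — a sketch of how a plugin might
# substitute device-specific kernels for stock operators.

_KERNELS = {}

def register_kernel(op_name, backend):
    def deco(fn):
        _KERNELS[(op_name, backend)] = fn
        return fn
    return deco

def dispatch(op_name, backend, *args):
    # Prefer a backend-specific kernel; fall back to the reference version.
    fn = _KERNELS.get((op_name, backend)) or _KERNELS[(op_name, "reference")]
    return fn(*args)

@register_kernel("layernorm", "reference")
def layernorm_ref(xs):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / (var + 1e-5) ** 0.5 for x in xs]

@register_kernel("layernorm", "mtia")
def layernorm_mtia(xs):
    # A real plugin would invoke a fused device kernel here; this sketch
    # reuses the reference math to stay runnable.
    return layernorm_ref(xs)

out = dispatch("layernorm", "mtia", [1.0, 2.0, 3.0, 4.0])
```

The design point is that callers never change: the registry decides at dispatch time whether an MTIA-optimized kernel or the portable reference implementation runs, which is what keeps models portable between GPUs and custom silicon.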

Aligning with common frameworks makes it straightforward to move workloads between custom and third-party hardware, which is a smart hedge given the inherent risks of any custom silicon program. Meta says these chips deliver better compute and cost efficiency than general-purpose silicon for its specific ranking, recommendation, and content delivery workloads, though independent benchmarks to back those claims are still hard to come by.

The inference strategy

Strip away the details and the MTIA push boils down to reducing dependence on external GPU suppliers — long the rationale for in-house silicon. Nvidia dominates the AI hardware market, and supply constraints have been a persistent headache for every major tech company trying to scale AI infrastructure. By shifting massive inference workloads onto custom chips, Meta gets to control its own infrastructure margins and drive down cost-per-workload without being at the mercy of someone else’s production schedules or pricing.

Meta, however, frames this as a “portfolio approach” rather than a full replacement of third-party hardware. The company acknowledges no single chip covers all its needs and says it’ll keep sourcing from multiple suppliers. MTIA sits at the center of the strategy, but it’s not the only chip in the company’s stack. Still, the direction is clear — Meta wants far more control over its silicon destiny.

The rapid iteration cycles the company is targeting — six months or less between generations — are built around the idea that AI techniques and workload demands can shift dramatically in a matter of months. Being able to iterate on hardware faster than traditional semiconductor timelines could be a genuine edge. But faster cycles also mean more surface area for things to break, and the industry’s longer timelines exist for good reason. After all, chip design and fabrication are extraordinarily complex.

Meta’s announcement fits into an accelerating industry-wide shift toward proprietary AI infrastructure. Google has TPUs, Amazon has Trainium and Inferentia, Microsoft has Maia. MTIA’s expansion slots right into that trend. Meta, however, does still have a multi-billion-dollar relationship with Nvidia. For now, Meta appears to be threading the needle by running both tracks simultaneously. As MTIA scales, though, the balance of that relationship will inevitably shift.

Promise and peril – in every AI play
https://rcrtech.com/rcrwirelessnews/promise-and-peril-in-every-ai-play/
By James Blackman | Mon, 16 Mar 2026 18:16:25 +0000

There’s something in here about the promise and peril of this whole AI story, and of quite how critical infrastructure, including telco networks, has become. The lead story – a long interview with Microsoft from MWC – is a good measure of the market’s progress: a handful of ‘frontier’ firms embedding hundreds of agents across their operations, and a killer use case with Vodafone about shortening RFI/RFP response times for enterprise clients from weeks to minutes. That is a lot of freed-up time, right there – and a clear ROI, assuming the time savings can be measured in money terms.

 

But all of this has to work, or the house comes down. Telcos need AI to work – for new growth, value, relevance. As we know. So do data center companies, and everyone associated with the AI build-out – or so they will, at some point, once the ‘factories’ are built and the models are trained, and AI does some proper work inside enterprises. But so does every industry, actually – for all the same reasons. In the end, AI needs AI to work – if its promise is not to go pop. There is a lot riding on this, as we see; it is a fine balancing act.

 

Meta is deepening its AI investments and cutting its workforce at the same time – which is about cost pressures, talent realignment, and probably total economic overhaul. External shocks, just reported today, show how politics and infrastructure are now inseparable from AI strategy – global geopolitics, zeroed in on the Middle East, have stalled the biggest submarine cable project in the world; France is preparing a nuclear power push for sovereign AI data centers; Telia has a big sovereign AI project of its own in Sweden. 

 

This is global and critical, and it all feels dicey. Even Microsoft’s hopeful AI hit parade with Vodafone et al is almost upended by its talk at the end about the challenge to make the tech scale. Enterprises are “horrified” right now, it says, that AI agents will swarm and riot. How can they be identified, managed, controlled? If not sorted, fragmented data, inconsistent semantics, and weak governance will trap some AI deployments in pilot purgatory (best case), and, at worst, erode trust in the whole AI economy – hurting those who built it, and also those who bought it.  

 

It starts to sound like IoT. Except there are two differences. One: the whole world is watching this one; more than that, knowingly or not, everyone is invested in it. AI is totally critical – in terms of top-line economic money-flow today, and in terms of bottom-line political and economic power tomorrow. Two: the hyperscalers are in charge; or at least their models, about scalable elasticity, represent the commercial pole, and architectural north star. Of course, the hyperscalers messed up (kind of) with IoT, and (some) telcos have failed with private 5G – critical enabling tech, too.


So, the momentum is undeniable – but so are the risks. AI is bigger and badder (interpret as you please), and just much easier to know. Therefore, right now, writing this, logic seems to say that the governance and scale challenges will be managed – thanks also to open networking initiatives in telecoms. Data will be controlled and orchestrated between countries, enterprises, technologies, and AI will be used for… everything (have mercy). Happy Monday, everyone.


James Blackman
Executive Editor
RCR Wireless News

RCR Top Stories

Agility is money: ‘Frontier’ telcos like AT&T and Vodafone are deploying hundreds of AI agents across the board, says Microsoft. It details a weeks-to-minutes use case with the latter. But telcos also have three big problems in the AI race.

France’s nuclear bet: France is to leverage its large nuclear power capacity to support the expansion of AI data centers, arguing that stable, low-carbon electricity could give the country an advantage in global AI infrastructure development.

2Africa cable halted: Construction of the massive 2Africa subsea project has halted in the Persian Gulf as local conflict disrupts shipping routes, delaying a key connection to expand internet capacity between Africa, Asia, and the Middle East.

AI service assurance: Telcos are deploying AI in network service assurance, particularly for root cause analysis, anomaly detection, and customer analytics. These tools help as networks grow more complex and automation advances.

eSIM telco growth: The eSIM is becoming the default for phones, glasses, watches, and other devices, which raises an important question, says Motive: are operators ready to activate the next generation of digital services at scale?

In partnership with

AI-Powered Telecom Infrastructure
Supermicro, in collaboration with NVIDIA, delivers AI-powered infrastructure tailored for telcos, enhancing operational efficiency, network management, and customer experiences. Explore now 

Beyond the Headlines

Meta plans layoffs: Meta Platforms is considering large layoffs – potentially 20% of its workforce – to help offset rising AI infrastructure costs, even as it deepens AI investments on the back of a major deal with AI cloud provider Nebius.

AI DC in Tuas: Nxera has opened DC Tuas in Singapore, a 58MW data center built for AI workloads with hybrid cooling and high rack densities – as demand for AI-ready infrastructure continues to accelerate.

Agents of chaos: Enterprises are scrambling as AI workloads splinter across the cloud-edge landscape. Equinix reckons the answer is in neutral interconnection hubs to orchestrate distributed infrastructure, and bring inference closer.

Verizon’s AI nets: Global, programmable, and dense – from cloud to edge: Verizon talked at MWC about how it sees the AI stack evolving for telcos, and why its investments in backbone fiber, metro access, and private networks will win.

100 billion agents: Huawei says AI agents will transform mobile networks, driving demand for higher uplink capacity, real-time connectivity, and new service standards as the industry shifts toward infrastructure specifically designed for AI.

What We're Reading

Telia’s sovereign AI: Telia and Brookfield have partnered on Sweden’s largest sovereign AI initiative. Backed by a SEK95 billion investment, Telia will connect facilities to its fiber network and deliver sovereign cloud services for enterprises.

Cloud boosts telco: Cloud providers boosted telecom equipment demand in 2025, helping the market return to growth after two years of decline. Higher investment in data center networks, including optical transport and routing gear, is the reason.

New Meta chips: Meta has unveiled a roadmap for four generations of its in-house Meta Training and Inference Accelerator (MTIA) AI chips to scale AI services across its platforms. The chips are designed to improve efficiency and reduce costs.

Nokia’s AI optics: Nokia has a new suite of application‑optimized optical transport solutions for AI networks, offering coherent optical products and multi‑fiber amplifiers that boost performance, and lower total cost of ownership.

Keysight test lab: Salience Labs and Keysight are to develop the first testing environment for optical circuit switches. The platform will support validation of photonic switching technology for improving data centre network scalability.

Events

Virtual Program
Explore the technologies, tools, strategies and partnerships powering the next generation of intelligent systems. This is where the backbone of AI innovation takes center stage. Register now 
 
Wi-Fi Forum, January 20th 2026
Join this RCR Wireless News event to understand the current state of Wi-Fi as we examine the myriad evolving use cases and monetization strategies being deployed by the industry. Register now

Industry Resources

Webinar, September 18th
The journey to a fully autonomous network – The evolution of network automation and how Amdocs is leading the way

Meta layoffs anticipated as it inks deal with Nebius
https://rcrtech.com/ai-infrastructure/meta-layoffs-anticipated-as-it-inks-deal-with-nebius/
By Susana Schwartz | Mon, 16 Mar 2026 14:31:51 +0000


Meta is up 3% this morning after Reuters reported the company is weighing layoffs for at least 20% of its workforce to offset $135 billion in AI infra spend for 2026. Almost simultaneously, news spread that Meta inked a $27 billion infrastructure deal with neocloud provider Nebius – a deal that includes $12 billion of dedicated capacity and up to $15 billion of additional available compute capacity over a five-year period. At 15,000–16,000 job cuts, this would be Meta’s largest round since its 2022–2023 “year of efficiency,” which eliminated over 21,000 roles.

 

Other hyperscalers to follow suit? In 2026, Meta, Amazon, Alphabet, Microsoft, and Oracle have collectively committed between $660 billion and $690 billion to AI-related Capex, and to offset it, they aim to operate with smaller teams and AI-driven efficiencies.

 

Some further reductions rumored to be pending include: Oracle’s possible 18% workforce cut to fund approximately $50 billion in AI data center expansion; Microsoft’s 5-10% workforce reduction for its $80 billion-plus AI build-out; and Alphabet’s ongoing elimination of about one-third of managers overseeing small teams to fund its $175–$185 billion AI budget. Others that have touted AI automation as a driver of large-scale role displacement include Salesforce, which recently slashed about 5,000 jobs, fintech company Block (formerly Square), which laid off approximately 4,000 people, and UPS, Citigroup, Dell, Intel, TCS, Atlassian, and others with recent announcements. While layoffs in the tech sector are nothing new, the scale and character of layoffs in the AI era are different: companies aren’t cutting thousands of people because they are financially struggling, but because they are racing one another to fund massive AI infrastructure builds. RCRTech will continue reporting on this issue. Be sure to read other RCR news, below in “Top Stories” and “What You Need to Know.”


Susana Schwartz
Technology Editor
RCRTech

AI Infrastructure Top Stories

SoftBank’s Telco AI Cloud: The combination of GPU data centers, AI-RAN-powered Multi-access Edge Computing (MEC), and the Infrinia AI Cloud OS aims to dynamically allocate resources between telecom and AI workloads based on real-time demand.

Cloud needs the network: TIM CEO Pietro Labriola talked about future AI applications, such as drones, and their need for a greater focus on latency, uplink performance, and network capabilities rather than raw download speed.

France touts “nuclear” for AI infra: Speaking at the World Nuclear Energy Summit, French president Emmanuel Macron said the country intends to rely on its nuclear energy capacity to support the rapid expansion of energy-intensive DCs.

AI Today: What You Need to Know

Meta layoffs: Meta, up 3% this morning after Reuters reported the company is weighing layoffs for at least 20% of its workforce, is attempting to offset rising AI infrastructure costs, to the tune of $135 billion in 2026. 

Strait of Hormuz and AI chips: Helium is critical to chip manufacturing, and the war in Iran has sparked fears of a “helium crisis,” with drone strikes shutting down Qatar’s Ras Laffan helium hub, which provides roughly 30% of the global supply.

Tower semi and Oriole deal: Nanosecond optical circuit switching is the name of the game for Tower Semiconductor and Oriole, which are teaming up to deliver ultra-low, deterministic-latency networking for scale-up and scale-out AI.

Google – Accel AI accelerators: Among more than 4,000 applications to the joint AI accelerator for Indian startups run by Google and venture firm Accel, “wrapper” ideas dominated, though none made it into the five startups chosen for the latest cohort.

Cerebras – Oracle partnership: On an analyst conference call, Oracle namedropped AI chip startup Cerebras as a partner, alongside Nvidia and AMD. This could signal a broadening of the ecosystem amid the ongoing chip shortage.

New Micron plant in Taiwan: Micron has completed its acquisition of Powerchip Semiconductor Manufacturing Corporation’s (PSMC) P5 site in Tongluo, Miaoli County, Taiwan. Alongside the existing 300k sq ft, Micron plans an additional 270k sq ft.

Upcoming Events

This one-day virtual event will discuss the critical issues and challenges impacting the AI infrastructure ecosystem, examining its growth and evolution as it scales and the need for flexible, sustainable solutions.

Industry Resources

Nxera launches AI-ready DC as high-density demand rises
https://rcrtech.com/ai-infrastructure-news/nxera-ai-dc/
By Juan Pedro Tomás | Mon, 16 Mar 2026 14:06:24 +0000

Nxera CEO Bill Chang told RCR Wireless News that the facility was designed to support a wide range of computing needs, including traditional enterprise applications as well as more demanding AI and high-performance computing deployments

In sum – what to know:

AI demand drives early commitments – More than 90% of DC Tuas capacity was secured before launch, reflecting strong demand for AI-ready infrastructure in Singapore.

Hybrid cooling for AI workloads – The facility combines air cooling and direct-to-chip liquid cooling to support rising power densities from AI and high-performance computing deployments.

High-density AI infrastructure – DC Tuas supports rack densities of up to 200 kW and expands Nxera’s Singapore capacity to 120 MW.

Strong demand for AI-ready infrastructure helped drive early commitments at data center firm Nxera’s new DC Tuas facility in Singapore, with more than 90% of its capacity secured before the site officially opened, Nxera CEO Bill Chang told RCR Wireless News.

Nxera is the regional data center arm of Singtel Group. The new facility represents Nxera’s largest data center in Singapore and expands the company’s local capacity as demand grows for infrastructure capable of supporting artificial intelligence workloads. The site delivers 58 megawatts of AI-ready capacity and increases Nxera’s total data center footprint in the country to 120 MW.

According to Chang, the facility was designed to support a wide range of computing needs, including traditional enterprise applications as well as more demanding AI and high-performance computing deployments.

“DC Tuas is purpose-built to be able to support a broad spectrum of workloads – from conventional enterprise and cloud applications to next-generation AI and high-performance computing,” Chang said.

The eight-storey facility spans approximately 120,000 square feet and was built to accommodate the higher rack densities required by advanced AI infrastructure. As a carrier-neutral, multi-tenant facility, it allows multiple operators and enterprises to deploy computing resources within the same site.

Chang explained that traditional enterprise and cloud applications typically require less power density and can be supported using conventional cooling technologies.

“Traditional enterprise or cloud workloads typically operate at lower power densities and can be supported through air cooling,” he said.

“In contrast, advanced AI workloads require significantly higher power and heat densities, which call for liquid cooling solutions,” Chang said.

To address this shift, DC Tuas was designed with hybrid cooling capabilities that combine air cooling with direct-to-chip liquid cooling. This approach allows operators to run both conventional workloads and more intensive AI systems within the same facility.

“DC Tuas’ hybrid cooling capabilities offer customers the flexibility to deploy today’s workloads while future-proofing their infrastructure as AI requirements continue to evolve,” Chang said.

Nxera says the facility hosts Singapore’s largest deployment of direct-to-chip liquid cooling within a multi-tenant data center. The technology is designed to remove heat more efficiently from high-performance servers while reducing energy and water consumption compared with conventional cooling systems.

Chang said the company expects computing demands to continue increasing as artificial intelligence models become more complex and computationally intensive.

“As AI adoption accelerates and models become more compute-intensive, power density requirements are expected to continue rising,” he said.

In response to this trend, he said, liquid cooling technologies will become increasingly important in supporting large AI clusters.

“Liquid cooling technologies would thus need to be able to manage higher thermal loads more effectively and efficiently,” Chang said.

The DC Tuas facility was designed to accommodate this trajectory, enabling operators to increase rack power densities as AI infrastructure evolves.

“DC Tuas has been built with this trajectory in mind, enabling it to accommodate increasing power densities over time and support customers as their AI infrastructure and needs grow,” Chang said.

Chang added that facilities designed for efficiency and high-density computing are particularly important in Singapore, where space and resources are limited.

The facility also forms part of the broader digital infrastructure platform operated by Nxera’s parent company, Singtel. The site is integrated with a cable landing station that provides direct access to both domestic and international networks, enabling lower latency connectivity and improved network performance for customers operating large-scale computing workloads.

France looks to nuclear power to support AIDC growth
https://rcrtech.com/ai-infrastructure-news/france-nuclear-aidc/
By Juan Pedro Tomás | Mon, 16 Mar 2026 13:58:28 +0000

The French government sees nuclear power as a stable and low-emission energy source capable of supporting the significant electricity requirements associated with AI systems

In sum – what to know:

Nuclear power to back AI growth – France plans to use its large nuclear energy capacity to support the expansion of AI data centers requiring vast amounts of electricity.

Surplus low-carbon electricity available – The country exported about 90 TWh of decarbonized electricity last year, giving it room to expand computing infrastructure.

AI investment strategy – France hopes its stable nuclear power supply will attract companies building large AI data centers and high-performance computing facilities.

French president Emmanuel Macron said the country intends to rely on its nuclear energy capacity to support the rapid expansion of artificial intelligence infrastructure, particularly energy-intensive data centers.

Speaking at the World Nuclear Energy Summit in Paris, Macron said the country’s extensive nuclear power system gives it a strategic advantage as demand grows for the computing resources required to run advanced AI models.

According to Macron, France’s existing energy mix could help sustain the growing electricity needs of large-scale computing infrastructure. Nuclear power plants already provide the majority of the country’s electricity and produce a substantial volume of low-carbon power that can be directed toward digital infrastructure.

The French president also noted that the country exported roughly 90 terawatt hours of decarbonized electricity last year, most of it generated by nuclear facilities. This surplus capacity, he said, creates an opportunity for France to host additional data centers without putting pressure on domestic electricity supply.

The French government sees nuclear power as a stable and low-emission energy source capable of supporting the significant electricity requirements associated with AI systems. Training and operating large AI models requires huge computing power, which in turn drives high levels of energy consumption in the data centers that run them.

As demand for AI infrastructure accelerates globally, governments and technology companies are increasingly searching for locations that can provide reliable, large-scale electricity supply. Macron said France’s nuclear fleet places the country in a strong position in the global competition to attract investment in artificial intelligence infrastructure.

By combining its existing supply of decarbonized electricity with new investments in computing capacity, France hopes to attract technology companies seeking locations for large AI-focused data center campuses. The strategy forms part of a broader effort by the French government to position the country as a major hub for artificial intelligence development.

RCR Tech recently spoke with two experts who explained the challenges of using nuclear power in AI data centers.

Telco to tech-co to AI-co (get on with it)
https://rcrtech.com/rcrwirelessnews/telco-to-tech-co-to-ai-co/
By James Blackman | Fri, 13 Mar 2026 17:10:46 +0000

Carol, let’s play Countdown – and take four from the top and three from the bottom, and do some dodgy maths. (If you’ve lost us – because you’re not British, or you’re under 45 – then hold tight; we’ll get through.) Actually, let’s take all five from the top of today’s stack – or from any number of stacks over the past few years. Because the story is always the same, right – about telcos becoming tech-cos. Yawn. If I had a dollar for every… But here we are, early 2026, post-MWC, and every operator and every vendor has the same line about transformation.

 

The only difference, versus 2025 or 2024, is that AI is real – or real in the data center, and almost-real outside of it (at the edge, in the hands of enterprises). So now everyone – Huawei, Softbank, TIM, True, as reported below; plus Spirent / Keysight, in a memo to telcos (which makes five) – wants to be some kind of an AI-co, basically. AI-native, ideally; AI-ready, otherwise. Same as all the enterprises they want to sell AI services to on AI networks. All of them have pillars and strategies. Softbank wants to be a “next-gen social infrastructure”.

 

Thai operator True is following an “AI-first telco-tech model”, apparently. TIM in Italy has a line about how the cloud needs the network and AI needs the cloud, and, so… what? Data has to be delivered? Well, who knew? Huawei reckons there will be a hundred billion agents – at some point, on some (presumably all) networks. Apologies; it’s just we’ve heard this before. Telecoms wants a shot in the arm; it wants new ideas and growth; it wants to be cool again. It needs to change. 

 

And Softbank has more to say, actually; it will embed AI infra inside its network to build a structural cloud edge for latency, reliability, sovereignty – for 100 billion agents, an 80/20 shift from training to inference, and a line into AI-hungry enterprises. NTT Data, one from the bottom, is doing the same. Huawei says more too – about a fatter uplink (talked about in private 5G for donkey’s years), agent-to-agent telco chatter, and new KPIs for AI QoS. There’s something there, worth chasing: the real story. Telcos must stop talking about change, and make it.

 

If you do it, it’s done – as I tell my kids about their homework. Easy to say, of course. For telcos, it is a three-way calculation: deploy AI for operations, make networks for AI, try to own the AI customer. Which, to be fair, are stories that also get told. Which brings us to two more links at the bottom of the stack: one, that a bunch of optical vendors (Ciena, Coherent, Marvell etc) are to develop open specs for AI-integrated interconnect solutions; two, that Cisco has new pluggable coherent optics for scale-across networks. 


Both are about this interconnect piece, for distributed training workloads between data centers, which RCR has been harping on about all week. Anyway, the point, again, is that this is where the action is in AI networks, right now. Operators can talk about (infrastructure) change all they want; at some point, soon (likely between their 5G and 6G upgrades), they will have to deliver. Those that have already changed up their fiber systems will surely be well placed. And then the biggest challenge of all: how to front up to enterprises as an AI-co tech-co telco. 


James Blackman
Executive Editor
RCR Wireless News

RCR Top Stories

100 billion agents: Huawei says AI agents will transform mobile networks, driving demand for higher uplink capacity, real-time connectivity, and new service standards as the industry shifts toward infrastructure specifically designed for AI.

Softbank sets stall: Softbank’s ‘telco AI cloud’ seeks to manage AI workloads across a nationwide distributed GPU network. The architecture enables real-time physical AI by offloading complex robotics processing to the edge.

Telco issues for AI: TIM chief Pietro Labriola (also) told MWC that networks, clouds, and AI form a single digital ecosystem – and urged telcos to fight their corner on sovereignty, governance, and new technology cycles.

‘Four big moves’: Thai operator True has a new strategy based on ‘four big moves’: customer experience, AI deployment, digital services, and workforce skills. Like everyone, the firm wants to be more than just a telecom provider.

Telcos must lead: Operators have a chance to claim a central role in the AI economy, says Spirent (now part of Keysight). To seize it, they must move beyond connectivity and deliver trusted, high-performance AI infrastructure and services.

In partnership with

AI-Powered Telecom Infrastructure
Supermicro, in collaboration with NVIDIA, delivers AI-powered infrastructure tailored for telcos, enhancing operational efficiency, network management, and customer experiences. Explore now 

Beyond the Headlines

Telco AI and KPIs: Agentic AI, sustainability mandates, edge-native infrastructure, and AI-augmented workforces are reshaping how operators run networks and serve customers, says enterprise software company IFS.

Singapore AI push: Bridge Data Centers is to invest up to S$5 billion to expand AI data centers in Singapore, while also exploring hydrogen and nuclear energy options to support rising regional demand for compute infrastructure.

Agents of chaos: Enterprises are scrambling as AI workloads splinter across the cloud-edge landscape. Equinix reckons the answer is in neutral interconnection hubs to orchestrate distributed infrastructure, and bring inference closer.

About AI infra: Get the latest on AI infrastructure – courtesy of Susana with the AI Infrastructure newsletter (from RCRTech); yesterday’s edition, still hot, discusses AI’s impact on electricity costs, polarised between lower rates and regional bias.

Trump deadline: The March 11 deadline passed for US agencies to act on Trump’s AI order, requiring the Commerce Department to evaluate state AI laws deemed “onerous” – raising uncertainty and potential federal‑state clashes over AI policy.

What We're Reading

Open CPX MSA: Optical vendors Ciena, Coherent, Marvell, Molex, Samtec, and TeraHop have formed the Open CPX MSA to advance standards for co-packaged optical interconnects, required for next-generation AI data centre infrastructure.

Nvidia AI factories: NTT Data has announced Nvidia-powered “AI factories” to help enterprises operationalize AI. The platform is geared to support full-stack AI projects, enabling enterprises to accelerate adoption and measure returns.

Cisco AI optics: Cisco has new optical networking gear to support AI traffic growth. Updates include the Open Transport 3000 line system and higher-density NCS 1014 platforms – designed to increase fiber capacity, efficiency, scalability.

Singtel AI fund: Singtel venture arm Innov8 has launched a $250 million AI fund for high-growth AI startups. It aims to accelerate AI adoption across Singtel’s operations by backing tech that can be integrated into its networks and services.

Simpler private 5G: Antevia and Benetel have a deal to integrate the former’s 5G SHIFT platform with the latter’s open RAN radios with a mission to accelerate scalable private 5G deployments for mission-critical enterprise environments.

Events

Virtual Program
Explore the technologies, tools, strategies and partnerships powering the next generation of intelligent systems. This is where the backbone of AI innovation takes center stage. Register now 
 
Wi-Fi Forum, January 20th 2026
Join this RCR Wireless News event to understand the current state of Wi-Fi as we examine a myriad of evolving use cases and monetization strategies being deployed by industry. Register now 

Industry Resources

Webinar, September 18th
The journey to a fully autonomous network – The evolution of network automation and how Amdocs is leading the way

]]>
Microsoft, Google DeepMind, OpenAI back Anthropic in lawsuit against Pentagon https://rcrtech.com/ai-infrastructure/microsoft-google-deepmind-openai-back-anthropic-in-lawsuit-against-pentagon/ Susana Schwartz]]> Fri, 13 Mar 2026 16:39:13 +0000 https://rcrtech.com/?p=429707

Microsoft, Google DeepMind, OpenAI back Anthropic in lawsuit against Pentagon


As RCRTech reported last week, U.S. Secretary of War Pete Hegseth had issued an ultimatum to Anthropic CEO Dario Amodei about allowing the military unfettered access to Claude. When Amodei refused to bend on the issues of Claude’s use for autonomous weapons or for surveillance on American citizens, the Dept. of War and Trump administration blacklisted Anthropic as a “supply chain risk.” This week, Amodei filed two lawsuits against the Pentagon and Trump administration, alleging the designation is unlawful, retaliatory, and violates the company’s First and Fifth Amendment rights to free speech and due process, as well as the Administrative Procedure Act. Joining the fight against the “blacklist” designation is Microsoft, which filed an amicus brief asking for a temporary restraining order (TRO) to block the Department of Defense from designating the company as a “supply chain risk.” In addition, a group of 37 researchers and engineers from OpenAI and Google DeepMind filed an amicus brief earlier in the week to support Anthropic. As Google DeepMind chief scientist Jeff Dean put it, the overall sentiment is that punishing companies for safety limits will “chill open deliberation” in the field. Microsoft and Google DeepMind have financial ties to Anthropic, but OpenAI does not have direct ties, except on the periphery through a complex web of shared VCs and investment firms. The court has fast-tracked the hearing for the temporary restraining order, with a decision expected promptly. RCRTech will continue to report on this story. 


Susana Schwartz
Technology Editor
RCRTech

AI Infrastructure Top Stories

Agents of AI chaos: Enterprises are scrambling as AI workloads splinter across the cloud-edge landscape. Equinix reckons the answer is in neutral interconnection hubs to orchestrate distributed infrastructure, and bring inference closer.

Singapore AI push: Bridge Data Centers is to invest up to S$5 billion to expand AI data centers in Singapore, while also exploring hydrogen and nuclear energy options to support rising regional demand for compute infrastructure.

About Open Telco AI: General AI models often fail to “speak telco,” leaving network operations in the lurch. A new global initiative led by AT&T, AMD, and the GSMA is betting on open-source collaboration and unique datasets to build the precision tools the industry actually needs.

AI Today: What You Need to Know

Microsoft backs Anthropic: In Anthropic’s lawsuit to overturn the Pentagon’s blacklisting as a “supply-chain risk,” Microsoft filed an amicus brief urging the northern California court to block the implementation of the blacklisting.

Nscale sets record: In what was the largest Series C funding round ever recorded for a European tech startup, Nscale secured $2 billion, with funding led by Aker ASA and 8090 Industries, and backing from Nvidia, Dell, and Lenovo.

FCC lashes out at Amazon: Amazon urged the FCC to reject SpaceX’s application to launch as many as 1 million satellites, to which Brendan Carr pointed out on X that Amazon is “roughly 1,000 satellites short of its deployment milestone.”

Arm to bolster Indonesian sovereignty: Arm CEO Rene Haas and Indonesian President Prabowo Subianto signed a deal in which Danantara Indonesia will leverage Arm’s compute platform, training 15,000 engineers in the Arm ecosystem.
 

Eaton acquisition of Boyd Thermal: Eaton has completed its acquisition of Boyd Thermal from Goldman Sachs Asset Management, expanding its portfolio of thermal components, systems, and ruggedized solutions for data centers.

Blackstone and Advanced Cooling deal: Blackstone Energy Transition Partners has entered into a definitive agreement to acquire a majority stake in ACT, a provider of thermal management and energy efficiency solutions. 

Dell and DOE partnership: CEO Michael Dell and DOE’s Darío Gil discuss partnering for AI infrastructure, and the ways in which the government is “leveraging integrated rack-scalable systems for commercial and enterprise AI data centers.”

Military AI debate continues: Lawfare examines Princeton’s Arvind Narayanan and Sayash Kapoor’s advocacy about military AI as an exception to “AI as a normal technology,” possessing “unique dynamics that require a deeper analysis.” 

Upcoming Events

This one-day virtual event will discuss the critical issues and challenges impacting the AI infrastructure ecosystem, examining the growth and evolution of the AI ecosystem as it scales and the need for flexible, sustainable solutions. 

Industry Resources

]]>
Bridge Data Centers invests to expand AI infra in Singapore https://rcrtech.com/ai-infrastructure-news/bridge-ai-infra-singapore/ Juan Pedro Tomás]]> Fri, 13 Mar 2026 00:39:00 +0000 https://rcrtech.com/?p=429703 Bridge Data Center, backed by Bain Capital, said the investment will support the development of next-generation digital infrastructure in the Asian nation

In sum – what to know:

Major AI infra investment – Bridge Data Centers plans to invest between S$3B and S$5B in Singapore to expand AI-ready data center infra and support regional demand for cloud and AI computing.

Energy strategies becoming critical – The company is exploring alternative power solutions including floating hydrogen generation and research into nuclear and other low-carbon energy sources for data centers.

Regional capacity expansion – While Singapore remains a connectivity hub, additional computing capacity is being developed across Malaysia, Thailand, and India to support growing AI workloads.

APAC data center firm Bridge Data Centers (BDC) plans to invest between S$3 billion and S$5 billion ($2.34-$3.90 billion) in Singapore to expand AI-ready data center infrastructure, as demand for computing capacity accelerates across the region.

The Singapore-headquartered company, backed by Bain Capital, said the investment will support the development of next-generation digital infrastructure and help reinforce Singapore’s role as a regional hub for AI and cloud services.

Company representatives said the investment is part of a broader effort to expand computing capacity and support regional demand for AI infrastructure.

The firm currently operates hyperscale data center campuses across Southeast Asia and India and aims to expand its regional capacity to about 2 gigawatts by 2030.

Energy supply and sustainability are becoming central considerations in data center expansion as AI workloads increase power consumption. Bridge Data Centers said it is exploring alternative energy solutions and partnerships aimed at strengthening long-term power strategies for AI infrastructure.

One initiative involves a collaboration with Concord New Energy to develop what the partners describe as Singapore’s first floating hydrogen power generation system designed for data center applications.

Bridge is also collaborating with several technology vendors to develop cooling and power systems designed for high-density AI computing environments.

While Singapore serves as the company’s headquarters and a key interconnection hub, the company said it expects future AI capacity to be distributed across multiple markets in Asia-Pacific.

Bridge Data Centers is assessing nuclear energy as a potential power source for future data center infrastructure, as operators search for low-carbon energy options capable of supporting rapidly growing AI workloads.

The company recently announced it has signed a letter of intent with Singapore’s A*STAR Institute of High Performance Computing and engineering consultancy HY with the aim of evaluating the feasibility of nuclear energy for next-generation, AI-ready data centers.

Under the terms of the collaboration, the research institute will conduct physics-based simulations to analyze risks related to hydrogen storage and its use as a sustainable energy source for data center operations. The results will then inform a broader risk assessment focused on possible nuclear power plant designs and their system architectures.

]]>
Cloud elasticity at the metro edge https://rcrtech.com/rcrwirelessnews/cloud-elasticity-at-the-metro-edge/ James Blackman]]> Thu, 12 Mar 2026 18:18:55 +0000 https://rcrtech.com/?p=429696

It’s all gone hyper, of course. Nvidia’s $2 billion arrangement with Nebius to deliver a full-stack AI cloud is not just about flashy GPUs, but about owning the infrastructure that will power AI for – for what: a decade; a couple of years? – a while, anyway. At the same time, the GFiber-Astound merger says the fiber market has the same idea: scale, reach, speed. Bigger networks mean lower costs, faster rollouts, and a better grip on the spiralling (AI) demand for bandwidth.

 

But compute and connectivity are only half the story; there are other subplots, sometimes forgotten. Equinix’s interconnection hub – discussed here yesterday, discussed at greater length here today – reveals a third force in this weird AI future: neutral platforms that orchestrate fragmented infrastructure and manage a multiplying ecosystem. Seems like good logic and a smart play from Equinix, shoring up its own position in this hyperscale parade.

 

Even if the AI market starts to sound like the IoT market. But hyperscale theory comes unstuck in silos, stacked across the edge-cloud continuum in different networks and data centers. Interesting, also, to see Equinix is looking to tackle familiar last-mile service bottlenecks with telcos at the same time as it is solving complex data center interoperability in the cloud. Maybe winning at AI will be about this orchestration layer? 

 

Clearly, there are architectural issues everywhere – which should be addressed if cloud-style elasticity is to triumph in rangier AI infrastructure for agents and inference workloads.


James Blackman
Executive Editor
RCR Wireless News

RCR Top Stories

Agents of chaos: Enterprises are scrambling as AI workloads splinter across the cloud-edge landscape. Equinix reckons the answer is in neutral interconnection hubs to orchestrate distributed infrastructure, and bring inference closer.

Trump deadline: The March 11 deadline passed for US agencies to act on Trump’s AI order, requiring the Commerce Department to evaluate state AI laws deemed “onerous” – raising uncertainty and potential federal‑state clashes over AI policy.

AI power crunch: At Metro Connect USA, Marc Ganzi, chief executive at DigitalBridge, discussed the power crunch threatening AI infrastructure expansion plans, and shared strategies to get around the crisis.

Telco AI models: General AI models fail to “speak telco,” leaving operators in the lurch. A new initiative led by AT&T, AMD, and the GSMA is betting on open-source collaboration and unique datasets to build the precision tools the industry needs.

Testing HCF: Olivier Côté, product line manager at EXFO, digs into the testing challenges with hollow core fiber (HCF), and outlines how methods are evolving to accommodate its physical peculiarities.

In partnership with

AI-Powered Telecom Infrastructure
Supermicro, in collaboration with NVIDIA, delivers AI-powered infrastructure tailored for telcos, enhancing operational efficiency, network management, and customer experiences. Explore now 

Beyond the Headlines

GenAI cloud boom: Cloud infrastructure spending reached $119 billion in the last quarter of 2025, driven largely by generative AI workloads, while new AI-focused “neocloud” providers emerge as growing competitors to hyperscale platforms.

Cyient at MWC: Cyient used MWC to promote a “human + AI” approach to autonomous networks, arguing that combining AI with human expertise will help telcos advance toward Level-4 autonomy and new revenue opportunities.

AI is eating the DC… and networks are on the hook. Data centers are teetering under their workloads, and being distributed far and wide in search of power and in observance of compliance and regulation. The challenge is to connect them.

P5G for physical AI: Ericsson has tie-ups with Future Tech and NTT Data to combine private 5G and physical AI in Industry 4.0; it appears to be picking up where Nokia has dropped off. Future Technologies is growing 35% per year.

India sets AI vision: India’s comms minister outlined the country’s telco AI vision at MWC, highlighting rapid 5G expansion, falling mobile data prices, and major national programs to extend connectivity and develop 6G.

What We're Reading

Nvidia and Nebius: Nvidia is investing $2 billion in a deal with Nebius to co‑develop a hyperscale, full‑stack AI cloud, integrating next‑gen infrastructure and software to deploy over 5 GW of compute and accelerate global AI workload growth.

Google cuts GFiber stake: GFiber (Google Fiber) will combine with Astound Broadband to form a major independent US fiber provider, with Stonepeak as majority owner and Alphabet as a minority stakeholder.

Bell pushes Canada AI: Bell and Coveo have a sovereign AI deal to modernize digital services for Canadian governments and regulated industries, combining Coveo’s AI platform with Bell’s AI ‘fabric’ while keeping data under Canadian law.

Deoleo taps Telefónica: Telefónica Tech is partnering with olive oil maker Deoleo to drive digital transformation with blockchain for traceability, AI for quality and operations, and digitalised plant maintenance to boost efficiency and transparency.

1NCE integrates LoRa: 1NCE and Netmore will integrate Netmore’s LoRaWAN connectivity into the 1NCE OS platform, giving customers unified access to both cellular and LoRaWAN IoT coverage through one system.

Events

Virtual Program
Explore the technologies, tools, strategies and partnerships powering the next generation of intelligent systems. This is where the backbone of AI innovation takes center stage. Register now 
 
Wi-Fi Forum, January 20th 2026
Join this RCR Wireless News event to understand the current state of Wi-Fi as we examine a myriad of evolving use cases and monetization strategies being deployed by industry. Register now 

Industry Resources

Webinar, September 18th
The journey to a fully autonomous network – The evolution of network automation and how Amdocs is leading the way

]]>
Data centers: they can build capacity, but can they protect ‘affordability’? https://rcrtech.com/ai-infrastructure/data-centers-they-can-build-capacity-but-can-they-protect-affordability/ Susana Schwartz]]> Thu, 12 Mar 2026 16:24:52 +0000 https://rcrtech.com/?p=429662

Data centers: they can build capacity, but can they protect ‘affordability’?


Can AI data centers “add resilience to the grid and boost affordability,” as stated by Alphabet President and CIO Ruth Porat on stage at yesterday’s BlackRock US Infrastructure Summit, held in Washington, D.C.? Porat announced that Google “just closed” on an acquisition of Intersect Power with the intent to build out capacity, and to become a “net investor of capacity and net contributor to the communities in which we build data centers.” Porat then went on to say data centers help “protect affordability,” something she backed by citing a 2025 Lawrence Berkeley National Laboratory (LBNL) and Brattle Group study that suggested data centers lower average electricity rates by allowing utilities to spread fixed infrastructure costs across a larger consumer base, and through more efficient grid management. She said, “States without data centers have seen electricity prices increase faster than states with data centers, on average.” That statement is a bit misleading, however, as it conflates low base rates with percentage growth, while ignoring distinct regional cost drivers. In other words, states without major data center hubs historically have lower electricity rates than states with data centers, which means a nominal increase results in a larger percentage hike (with the final bill still significantly lower than in high-cost data center hubs). RCRTech will report on other news from the Summit, so check back tomorrow for more highlights. Be sure to read our “Top Stories” and “What You Need to Know,” below. 
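The base-rate effect at issue here is easy to see with a quick sketch. The numbers below are illustrative round figures chosen for the arithmetic, not rates from the LBNL/Brattle study or any state's actual tariffs:

```python
# Illustrative sketch of the base-rate effect: the same nominal increase
# produces a larger PERCENTAGE jump in a low-rate state, even though its
# final bill stays lower. All figures are hypothetical.

def pct_increase(base_rate: float, bump: float) -> float:
    """Percentage growth implied by a nominal rate increase."""
    return 100 * bump / base_rate

low_base = 10.0    # cents/kWh: hypothetical state without data center hubs
high_base = 20.0   # cents/kWh: hypothetical data-center-heavy state
bump = 2.0         # the same nominal increase applied to both

print(pct_increase(low_base, bump))   # 20.0 -- "faster" percentage growth
print(pct_increase(high_base, bump))  # 10.0 -- slower percentage growth

# ...yet the low-rate state's final rate (12.0) remains well below the
# high-rate state's (22.0): percentage growth alone says nothing about
# which customers actually pay more.
print(low_base + bump, high_base + bump)
```

Which is exactly the conflation described above: comparing growth rates without comparing the bills themselves.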


Susana Schwartz
Technology Editor
RCRTech

AI Infrastructure Top Stories

AI-generated synthetic data: In sectors where laws limit the type of customer records and data that can be used for AI/ML, synthetic data is providing artificial datasets that statistically mirror actual customer behavior without real data points.

SMF reigns, but not for long: Single-mode fiber (SMF) remains the most versatile medium of connectivity for long-haul and data center interconnects, but higher-capacity/lower-latency fiber specifically engineered for AI/ML workloads will win.

DigitalBridge CEO issues warning: Speaking at Metro Connect, DigitalBridge CEO Marc Ganzi warned “A will-serve letter does not mean you have a connection date,” noting that developers are looking at connection dates as far out as 2030 and 2032.

AI Today: What You Need to Know

Chris Wright on energy prices in Colorado: Speaking this week at Xcel Energy’s Fort St. Vrain Generating Station, U.S. Energy Secretary Chris Wright pushed for more natural gas and nuclear development in the state to lower electricity prices.

BlackRock Infrastructure Summit: Yesterday, Wright joined Alphabet President and CIO Ruth Porat, NextEra Energy CEO John Ketchum, and Global Infrastructure Partners’ Salim Samaha at the 2026 Infrastructure Summit in Washington.

Iran war impact on chip supply chain: Analysts warn that regional conflicts in the Middle East can threaten the supply of critical materials like helium and bromine, which are essential for chip manufacturing.

High-density optical connectivity: AI infrastructure is pushing optical connectivity into more demanding environments, which is why Corning Optical Communications is licensing PRIZM® TMT, with US Conec as the first licensee of optical ferrule tech.

Blue Owl’s $100+ billion pipeline: Blue Owl Capital is rapidly expanding AI infrastructure, with $1.7 billion raised for its Digital Infrastructure Trust, a portfolio of 100+ data center assets, and financial relationships with Meta and Oracle.  

Data center electricity demands: EPRI “Powering Intelligence” research projects DCs will consume 9% to 17% of U.S. electricity by 2030, up from 4% to 5% today – 60% higher than prior scenarios, reflecting the fast pace of DC development. 

Meta’s custom silicon: Soon after massive Nvidia, AMD deals, Meta revealed four custom, in-house MTIA chips (manufactured by TSMC) to reduce reliance on 3rd-party silicon and manage long-term infrastructure costs.

Upcoming Events

This one-day virtual event will discuss the critical issues and challenges impacting the AI infrastructure ecosystem, examining the growth and evolution of the AI ecosystem as it scales and the need for flexible, sustainable solutions. 

Industry Resources

]]>