Switch Integrates NVIDIA Omniverse DSX Blueprint into Switch’s EVO AI Factories
https://www.switch.com/switch-integrates-nvidia-omniverse-dsx-blueprint-into-switchs-evo-ai-factories/ – March 9, 2026


Simulation to Reality: Switch’s EVO AI Factories shown through digital twin simulation using NVIDIA Omniverse DSX Blueprint (left) and real-world deployment (right).

Switch’s Living Data Center (LDC) EVO transforms AI factories from human-managed infrastructure into automated, intelligent systems.

LAS VEGAS, NV – March 16, 2026 – Switch® announced today that it has integrated the NVIDIA Omniverse DSX Blueprint into its EVO AI Factory architecture and LDC EVO™ operating system. LDC EVO, combined with NVIDIA Omniverse libraries and OpenUSD, delivers high-fidelity operations across Switch’s deployed portfolio. LDC EVO’s workflows, intelligence and modeling deliver a live, physics-accurate visual representation of the EVO AI Factory.

Traditional data centers run on DCIM, or data center infrastructure management, where humans make decisions assisted by monitoring tools. AI factories operate at extreme density, creating operational complexity that exceeds what DCIM was designed to manage. LDC EVO replaces this model: it automates every system in the facility in near real time and maintains an updated 3D digital twin of the complete AI factory, giving our people unprecedented support and capabilities.
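
In spirit, this is a control loop that continuously folds facility telemetry into a live digital twin and surfaces anomalies to operators. The sketch below illustrates that pattern only; the function names and thresholds are hypothetical, not Switch’s LDC EVO interfaces.

```python
import time

# Hypothetical telemetry reader; names are illustrative, not LDC EVO APIs.
def read_facility_telemetry():
    return {
        "rack_A01": {"power_kw": 1180.0, "coolant_inlet_c": 31.5},
        "chiller_03": {"load_pct": 64.0, "status": "running"},
    }

def update_digital_twin(twin, telemetry):
    """Fold the latest readings into the in-memory twin and flag drift."""
    alerts = []
    for asset, readings in telemetry.items():
        twin[asset] = readings
        if readings.get("coolant_inlet_c", 0.0) > 40.0:
            alerts.append(f"{asset}: coolant inlet above 40 C")
    return alerts

twin_state = {}
for _ in range(3):  # a production system would run this loop continuously
    for alert in update_digital_twin(twin_state, read_facility_telemetry()):
        print("ALERT:", alert)
    time.sleep(1)  # near-real-time polling interval
```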

Every NVIDIA DGX deployment requires a facility engineered to its specifications. Switch’s EVO AI Factory is that facility. Switch enables its customers to deploy NVIDIA accelerated computing on Dell PowerEdge servers at extreme density from day one. Switch helped deliver deployments of NVIDIA Grace Blackwell on Dell PowerEdge servers in EVO AI Factories, and LDC EVO’s capabilities allow customers to validate these hardware configurations before physical deployment.

Leadership Perspectives

“LDC EVO is the operating system for Switch’s EVO AI Factory, orchestrating the modular and configurable campus architecture that enables hybrid cooling and supports extreme AI densities,” said Zia Syed, Chief Technology Officer of Switch. “It’s built to operate every generation of NVIDIA reference design, including the Rubin DSX architecture. Leveraging NVIDIA Omniverse libraries and OpenUSD for digital twins, we’ve layered in automation workflows and operational intelligence to unify deployments. LDC EVO presents dynamic operations of an AI Factory at scale.”

“Gigawatt-scale AI factories require a shift toward autonomous, telemetry-driven infrastructure capable of orchestrating extreme power and cooling densities in real time,” said Vladimir Troy, Vice President of AI Infrastructure at NVIDIA. “The integration of the NVIDIA Omniverse DSX blueprint into the Switch LDC EVO operating system provides the high-fidelity simulation and operational intelligence necessary to optimize the deployment of next-generation NVIDIA AI infrastructure.”

The Switch Ecosystem

We brought together the expertise of leading suppliers across the AI infrastructure ecosystem including NVIDIA, Dassault Systèmes, Cadence, ETAP, Schneider Electric, SUSE, Dell Technologies, Oxide Computer Company and Procore Technologies, Inc.

Within LDC EVO, these collaborating technologies operate as integrated capabilities: thermal modeling, electrical simulation, reality capture, construction lifecycle management and facility telemetry are synchronized into a single presentational environment. The result is that teams can simulate, monitor and adjust operations—all within one interface that improves every operational cycle.

This will be showcased at NVIDIA GTC 2026, where Switch will feature its EVO AI Factory in the DSX AI Infrastructure Pavilion, Booth #91.

About Switch

Switch, founded in 2000 by CEO Rob Roy, stands at the forefront as the leading data center campus designer, builder and operator. As the AI, cloud and enterprise data center experts, Switch provides the most modular, scalable and sustainable data centers to the most discerning clients. The company offers a comprehensive, future-proof portfolio ranging from highly dense liquid cooled AI to hyperscale cloud and the industry’s highest rated and most secure enterprise data centers. To learn more, visit www.switch.com and follow Switch on LinkedIn, Facebook and X.

Cautionary Statement Regarding Forward-Looking Statements

This press release may contain forward‑looking statements, including statements about our future operations, plans, objectives, expectations, or performance. These statements are based on current assumptions, estimates, and projections and are not guarantees of future results. Forward‑looking statements involve risks, uncertainties, and other factors—many of which are beyond our control—that could cause actual results to differ materially from those expressed or implied. These risks and uncertainties may include changes in market conditions, competitive pressures, operational challenges, economic factors, and other business risks. Readers are cautioned not to place undue reliance on forward‑looking statements, which speak only as of the date of this press release. We undertake no obligation to update or revise these statements in light of new information, future events, or otherwise.

Axios Interview on Building the Future: Rob Roy on AI, Power and What Comes Next
https://www.switch.com/rob-roy-on-ai-power-and-what-comes-next/ – January 30, 2026

AI is reshaping how the world computes, powers and builds critical infrastructure. At Switch, we’re helping define that evolution, from purpose-built AI Factories to rethinking the relationship between compute demand and energy systems. As Founder and CEO of Switch, Rob Roy has spent more than two decades designing infrastructure for the future, anticipating shifts in computing, energy and global demand long before they became mainstream conversations. 


That long-term perspective was on full display during a recent Axios Fireside Chat at the Schneider Electric Innovation Summit, where Rob Roy sat down with Axios reporter Ben Geman to discuss the intersection of AI, energy and next-generation infrastructure. In the conversation, Rob Roy addressed some of the most pressing questions facing the industry today—from whether AI is a bubble, to how data centers can strengthen the power grid, to how AI energy needs will accelerate commercial fusion power by “two or three decades.” 

Below, you can watch the full Axios interview as Rob Roy outlines why AI is not just transforming data centers—but accelerating an energy renaissance that will shape the decades ahead. 

Axios. Used with permission. axios.com

Ormat Technologies Signs 20-Year PPA with Switch for ~13 MW of Carbon-Free Geothermal Capacity to Power Data Centers
https://www.switch.com/ormat-technologies-signs-20-year-ppa-with-switch-for-13mw-of-carbon-free-geothermal-capacity-to-power-data-centers/ – January 2, 2026

New geothermal PPA with Switch enhances the economics of Ormat’s Salt Wells power plant

RENO, Nevada, January 5, 2026 – Ormat Technologies Inc. (NYSE: ORA) (the “Company” or “Ormat”), a leading geothermal and renewable energy company, today announced the signing of a 20-year Power Purchase Agreement (PPA) with Switch, the premier provider of AI, cloud and enterprise data centers. This agreement represents Ormat’s first direct PPA with a data center operator, highlighting Ormat’s leading capabilities in geothermal energy production and the growing demand for sustainable energy solutions to serve the data center industry.

Under the terms of the agreement, Switch will purchase approximately 13 MW of clean, renewable energy from Ormat’s Salt Wells geothermal power plant located near Fallon, Nevada. As part of the agreement, Ormat has the option to further expand the facility’s output to Switch by adding an approximately 7 MW solar PV facility, which will serve the auxiliary power needs of the geothermal power plant. The combined output will help support the power needs of Switch’s Nevada data centers, aligning with their commitment to sustainability and carbon reduction.
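
For rough scale, a 13 MW baseload geothermal purchase implies on the order of 100 GWh of carbon-free energy per year. The estimate below is back-of-envelope; the 90% capacity factor is our assumption (typical of geothermal baseload), not a figure from the release.

```python
# Back-of-envelope annual energy under the PPA. The capacity factor is
# an assumption typical of geothermal baseload, not a figure from Ormat.
geothermal_mw = 13
hours_per_year = 8760
capacity_factor = 0.90  # assumed

annual_gwh = geothermal_mw * hours_per_year * capacity_factor / 1000
print(f"~{annual_gwh:.0f} GWh of carbon-free energy per year")  # ~102 GWh
```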

Energy deliveries under the PPA are scheduled to commence in the first quarter of 2030, following the completion of a major upgrade to the Salt Wells power plant, which is expected to be finalized by the second quarter of 2026.

Doron Blachar, Chief Executive Officer of Ormat Technologies, commented, “We are excited to partner with Switch, a leader in the data center industry, to supply reliable, zero-emission power from our Salt Wells geothermal facility. This agreement not only advances Switch’s sustainability goals but also underscores the growing demand for renewable energy within the data center sector. Upon completion of the Salt Wells upgrade, we will be able to deliver approximately 13MW of geothermal energy to Switch, with the potential for further expansion through the addition of a Solar PV facility, highlighting both the enhanced revenue opportunities and strategic value of our power plants.”

Blachar concluded, “Additionally, as we launch this partnership, we see potential for future recontracting of over 100 MW of our existing fleet under this framework. We are encouraged at the opportunity to continue growing this relationship with potential PPA expansion as well as additional new agreements to supply geothermal power to Switch as they continue scaling their business into the strong demand backdrop for the data center industry and Switch’s specific capabilities.”

“We are proud to enhance our diverse portfolio of renewable, Nevada-based energy sources and deepen our commitment to powering Switch’s data centers with renewable energy through this new long-term agreement with Ormat,” said Alise Porto, SVP of Energy & Sustainability at Switch. “As demand for AI and high-performance digital infrastructure accelerates, securing reliable, carbon-free baseload power is essential to supporting our customers and sustaining our growth. Geothermal energy offers the resiliency and sustainability profile required for the next generation of AI and cloud workloads, and this partnership enhances our ability to deliver world-class performance with a minimal environmental footprint. We look forward to continuing to scale the power needs of our campuses to meet the strong demand for our data center platform.”

About Ormat Technologies

With over five decades of experience, Ormat Technologies, Inc. is a leading geothermal company, and the only vertically integrated company engaged in geothermal and recovered energy generation (“REG”), with robust plans to accelerate long-term growth in the energy storage market and to establish a leading position in the U.S. energy storage market. The Company owns, operates, designs, manufactures and sells geothermal and REG power plants primarily based on the Ormat Energy Converter – a power generation unit that converts low-, medium- and high-temperature heat into electricity. The Company has engineered, manufactured and constructed power plants, which it currently owns or has installed for utilities and developers worldwide, totaling approximately 3,400MW of gross capacity. Ormat leveraged its core capabilities in the geothermal and REG industries and its global presence to expand the Company’s activity into energy storage services, solar Photovoltaic (PV) and energy storage plus Solar PV. Ormat’s current total generating portfolio is 1,618MW with a 1,268MW geothermal and solar generation portfolio that is spread globally in the U.S., Kenya, Guatemala, Indonesia, Honduras, and Guadeloupe, and a 350MW energy storage portfolio that is located in the U.S.

About Switch

Switch, founded in 2000 by CEO Rob Roy, stands at the forefront as the leading data center campus designer, builder and operator. As the AI, cloud and enterprise data center experts, Switch provides the most modular, scalable and sustainable data centers to the most discerning clients. The company offers a comprehensive, future-proof portfolio ranging from highly dense liquid cooled AI to hyperscale cloud and the industry’s highest rated and most-secure enterprise data centers. To learn more, visit www.switch.com and follow Switch on LinkedIn, Facebook and X.

Ormat’s Safe Harbor Statement

Information provided in this press release may contain statements relating to current expectations, estimates, forecasts and projections about future events that are “forward-looking statements” as defined in the Private Securities Litigation Reform Act of 1995. All statements, other than statements of historical facts, included in this press release that address activities, events or developments that we expect or anticipate will or may occur in the future, including such matters as our projections of annual revenues and Adjusted EBITDA, expenses and debt service coverage with respect to our debt securities, future capital expenditures, business strategy, competitive strengths, goals, development or operation of generation assets, legal, market, industry and geopolitical developments and incentives, demand for renewable energy, and the growth of our business and operations, are forward-looking statements. When used in this press release, the words “may”, “will”, “could”, “should”, “expects”, “plans”, “anticipates”, “believes”, “estimates”, “predicts”, “projects”, “potential”, or “contemplate” or the negative of these terms or other comparable terminology are intended to identify forward-looking statements, although not all forward-looking statements contain such words or expressions. These forward-looking statements generally relate to Ormat’s plans, objectives and expectations for future operations and are based upon its management’s current estimates and projections of future results or trends. Although we believe that our plans and objectives reflected in or suggested by these forward-looking statements are reasonable, we may not achieve these plans or objectives.  Actual future results may differ materially from those projected as a result of certain risks and uncertainties and other risks described under “Risk Factors” as described in Ormat’s most recent annual report, and in subsequent filings.

These forward-looking statements are made only as of the date hereof, and, except as legally required, we undertake no obligation to update or revise the forward-looking statements, whether as a result of new information, future events or otherwise.

Schneider Electric and Switch Expand Partnership with $1.9 Billion Supply Capacity Agreement to Power AI Factories
https://www.switch.com/schneider-electric-and-switch-expand-partnership-with-1-9-billion-supply-capacity-agreement-to-power-ai-factories/ – November 19, 2025

  • Largest data center cooling project in North America marks first deployment of Schneider Electric’s Uniflair chillers in the U.S.
  • Added capacity further positions Switch as a leading data center campus designer, builder, and operator
  • Agreement represents Schneider Electric’s largest cooling services engagement to date

    LAS VEGAS, November 19, 2025 – Schneider Electric, a global energy technology leader, and Switch, a premier provider of AI, cloud and enterprise data centers, today announced a two-phase supply capacity agreement (SCA) totaling $1.9 billion in sales. The milestone deal includes prefabricated power modules and the first North American deployment of Uniflair chillers. The announcement was unveiled at Schneider Electric’s Innovation Summit North America in Las Vegas, convening more than 2,500 business leaders and market innovators to accelerate practical solutions for a more resilient, affordable and intelligent energy future.


    Schneider Electric and Switch have evolved their longstanding partnership to support the growing AI and hyperscale computing demand of AI factories. Hyperscalers are growing rapidly to meet surging AI demand, which is projected to account for over 35% of global data center workloads by 2030, driving a 160% increase in data center power demand. By integrating advanced cooling and power technologies into Switch’s hyperscale data center designs and existing hybrid air and liquid cooling systems, the companies are laying the foundation for more resilient, scalable and future-ready infrastructure. The SCA model provides guaranteed capacity, advanced cooling solutions and full-service support, while preserving the flexibility needed for rapidly evolving AI workloads and customer requirements.


    Critically, the solution is built to scale AI capacity without scaling energy demand. Uniflair chillers use oil-free, variable-speed centrifugal compressors and integrated free cooling to match capacity to real-time IT load, preventing overcooling and reducing run hours. Prefabricated power modules come with standardized, pretested layouts that optimize airflow and containment to increase economizer hours and reduce cooling energy compared with traditional builds.
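
    In control terms, this is load-following: compressor output tracks measured IT load instead of holding a fixed setpoint. The sketch below illustrates the idea under stated assumptions; the function and numbers are ours, not Schneider Electric’s Uniflair control logic.

```python
def chiller_output(it_load_kw, chiller_capacity_kw, min_turndown=0.2):
    """Return compressor output as a fraction of rated capacity.

    Illustrative load-following logic: clamp demand between a minimum
    stable turndown and full output so the chiller tracks real-time IT
    load rather than overcooling at a fixed setpoint.
    """
    demand = it_load_kw / chiller_capacity_kw
    return max(min_turndown, min(1.0, demand))

for load_kw in (800, 2400, 5200):
    print(f"{load_kw} kW IT load -> {chiller_output(load_kw, 4000):.0%} output")
```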


    “As AI continues to reshape the digital landscape, we see an opportunity not just to meet demand, but to define what the next generation of data centers can achieve,” said Vandana Singh, SVP of Secure Power North America at Schneider Electric. “By combining modular power, advanced cooling technologies, and a long-term service model, we’re helping create infrastructure that anticipates the future of AI workloads, making energy smarter, more adaptable, and more sustainable.”


    The agreement is the largest cooling service engagement Schneider Electric has ever undertaken, with energy technologies including prefabricated power modules and Uniflair chillers, high-efficiency cooling systems for mission-critical environments. It also includes a three-year full-service contract for chillers, ensuring Switch customers benefit from reliable, long-term support.


    Switch operates five exascale U.S. data center campuses across Las Vegas, Tahoe Reno, Atlanta, Grand Rapids and Austin. These facilities reside on thousands of acres with multi-gigawatt power capacities. Switch’s AI factories are engineered to power and cool up to 2MW per rack. These facilities position Switch at the leading edge of innovation with their purpose-built designs to support extreme density demands of next-generation AI workloads, aligned with NVIDIA’s DGX and MGX roadmaps.


    “As the premium provider to the world’s leading companies, Switch is focused on enabling the next wave of AI and digital innovation through world-class infrastructure,” said Jason Hoffman, Chief Strategy Officer at Switch. “Expanding our relationship with Schneider Electric advances that mission as we pioneer a new class of AI-ready infrastructure, designed for operational insight, extreme efficiency and the flexibility to evolve as technology advances to meet the growing demand of our customers.”


    About Schneider Electric

    Schneider Electric is a global energy technology leader, driving efficiency and sustainability by electrifying, automating, and digitalizing industries, businesses, and homes. Its technologies enable buildings, data centers, factories, infrastructure, and grids to operate as open, interconnected ecosystems, enhancing performance, resilience, and sustainability. The portfolio includes intelligent devices, software-defined architectures, AI-powered systems, digital services, and expert advisory. With 160,000 employees and one million partners in over 100 countries, Schneider Electric is consistently ranked among the world’s most sustainable companies.

    www.se.com


    About Switch

    Switch, founded in 2000 by CEO Rob Roy, stands at the forefront as the leading data center campus designer, builder and operator. As the AI, cloud and enterprise data center experts, Switch provides the most modular, scalable and sustainable data centers to the most discerning clients. The company offers a comprehensive, future-proof portfolio ranging from highly dense liquid cooled AI to hyperscale cloud and the industry’s highest rated and most-secure enterprise data centers. To learn more, visit www.switch.com and follow Switch on LinkedIn, Facebook and X.

    Switch is Evolving AI Factories with NVIDIA Omniverse DSX Blueprint
    https://www.switch.com/switch-is-evolving-ai-factories-with-nvidia-omniverse-dsx-blueprint/ – October 28, 2025

    Switch designs, builds and operates industry-leading, exascale data center ecosystems that power the growth of AI, cloud, and enterprise infrastructure. EVO AI Factories are Switch’s modular AI Factory solution, which has seen “industry first” deployments of NVIDIA Grace Blackwell servers, marking a significant leap forward in high-performance AI computing and AI capabilities. To achieve this, we are leveraging the NVIDIA Omniverse DSX Blueprint – a groundbreaking data center digital twin system built on NVIDIA Omniverse technology.

    The Switch Vision: Integrated Excellence Across the AI Factory Lifecycle

    Switch’s digital twin strategy is a vision for Integrated Excellence, creating a unified, real-time source of truth that guides informed, precise, and faster decisions. By spanning the entire lifecycle of an AI factory, we are removing the silos that typically separate specialized systems and teams.

    Our goal is singular: to create a dynamic model that evolves with the scale and complexity of our facilities, ensuring everyone is aligned with a common, current reality.

    A Pathway for Multi-Generation, Gigawatt-Scale Build-Outs

    The Omniverse DSX Blueprint provides a dynamic, parametric modeling framework that serves as a living digital reference for every phase of AI Factory design, construction, and operation. This parametric foundation allows Switch to rapidly iterate and optimize across generations of EVO AI Factories, from site planning to rack-level deployment, all while maintaining accuracy and performance integrity at unprecedented scale.

    The result is not just faster design cycles, but smarter ones: Switch’s Digital Twin enables the simulation of complex thermal, electrical, and mechanical systems, empowering engineers to fine-tune efficiency before a single beam is set.

    Precision through Digital Integration

    By integrating the Omniverse DSX Blueprint and the Omniverse platform with our proprietary LDC EVO (Living Data Center EVO) system, Switch brings together our IT and OT (Operational Technology) data into a single, mission-critical operational tool, giving Switch powerful insight for collaboration and analysis across every stakeholder. This allows Switch to design for the DNA of AI Factories: extreme power density capabilities (up to 2MW per rack), advanced hybrid air and liquid cooling infrastructure and the flexibility to co-evolve with NVIDIA’s accelerated roadmap from NVIDIA Blackwell infrastructure, to next-generation NVIDIA Rubin systems, and beyond.
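
    Because the DSX Blueprint builds on OpenUSD, facility assets and their telemetry can be expressed as prims and attributes on a USD stage. The fragment below is a minimal sketch using OpenUSD’s Python bindings (pxr); the prim paths and evo:* attribute names are hypothetical, not Switch’s actual schema.

```python
from pxr import Usd, UsdGeom, Sdf

# Author a tiny OpenUSD stage representing one rack in a data hall.
stage = Usd.Stage.CreateNew("evo_hall_twin.usda")
UsdGeom.Xform.Define(stage, "/EvoHall")
rack = UsdGeom.Cube.Define(stage, "/EvoHall/Rack_A01")

# Attach custom attributes carrying OT telemetry (names are hypothetical).
prim = rack.GetPrim()
prim.CreateAttribute("evo:powerKw", Sdf.ValueTypeNames.Float).Set(1250.0)
prim.CreateAttribute("evo:coolantInletC", Sdf.ValueTypeNames.Float).Set(32.0)

stage.GetRootLayer().Save()
```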

    Designers, engineers and operators can all interact within the same virtual environment, identifying design issues early through 3D model clash detection and constructability reviews. This integration transforms potential design conflicts into proactive improvements, saving time, cost, and resources.

    Operational Excellence with Immersive Review 

    Beyond construction, Switch is extending this digital capability into the operational phase. Through immersive design reviews, teams can walk through operational processes within the Omniverse DSX Blueprint, allowing real-time exploration and simulation of maintenance and performance scenarios. This human-in-the-loop feedback not only enhances operational readiness but also drives continuous improvement throughout the facility’s lifecycle.

    Switch and NVIDIA are working together to redefine how AI infrastructure is efficiently designed, built, and operated, from the first watt to the next generation. 

    Switch Raises $659 Million in Fourth Data Center ABS Offering
    https://www.switch.com/switch-raises-659-million-in-fourth-data-center-abs-offering/ – October 17, 2025

    Market-leading data center ABS issuer with $3.5 billion in ABS proceeds raised since 2024

    LAS VEGAS – October 20, 2025 – Switch, a premier provider of AI, cloud and enterprise data centers, today announced the closing of its fourth asset-backed securities (“ABS”) offering, raising nearly $659 million. The Class A-2 Notes are rated AAA, AA (low) and A (low), and the Class B Notes are rated BBB (low) by DBRS Morningstar. This transaction marks Switch’s fourth ABS offering, bringing total ABS issuance to approximately $3.5 billion and making Switch the largest single issuer of data center ABS since 2024. All of Switch’s ABS issuances qualify as secured green bonds, underscoring the company’s commitment to sustainability and responsible growth.

    Proceeds from this issuance will be used to fund Switch’s growth strategy, which includes ongoing development at each of its five campuses for Hyperscale, AI, and enterprise customers. In July 2025, Switch announced it had retired all $6.5 billion of bank debt incurred during its 2022 take-private. This ABS issuance marks Switch’s first securitization with proceeds dedicated entirely to fund new development.

    This issuance reflects the strength and scalability of Switch’s enterprise ABS platform, which now encompasses 10 data centers across four geographically diverse campuses, serving nearly 500 customers, with over 70% of revenue generated from tenants rated investment grade. These strong credit characteristics led Switch to introduce the first AAA-rated tranche in non-hyperscale data center ABS, marking a sector milestone that reinforces the company’s leadership in data center capital markets.

    “The success of this transaction, and the overall growth of our platform, clearly demonstrate that our formula of leading-edge technology combined with exascale campus deployments in Tier 1 markets continues to resonate with customers and investors alike,” said Madonna Park, Chief Financial Officer of Switch. “As our deep pipeline of fully leased multi-tenant and Hyperscale assets continues to stabilize, we expect to remain an active issuer across the ABS and broader capital markets.”

    “With roughly $6 billion of stabilized asset financings completed to date, we have the scale and track record to continue to efficiently recycle capital while supporting the largest AI, cloud and enterprise customers, as they grow with Switch,” she added.

    In addition to closing its ABS transaction, Switch was recently awarded “2025 Growth Story of the Year” by TMT Finance. Switch has also received recognition this year for its capital markets achievements by IJ Global, PFIA and Proximo Infrastructure.

    Transaction Advisors and Counsel

    Wells Fargo Securities, LLC served as Co-Structuring Advisor and Lead Left Bookrunner and RBC Capital Markets, LLC served as Co-Structuring Advisor and Joint Active Bookrunning Manager. Morgan Stanley, TD Securities and Truist served as Joint Active Bookrunning Managers. Kirkland & Ellis LLP advised Switch, and Latham & Watkins LLP represented the underwriters.

    About Switch

    Switch, founded in 2000 by CEO Rob Roy, stands at the forefront as the leading data center campus designer, builder and operator. As the AI, cloud and enterprise data center experts, Switch provides the most modular, scalable and sustainable data centers to the most discerning clients. The company offers a comprehensive, future-proof portfolio ranging from highly dense liquid cooled AI to hyperscale cloud and the industry’s highest rated and most-secure enterprise data centers. To learn more, visit www.switch.com and follow Switch on LinkedIn, Facebook and X.

    Tech Capital Article Featuring Jason Hoffman, Switch Chief Strategy Officer
    https://www.switch.com/tech-capital-article-featuring-jason-hoffman-switch-chief-strategy-officer/ – October 10, 2025

    Executive Summary

    Jack Haddon’s article below explores the evolving debate over where AI inference—the process of running trained models—will ultimately reside. While large-scale data centers currently dominate AI training, inference workloads may be distributed across a spectrum ranging from AI factories to metro colocation sites, enterprise data centers, and even end-user devices. Each approach offers trade-offs between latency, cost, and scalability. Some experts envision powerful, centralized infrastructures supporting asynchronous or batch inference, while others predict a migration toward edge and device-level computing for real-time, latency-sensitive tasks.

    Jason Hoffman, Chief Strategy Officer at Switch, predicts that AI will follow a familiar trajectory seen in gaming and mobile computing, where “it’s actually better, faster, cheaper, and easier to make a more powerful device than to build out infrastructure between the physics engine and the device.” He states that “people in infrastructure keep saying they’ll build dedicated inference infrastructure distributed in cities, but I can point to half a dozen historical examples… devices got more powerful, and data centers became more centralized, while the middle continued to get commoditized.” Hoffman adds that workloads must be supported wherever they best fit—whether it’s “in a big data center, on the device, or somewhere in between. Often these edge services or inference nodes will mostly be coordinating between what’s happening on the device and in big data centers.”  

    In essence, the article concludes that AI inference will not have a single home. Instead, its deployment will depend on workload type, latency needs, and sovereignty concerns—resulting in a distributed and dynamic compute landscape.

    Read the full article below.


    Where will inference be deployed?

    The battle over where AI inference will live has begun, and the experts can’t agree. Will it be forged in sprawling gigawatt data fortresses, scattered across metro hubs, or pulled down onto the very devices in our hands?

    by Jack Haddon
    Deputy Editor, The Tech Capital

    The data centre industry has been asked a lot of billion-dollar questions as of late. But a trillion-dollar question is lurking in the background:  

    Where do we need to build data centres for AI inference at scale?  

    The breed of data centre facility that is required for training AI is now well understood: we need infrastructure that can support vertically scaled computing, large clusters of high-powered GPUs that can be liquid cooled to ensure maximum efficiency and more power to scale it all, delivering better and more powerful models. 

    But there is no blueprint for inference – or the infrastructure required for actually using a trained model.

    This presents both an opportunity and a challenge for the data centre industry. It means that there is room for several different business models to support different types of inference workloads, but it also means that meeting holistic demand will require an understanding and anticipation of emerging use cases to ensure the right infrastructure is built at the right time and in the right place.  

    In this article, we explore the different locations that AI inference compute could be deployed, why, and what data centre developers need to consider to be able to deliver.  

    The experts don’t agree. Some see inference collapsing back onto devices. Others believe hyperscale facilities will dominate. Still others point to metro colos, sovereign data centres, or hybrid setups straddling all of the above. The answer, as always, depends on who you ask and what problems they’re trying to solve.  

     “There’s really no simple rule,” says Jeff Denworth, co-founder of AI operating platform VAST Data.  

    “You’re going to have easy stuff that can run on one GPU and hard stuff that will require whole data centre-sized systems.” 

    Denworth uses the example of asking ChatGPT what time sundown is (an easy task) compared to a drug discovery use case or a deep research report where a large amount of data is analysed, and the findings returned. 

    Fortunately, the large-scale AI factories that are being built to support training workloads can also be used for inference.   

    This is encouraging, as concerns have been raised that improvements in the techniques used to train new AI models on less compute power, such as those exhibited by DeepSeek in early February 2025, mean that the multi-hundred megawatt or even gigawatt sites that are being planned may become stranded assets with no customers requiring large clusters of compute in remote locations.  

    The flexibility to support inference workloads extends the life of these facilities, making them less risky to deploy and reducing the risk priced into construction financing.  

    Many of these large AI factories are being built in remote locations, where access to large quantities of power to meet the desired IT capacity was the primary driver of their location.  

    That means that network latency between the data centre and an end user is likely higher than that of a cloud availability zone or a local colocation facility. 

    These large AI facilities have already been designed for scale and powerful compute, meaning they are best suited to asynchronous processing, or batch inference, a powerful and highly efficient method for generating predictions on a large volume of data when immediate, real-time responses are not required.  

    For example, Denworth’s drug discovery use case, which would require a significant amount of scientific research papers to be uploaded and analysed, looking for correlations that have yet to be drawn.  

    Unlike online inference (asking ChatGPT what time sunrise is), batch inference operates on data that has been collected over a period of time.  

     This approach prioritises high throughput and computational efficiency over low latency.   

     Not being time sensitive means compute resources can be used when they are most available or least expensive, significantly lowering operational costs for end-users.  
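
    One way to picture the split: latency-sensitive requests run immediately, while batch jobs queue until GPU time is idle or cheap. The sketch below is illustrative only; the price threshold and job names are our assumptions.

```python
from collections import deque

batch_queue = deque()

def run_now(job):
    print("running:", job)

def submit(job, latency_sensitive, gpu_price_per_hour, cheap_threshold=1.50):
    if latency_sensitive:
        run_now(job)                 # online inference: answer immediately
        return
    batch_queue.append(job)          # batch inference: defer for throughput
    if gpu_price_per_hour <= cheap_threshold:
        while batch_queue:           # drain when compute is cheapest
            run_now(batch_queue.popleft())

submit("chat completion", True, gpu_price_per_hour=2.10)
submit("drug-discovery corpus scan", False, gpu_price_per_hour=2.10)  # queued
submit("overnight research report", False, gpu_price_per_hour=1.20)   # drains queue
```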

    There are also benefits for the tenants of these data centres in processing batch inference here.  

     The conventional wisdom among frontier model developers is that accessing more powerful GPUs from NVIDIA or another supplier is the best way to create better and more powerful AI.  

     While Google, AWS and Microsoft are all busy creating their own AI chips, for now, buying from NVIDIA is the go-to. To avoid falling behind in the AI race, these companies need to be securing the latest, most powerful chips that are being released on a regular basis, often with notable performance increases.  

    These chips are expensive. So rather than being used for a year and then cycled out as NVIDIA releases a new product, they can instead be transferred to support batch inference.  

    “I was speaking with NVIDIA about this the other day,” Denworth reveals. “How do we build reference architectures? Do we build one for training and one for inference? Well, we can’t, because these machines get reborn, based upon different requirements and different dynamics.” 

    Paul Roberts, Director of Technology, Strategic Accounts at AWS, is seeing this play out first-hand. 

    “We’re seeing folks now training and inferencing on the same hardware,” he explains, whether that’s NVIDIA solutions or Amazon’s custom silicon. 

    “We also have customers that are using older NVIDIA hardware, like the Hopper platforms – they’re still using them, and they are inferencing and training with them.” 

    Roberts adds that AWS is always looking at the usage of the existing compute and infrastructure in its different facilities and cycles it out as usage drops to “free up more space and power”. 

     So far, everything seems quite simple. But what about when latency does become an issue?  

    For some use cases, these large, remote AI factories will not suffice.   

    If proximity to end users is crucial for the inference application, another approach needs to be considered.  

    From the data centre to the device  

    Starting off at the other end of the spectrum to the large AI factory data centre, Switch Chief Strategy Officer Jason Hoffman draws comparisons with GPUs’ previous killer app, which happens to be latency sensitive itself: gaming.  

    “We saw attempts like Google Stadia to use infrastructure to stream games to light devices. What’s been shown time and again is that it’s actually better, faster, cheaper, and easier to make a more powerful device than to build out infrastructure between the physics engine and the device,” he explains.  

    Hoffman thinks the same thing will play out with AI.  

    “People in infrastructure keep saying they’ll build dedicated inference infrastructure distributed in cities, but I can point to half a dozen historical examples of other computer workloads that followed the same pattern: devices got more powerful, and data centres became more centralised, while the middle continued to get commoditised.”  

    Hoffman says the same happened with mobile devices. When the iPhone first came out, people thought it was an opportunity for telcos to build more services in their networks to serve these “weak” devices.  

    But what turned out to be true?  

    “For a given country, you basically need two, three, or four packet cores that are centralised and run the accounts and connections, while Apple and Samsung became some of the most valuable companies by making very powerful devices,” he says. 

    “If you have a workload that has to run in a specific location, we need to support that,” he adds. “It’s either in a big data centre, on the device, or somewhere in between. Often these “edge services” or “inference nodes” will mostly be coordinating between what’s happening on the device and in big data centres.”  

    Prem Ananthakrishnan, managing director and global software lead at Accenture, agrees – to an extent.  

    “There’s always an intent to push as much as possible to the device, but the devices aren’t there yet – that’s part of the problem,” he says.  

    “Currently, the practical “edge” where inference models can run is probably in a colocation facility in the local metro network. As models become smarter and can run on actual edge devices, we’ll likely push capabilities even closer to the end user.” 

    But he adds that inferencing is going to be an extremely fragmented compute landscape in the long run, and the opportunity for colo providers isn’t just as a stopgap.  

    “You’ll have tiny models running on phones or laptops. Then there will be mid-sized models requiring more than what edge devices can handle, and colos may still have an opportunity to host these. The giant, context-hungry large models will eventually go to hyperscalers and neoclouds.” 

    Where is the Edge?  

    One of the firms building this middle-mile inference infrastructure is Flexential.  

    “We’re not chasing the gigawatt campuses. We are chasing these edge inference nodes that are going to have relevant enterprise use cases,” says President and COO Ryan Malloney.  

    More specifically, Flexential is focused on developing sites around 36MW, where it will allocate a portion of the data centre to an AI company or a private enterprise.  

    “We’re looking at what I’d call the “middle edge” component, where you have strong network connections,” he adds.  

    A handful of AI company customers are already asking Flexential for proximity to GPU as a Service companies. 

    This goes as far as asking to be in the same data centre, but Flexential have found that offering space in a different facility within the same metro, connected by their inter-data centre connectivity service with 5-10ms of latency, is an adequate compromise. 

    But as for why they need to be there, and how large this market will be in the long run, Malloney is unsure. 

    “We don’t know why,” he says. “I haven’t seen a latency-sensitive inference model yet.” 

    But someone who has is Hunter Newby, the founder of Newby Ventures.  

    Newby says some major commercial banks are looking to use inference for fraud detection by capturing keystrokes as they are input into a keyboard or mobile device. 

    This requires 3ms of round-trip latency, which current data centre infrastructure is not equipped to support outside of major metros served by internet exchange points (IXPs). 
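
    That 3ms budget translates directly into a distance bound. Light in fibre propagates at roughly 200km per millisecond, so even before switching and compute overheads, the serving infrastructure can sit at most a few hundred kilometres from the user. A quick check, under those stated assumptions:

```python
# Distance bound implied by a 3 ms round-trip budget. Assumes ~200 km/ms
# propagation in fibre and ignores switching, serialisation and compute
# time, all of which shrink the usable radius further.
fibre_km_per_ms = 200
rtt_budget_ms = 3

one_way_km = (rtt_budget_ms / 2) * fibre_km_per_ms
print(f"max one-way fibre distance: ~{one_way_km:.0f} km")  # ~300 km
```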

    Newby has mapped out all of the IXPs in the US, and the data shows that there are 14 entire states without a single one, let alone major urban areas close to end-users. 

    As far as he’s concerned, proximity to these IXPs is the only way that this very low-latency real-time inference can be supported.  

    As a result, Newby is embarking on a mission alongside non-profit Connected Nation to expand the quantity of the US’s IXPs. Connected Nation has identified 125 hub communities where IXPs are needed. 

    Ground was broken on Kansas’ first carrier-neutral Internet Exchange Point (IXP) in Wichita in May 2025. 

    “Local, carrier-neutral IXPs like the one we’re building in Wichita are essential to reducing lag time and enabling the next generation of AI-powered services to operate effectively and reliably,” Newby says. 

    His vision for the AI infrastructure required to support this low-latency inference is for GPU clusters to be installed as close as possible to the IXPs, unlocking the required latency enterprise or commercial end users need for optimal performance and customer experience. 

    In less mature markets like Wichita, this isn’t necessarily an issue, but in developed markets like New York, Chicago, London or Frankfurt, power and land are at a premium, especially near the existing IXPs in the inner cities. 

    Both Roberts and Dan Bathurst, the Chief Product Officer of the neocloud Nscale, agree that proximity to end users for AI is essential.  

    “As AI adoption among consumers grows, the location of inference endpoints has become critical to both performance and cost,” Bathurst explains. 

    “Placing compute closer to users and data sources reduces latency, improves the quality of the experience, and lowers the overhead of moving data long distances.”  

    But, he acknowledges that most inferencing isn’t highly latency sensitive and can be done from regional hubs where low-cost power resides. 

    “However, for certain scenarios, the need for speed outweighs the need for cost savings. 

    “Consumer-facing services, such as speech and real-time video models, often require round-trip latencies under 100 milliseconds, which puts hard limits on how far you can be from population centres.” 

    This is something that AWS are seeing as well. 

    Roberts points to Amazon’s Rufus solution, a generative AI-powered conversational shopping assistant, as an example, stating that low-latency responses were shown to have an impact on checkout conversion.  

    In this scenario, Roberts argues that using AWS availability zones will not suffice. Local zones, which bring workloads even closer to end users, need to be employed as well. 

    Are tier 1 markets ready for this? 

    This focus on low-latency solutions begs the question of whether tier 1 markets are prepared to absorb this type of inference demand. 

    As we’ve heard countless times, large training data centres have moved further afield partly due to legacy data centre hubs being heavily power-constrained, with a lack of suitable land. 

    “The value of a MW for real-time inference in London is going to be worth more than ten times the value of 1MW for training in Iowa, just based on the supply-demand imbalance,” Newby says. 

    Ben Balderi, founder of the GPU and a GPUaaS expert, adds some additional context. 

    “In the US, which has abundant land and power capacity with easier regulatory frameworks for new power generation, larger out-of-town data centres will likely continue to make sense. It’s the proven hyperscaler model, and if hyperscalers are comfortable with the latency, neo-clouds will likely be satisfied too.” 

    But Europe, including the UK, is very power-constrained. Balderi believes Europe doesn’t have the land, regulatory frameworks, or political will to build data centres in the same way. 

    “Constrained markets face well-known challenges around power and permitting, which make scaling low-latency inference problematic,” Bathurst adds. 

    Bathurst believes the industry has anticipated this and responded by focusing on density, efficiency and smarter runtime strategies. 

    “In the near term, targeted pockets of metro capacity will cover many inference workloads, particularly when paired with efficient serving stacks and dedicated server endpoints for critical tasks,” he says. 

    “However, this won’t necessarily hold as AI becomes deeply embedded in both consumer and enterprise applications and multimodal capabilities like video and speech generation mature, leading to the demand for low-latency inference expanding in metro areas.”  

    If data centre density improvements and renewable investments don’t keep pace with this demand, some popular data centre hubs could face real pressure.  

    Bathurst advises the industry to balance the need for large-scale hubs for efficiency and economic benefit, while reserving metro capacity to meet latency-sensitive requirements.  

    “This dual strategy helps ensure that customers can scale in a cost-effective manner while still getting the performance needed for applications where time is of the essence,” he explains. 

    Balderi sees another solution, and it comes from an unexpected source. 

    “The commercial real estate market is currently struggling as remote work has maintained its appeal after the pandemic and many office buildings are sitting half full,” he observes. 

    “It’s not massive by data centre standards, but you might find 500 kilowatts here, 300 kilowatts there, depending on the building size and location. If that power isn’t being used due to low office occupancy, and you already have the grid connection, there’s potential to monetise it.” 

    Balderi thinks that this currently unutilised power could be aggregated and used for a distributed AI inference platform.  

    With much higher rack densities than traditional workloads, space is not the issue; it’s just getting the power and the hardware close to where people are going to be using it.  

    For an enterprise, how much closer can you get than your own basement? 

    Returning to on-prem 

    This highlights another potential trend as AI inference develops: a return to enterprises hosting their own compute, either on premises or in a colocation facility.  

    Not only is there the potential to establish micro inferencing clusters in the emptier real estate in major urban centres, but cost factors and control come into the equation as well. 

    A report from the Uptime Institute published in January 2025 showed that dedicated infrastructure was cheaper than the cloud if utilisation rates were above 32.5%, for an NVIDIA DGX H100 hosted in a Northern Virginia data centre. 

    This is just one example, and GPU per hour prices have dropped significantly from cloud providers this year, but for enterprises that anticipate heavy utilisation, the incentive to own and deploy their own hardware remains. 
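
    The underlying break-even logic is simple: owning wins once annualised hardware and hosting costs fall below what the equivalent cloud hours would bill. The figures below are our illustrative assumptions, chosen to land near the report’s 32.5% threshold; they are not the Uptime Institute’s inputs.

```python
# Illustrative rent-vs-own break-even, not the Uptime Institute's model.
cloud_price_per_hour = 60.0       # assumed rate for an 8-GPU instance
owned_cost_per_year = 170_000.0   # assumed amortised hardware + hosting
hours_per_year = 8760

break_even = owned_cost_per_year / (cloud_price_per_hour * hours_per_year)
print(f"owning is cheaper above ~{break_even:.1%} utilisation")  # ~32.3%
```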

    Data sovereignty is important as well. Across Europe, the Middle East and APAC, an over-reliance on foreign, primarily US, tech providers is concerning AI developers and governments alike. 

    “I think you’re going to see a lot of people that want to have their data remain, ideally, on-premises,” says Kevin Wollenweber, Senior Vice President and General Manager of Data Centre, Internet, and Cloud Infrastructure at Cisco. 

    Balderi agrees that there are likely to be a not insignificant number of European SMEs that will want to avoid using a cloud environment, pointing to comments from senior Microsoft executives speaking to the French Senate, who could not guarantee customer data would not be shared with US authorities if Microsoft was asked to do so under the US Cloud Act.  

    But in practice, Wollenweber acknowledges that this is much easier said than done. 

    “The challenge is a lot of our facilities, and a lot of our enterprise customers’ facilities aren’t ready for the power and cooling requirements that we see,” he says. 

    If these challenges can be overcome, though, Wollenweber thinks a hybrid model could start to emerge. 

    “For enterprise applications closer to the datasets themselves, you’ll see more on-premises usage, and even hybrid approaches where companies use cloud resources for fine-tuning and then run inference locally within their infrastructure,” he predicts. 

    This sentiment from enterprises is not lost on Roberts, and it’s something AWS is prepared to support with its Outposts solution, which enables AWS hardware to be deployed in a customer’s colocation or independent data centres.  

    “That’s going to give you super low latency because then you could deploy open-source models directly to that if you wanted to,” he explains. 

    Once again, the use cases and end-user experience are the ultimate drivers of where compute infrastructure will be deployed.  

    To some extent practical challenges like energy availability, land use, security and sovereignty will impact the decision as well. 

    “Our approach to how we’re looking at our data centres and where we put them, is always working backwards from customer demand,” Roberts summarises. 

    For data centre developers, keeping track of the technological advancements in applications, use cases and compute infrastructure will be vital to make sure they can provide the right capacity in the right place at the right time. 

    In a nutshell, more remote, larger AI factories are ideal for batch and compute-intensive inference where latency is not an issue due to cheap power and pre-existing, scaled high-density compute resources. 

    But as latency starts to become important, metro colos, cloud availability zones and smaller sites closer to end users will be required – perhaps in more quantity than the industry is prepared for today. 

    And finally, as AI capabilities grow and adoption increases, inference may move outside of neutral and cloud data centres altogether, either to end devices or on-premises facilities to enable sovereignty, control, speed and cost-reduction.

    Building for inference may seem more familiar to the data centre industry than building for training. But due to these complexities, that familiarity does not mean simplicity.

    Source: TheTechCapital.com.

    Investment Reports Interview with Rob Roy, Switch CEO & Founder
    https://www.switch.com/investment-reports-interview-with-rob-roy-switch-ceo-founder/ – October 1, 2025

    Q1. Meeting the Growing Demand for AI Workloads

    As artificial intelligence advances, the role of the data center is transforming. What used to be a neutral “compute warehouse” is now becoming an AI factory, purpose-built to support the intensive demands of training and deploying large-scale models.

    Facilities designed for general IT workloads cannot meet the electrical, thermal, and network requirements of modern AI clusters.

    Switch sees three infrastructure advancements as central to this evolution:

    1. Scalable Density with EVO

AI systems are rapidly increasing their power needs, with GPU and accelerator roadmaps moving from racks consuming tens of kilowatts to racks requiring megawatts. Switch anticipated this change. Our EVO architecture scales from 50 kW racks up to 2,200 kW racks, aligning directly with the growth path of advanced GPU systems. This flexibility allows customers to expand capacity without disruption (a back-of-the-envelope sketch of what this density range implies follows this list).

2. Unified Data and Network Fabrics

AI is not only about compute; it is about data. Moving, storing, and synchronizing massive training sets requires more than isolated solutions. Switch's networking business enables a unified data fabric that integrates storage, transport, and interconnect into one high-performance system. This reduces latency, simplifies orchestration, and connects heterogeneous systems and sites without fragmenting AI pipelines.

3. End-to-End Digital Orchestration

The next step extends beyond the data hall itself. AI clusters must be orchestrated in concert with utility power, renewable availability, and enterprise workflows. Switch is advancing digital design and digital twin platforms that model, schedule, and optimize workloads across both digital and physical layers. By connecting data centers directly with energy systems, we ensure AI infrastructure operates as an active participant in the grid, balancing resilience, efficiency, and responsibility.
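To make the density point in item 1 concrete, here is a back-of-the-envelope sketch of how rack density reshapes a fixed power envelope. The 100 MW block is a hypothetical figure for illustration, not Switch design data.

```python
# How many racks a fixed block of critical power supports as rack density
# scales from 50 kW to megawatt-class racks. Figures are illustrative.
CAMPUS_POWER_KW = 100_000  # a hypothetical 100 MW critical-power block

for rack_kw in (50, 250, 1_000, 2_200):
    racks = CAMPUS_POWER_KW // rack_kw
    print(f"{rack_kw:>5} kW racks -> {racks:>5} racks per 100 MW block")

# 50 kW racks -> 2,000 racks; 2,200 kW racks -> 45 racks: the same power
# envelope collapses into a far smaller physical footprint.
```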

    In short, the growth of AI requires infrastructure that is flexible, unified, and integrated with the wider energy ecosystem. With EVO density, converged fabrics, and digital orchestration, Switch is building the foundations of the AI factory era.

    Q2. Building Resilient and Secure AI Infrastructure

    The United States leads in AI development, but much of the industry depends on international supply chains. At Switch, resilience is rooted in control.

    All Switch campuses are in the United States. We design, build, and operate our facilities ourselves, which means every stage of the process—from concept to daily operations—remains under our direct oversight. This reduces risk, strengthens security, and ensures consistency across our ecosystem.

    We also prioritize domestic sourcing wherever possible. Switch continuously works to manufacture critical components in the U.S. and to strengthen trusted local partnerships. This improves reliability and allows us to innovate without depending on offshore vulnerabilities.

    Resilience is never complete. We are always improving, refining, and future-proofing the infrastructure our customers depend on. By keeping campuses U.S.-based, vertically integrated, and focused on continuous improvement, Switch ensures its AI infrastructure is secure, reliable, and ready for the next era.

    Q3. The Role of Edge in a Distributed AI World

    AI models are growing larger and more resource-intensive, prompting new conversations about “edge computing.” Too often these discussions imagine small servers scattered across metro areas. That view does not reflect the reality of modern AI.

    When you use ChatGPT or another large model, you type a question, there is a pause, and tokens stream back one by one. Whether the first token takes 500 milliseconds or 5 seconds, the experience still feels instant. That is because the heavy computation is not in a metro data center. It is running in AI factories with racks drawing hundreds of kilowatts to over a megawatt under liquid cooling. The extra milliseconds of fiber travel are invisible compared to the model’s compute time.
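The arithmetic behind that claim is simple. Light in fiber propagates at roughly 200,000 km/s (about two-thirds of c), so even long hauls add little next to a 500-millisecond first token. The distances below are illustrative, not specific routes.

```python
# Fiber round-trip time versus model compute time for a first token.
FIBER_SPEED_KM_S = 200_000  # approximate speed of light in fiber

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_SPEED_KM_S * 1_000

for km in (50, 500, 2_000):
    rtt = round_trip_ms(km)
    share = rtt / 500 * 100  # versus a 500 ms time-to-first-token
    print(f"{km:>5} km away: {rtt:5.1f} ms RTT = {share:4.1f}% of a 500 ms first token")

# Even a 2,000 km round trip adds ~20 ms -- 4% of a fast 500 ms response.
```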

This is why Switch has always invested in regional exascale campuses. They are designed for the vertically scaled systems AI requires, not just for caching content. We pioneered building whole regions for density, resiliency, and integration, and that model has become the industry standard. We have built Tier IV enterprise edge sites before, and we can evolve that product for the AI era.

    Edge still has a role, but not as thousands of small boxes. Upcoming GPUs and accelerators are deployed at rack level, often consuming more power than entire legacy caching sites. The edge for AI will look like regional clusters of dense racks positioned where latency truly matters. These clusters will manage real-time inference while synchronizing with larger AI factories for training and large-scale hosting.

    For Switch, this is not new. Our campuses are built for vertically scaled systems, with fabrics that connect seamlessly to regional deployments. The architecture that protects user experience will be a continuum: rack-level inference clusters at the edge where immediacy is required, synchronized with exascale factories where efficiency and resilience are maximized.

    Q4. Balancing Energy Demand and Environmental Responsibility

    The growth of AI training and inference has put a spotlight on energy use. At Switch, sustainability is not an add-on, it is a foundation. From the beginning, we designed our campuses to balance scale with responsibility, and we continue to raise that standard as AI workloads expand.

    Efficiency by Design and Innovation

    Switch pioneered many of the industry’s efficiency breakthroughs, and we continue to advance them. Our EVO architecture is liquid-cooled by design, using a closed-loop system that eliminates all water loss and improves rack-level performance. This allows us to scale from today’s high-density systems to tomorrow’s megawatt racks without wasting energy or consuming any water. Efficiency is embedded in the physical design, not added afterward.

    Strong Policy, Transparency, and Continuous Improvement

    Switch operates with direct access to renewable energy resources and has developed unique processes for power development, purchasing, and use. Sustainability is a cycle of auditing, reporting, and improving every year, never a task considered finished.

    Community and Resiliency Alignment

    Data centers are part of larger ecosystems. Switch works with utilities, regulators, and local communities to ensure projects strengthen, not strain, regional infrastructure. We invest in recycled water systems, renewable generation, and grid partnerships so that growth creates resilience for the regions where we operate.

    For Switch, sustainability and AI growth are inseparable. The intelligence built in our factories must be aligned with responsibility to people, communities, and the planet.

    Q5. The Next Decade of AI Infrastructure

    High-Power Rack Distribution

    The challenge is no longer just cooling a room of servers. It is delivering and managing power at the rack level in megawatts. Efficient distribution, liquid cooling, and closed-loop systems will define the next decade. The way racks are powered, cooled, and synchronized with the grid will matter as much as the chips inside them.
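To see why rack-level power becomes the defining challenge, consider the current a single 1 MW rack draws at common distribution voltages. The voltages here are illustrative industry examples, not a specific Switch design.

```python
# Current required by a megawatt-class rack at example distribution voltages.
import math

RACK_POWER_W = 1_000_000  # a 1 MW rack

def three_phase_amps(power_w: float, volts_ll: float, pf: float = 1.0) -> float:
    """Line current for a three-phase AC feed: I = P / (sqrt(3) * V * PF)."""
    return power_w / (math.sqrt(3) * volts_ll * pf)

print(f"415 V three-phase AC: {three_phase_amps(RACK_POWER_W, 415):,.0f} A")
print(f"800 V DC:             {RACK_POWER_W / 800:,.0f} A")

# ~1,391 A at 415 V AC versus ~1,250 A at 800 V DC -- either way, orders of
# magnitude beyond the 32-63 A feeds typical of legacy enterprise racks.
```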

    Emerging Technologies and Trends

Several developments could fundamentally reshape AI infrastructure in the U.S.:

    • Heterogeneous Compute: GPUs will remain central, but new accelerators, custom silicon, and eventually quantum co-processors will demand infrastructure that can host a mix of architectures side by side.
• Energy-Aware Orchestration: Workloads will be scheduled based on real-time grid conditions, carbon intensity, and renewable availability. Clusters will flex in harmony with the power system, with safety and audit layers embedded as standard (a toy scheduling sketch follows this list).
    • Federated and Distributed AI: Instead of siloed deployments, secure federated fabrics will allow models to learn from distributed data without moving it, reshaping networking, compliance, and governance.
    • Digital Twin Integration: Data centers will be modeled, monitored, and optimized as digital twins, enabling predictive management of energy, water, and workload flows.
    • AI Safety and Audit Layers: Just as cybersecurity became a native layer of IT, safety, alignment, and rollback will become native to AI infrastructure operations.
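As a toy illustration of the energy-aware orchestration idea above, a deferrable job can simply be placed in the forecast window with the lowest carbon intensity. The forecast values are invented for the example.

```python
# Pick the start hour for a deferrable job that minimises average forecast
# grid carbon intensity over the job's runtime.
def best_start_hour(forecast_g_per_kwh: list[float], job_hours: int) -> int:
    """Return the start hour whose window has the lowest total intensity."""
    windows = range(len(forecast_g_per_kwh) - job_hours + 1)
    return min(windows, key=lambda h: sum(forecast_g_per_kwh[h:h + job_hours]))

# Hypothetical 12-hour carbon-intensity forecast (gCO2/kWh).
forecast = [430, 410, 380, 300, 220, 180, 170, 210, 320, 400, 440, 460]
start = best_start_hour(forecast, job_hours=4)
print(f"Schedule the 4-hour job at hour {start}")  # -> hour 4 (220+180+170+210)
```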

    For Switch, the next decade is about more than scale. It is about building infrastructure flexible enough for new architectures, intelligent enough to orchestrate around energy realities, and principled enough to embed responsibility at every layer. AI factories will continue to evolve, and Switch will continue to shape that evolution.

Source: InvestmentReports.co

    The post Investment Reports Interview with Rob Roy, Switch CEO & Founder appeared first on Switch.

Investment Reports Interview with Jason Hoffman, Switch Chief Strategy Officer https://www.switch.com/investment-reports-interview-with-jason-hoffman-switch-chief-strategy-officer/ Wed, 01 Oct 2025 22:55:06 +0000 https://www.switch.com/?p=35146
    What is Switch, and what is your role within the company?

    Switch is a U.S.-based data center designer, builder, and operator, with a history of more than 26 years in the industry. We specialize in large-scale data center campuses that are often built on thousands of acres and support gigawatt-level capacities. Our expertise goes beyond simply constructing buildings—we are focused on creating and running highly integrated and scalable infrastructure that powers some of the most advanced technology in the world.

    In terms of my role, I work with our CEO on the strategic direction of the company and its people to ensure that our long-term vision aligns with the needs of our clients. From the start, our campuses were designed to serve the most demanding workloads in the industry. That scale has become even more critical with the rise of AI, and it’s where Switch has always differentiated itself: delivering infrastructure capable of meeting the most demanding requirements.

    We know Switch has developed AI-focused data centers: how are they designed to support the full AI lifecycle, from data ingestion to large-scale inference?

Switch has always been ahead in managing high-density workloads. Our founder, Rob Roy, pioneered designs such as isolated hot aisles with plenum roofing more than two decades ago; these are now widely adopted across the industry. These innovations allowed us to handle very high-density clusters long before AI became mainstream. For instance, when NVIDIA's H100 GPUs were still air-cooled, we were among the first to deploy them at scale.

    More recently, we redesigned our facilities to function almost like individual “refrigerator chambers” at the rack level. This approach enables cooling capacity of more than two megawatts per rack, giving us the flexibility to host world-first deployments like liquid-cooled H100s, B100s, GB200s and GB300s. Our ability to consistently host these cutting-edge systems comes from designing infrastructure that can evolve in tandem with technological advances.

    Switch has expanded into new markets, including Austin and Atlanta. What drives these decisions on where to build next?

    Our headquarters and largest footprints are in Nevada, where we have developed unique capabilities as both a data center operator and an unregulated utility under a designation called 704B. This allows us to go beyond data center development: we also handle infrastructure like networks, substations, transmission lines, and even power generation. That integrated approach is a hallmark of Switch, because we don’t just build facilities; we build the infrastructure ecosystems around them.

    When we expand, we look for markets where we can make the same long-term investments. In Atlanta, for example, we’re partnering within a co-op region that allows us to co-develop generation capabilities alongside the data centers. Our goal isn’t to drop in a single building and leave. Instead, we want to create campuses that operate like batteries for the local grid: able to generate, distribute, and stabilize power while contributing to the community. That holistic vision helps us establish strong relationships with local and state governments wherever we go.

You have secured $20 billion in sustainable financing since 2024. How does this funding align with your strategic goals, particularly in expanding AI-focused data centers and improving operational efficiency?

    Capital is the oxygen of any business, and the scale of today’s buildout is truly unprecedented. To put it in perspective, the last time the U.S. saw this level of power infrastructure expansion was with the rise of air conditioning and refrigeration. What AI is driving today is of a similar scale. That’s why sustainable financing is critical—it allows us to accelerate expansion while staying true to our efficiency and sustainability goals.

    This funding ensures we can continue hosting the most advanced AI systems while building responsibly. It gives us the capacity to develop more campuses, integrate next-generation cooling and power solutions, and pursue sustainability initiatives that match the scale of growth. With this capital, it is possible to keep pace with both customer demand and our own commitment to environmental stewardship.

    Even with such financing, are there challenges capital alone cannot address?

    Yes. While capital is abundant, not all of it is stable. A significant portion of investment in the AI space is speculative. Switch focuses on the investment-grade portion of the market. Our customers are Fortune 500 companies and hyperscalers who need to build large-scale, long-term facilities as part of their core AI strategies. That stability is what allows us to make long-term infrastructure commitments.

    Another challenge is ensuring that capital translates into projects that meet the specific requirements of this rapidly evolving space. It’s not simply about pouring money into buildings. It’s about designing campuses with the density, resilience, and sustainability to support workloads that didn’t even exist five years ago. Addressing those demands requires innovation and technical expertise.

How do liquid cooling and energy-efficient designs fit into your strategy?

    We’ve always emphasized sustainable design, and liquid cooling is a major advancement. Cooling with liquid is thousands of times more efficient than just cooling with air, especially for high-density AI systems. That said, air will always play a role, but the heavy lifting is done by liquid systems designed to operate in closed loops. This means once the system is charged, water usage is effectively zero: an important factor in reducing environmental impact.

    Beyond cooling, our collaborations with utility providers allow us to manage our energy sources. We can buy, sell, and even generate power to work towards long-term sustainability. This flexibility means we can invest in projects that align with both community needs and our operational demands.

    As AI demand grows, what challenges and opportunities do you foresee for Switch?

    One of the current technical challenges is power distribution. While liquid cooling addresses heat, the next frontier is figuring out how to deliver power efficiently at extreme densities. We’re talking about a hundredfold increase in density, where a gigawatt campus that once required over a thousand acres can now fit within just ten. Designing safe and reliable systems at that scale requires entirely new approaches to the power chain.

    To put this in perspective, imagine a university campus like UCLA occupying 800 acres. Now compress that into one-quarter of the space, yet consuming as much power as the entire city of Los Angeles.

    That’s the reality of AI clusters today: single facilities and campuses matching or exceeding the energy footprint of major cities. It’s both a huge challenge and an opportunity for Switch to lead in designing infrastructure for this new   reality.

    Looking at the bigger picture, is U.S. infrastructure ready to support such rapid AI-driven growth?

    No one is fully ‘ready’; if we were, the transition would be trivial. The real test is whether we can rise to the challenge. A useful comparison is air conditioning; over the second half of the twentieth century, cooling demand became one of the dominant forces shaping the U.S. grid, driving tens of gigawatts of new capacity to meet summer peaks. Today, AI data centers are emerging as a similarly transformative load; we are once again in the process of building on the order of tens of gigawatts of new capacity, roughly the same scale we mobilized for air conditioning, but now for AI.

    The difference is who is driving it. In the twentieth century, the world’s largest companies were in oil and gas; they had little direct stake in whether the grid could handle air conditioning. Today, the companies at the top, worth trillions, are the very ones deploying AI; that alignment means unprecedented capital, urgency, and global competition are behind this buildout. The open question is not whether it is possible—we have proven we can scale at this level before; it is whether deployments will be broadly distributed or concentrated in a few nations. That choice will shape not only infrastructure, but geopolitics.

    You have a background in both technology and finance – how has that shaped your leadership philosophy at Switch?

    I think of everything as a product. On the technical side, our campuses and facilities are products designed for customers; they must perform, scale, and evolve. On the financial side, investors are not simply funding us; they are buying financial products with clear expectations. We manage those offerings with the same discipline as our infrastructure, making sure pipelines, platforms, and service are equally strong.

    The third product is our culture. Switch is still founder-led; our founder remains directly involved in design, often hands-on. That passion permeates the company, and when we hire, we are selling our culture as much as the role itself. By treating technical products, financial products, and culture with equal importance, we unify our approach around serving people: whether they are customers, investors, or employees.

    Source: InvestmentReports.co

    The post Investment Reports Interview with Jason Hoffman, Switch Chief Strategy Officer appeared first on Switch.

Switch Expands Credit Facilities, Raising $20 Billion Since 2024 https://www.switch.com/switch-expands-credit-facilities-raising-20-billion-since-2024/ Mon, 14 Jul 2025 22:59:09 +0000 https://www.switch.com/?p=34214
    Capital raised through sustainable financing structures to accelerate growth, reduce borrowing costs and retire acquisition-related debt

    LAS VEGAS — JULY 15, 2025 — Switch, a premier provider of AI, cloud and enterprise data centers, today announced an expansion across its Borrowing Base and Revolving Credit Facilities to $10 billion. With this latest milestone, Switch has raised $20 billion since 2024 through sustainable financing structures, including sustainability-linked loans, green loans and green bonds.

    The total capital raised includes $5.2 billion in previously announced CMBS and ABS issuances, among the largest in the sector, $4.5 billion in project-level infrastructure financings and the newly upsized credit facilities. Proceeds from these transactions will support the growth of Switch’s contracted campus developments nationwide, reduce its cost of capital and retire 100% of the bank debt incurred during its 2022 take-private transaction.

This capital foundation also supports the continued expansion of Switch's cutting-edge product portfolio, including its latest innovation – Rob Roy's EVO AI Factories. Purpose-built for next-generation AI, hyperscale cloud and enterprise workloads, these AI factories feature a hybrid air- and liquid-cooled design that supports extreme densities up to 2 MW per rack, all aligned with NVIDIA DGX™ and MGX roadmaps and ready for their latest systems. Construction is underway across all five Switch campuses, including Tahoe Reno and Atlanta – two of the fastest-growing and most strategically important AI infrastructure markets in the U.S. – with phased capacity deliveries secured by long-term customer contracts.

    “As digital infrastructure becomes more critical to enabling AI and next-generation technologies, our focus remains on delivering performance and reliability at scale,” said Thomas Morton, President of Switch. “With strong visibility into contracted demand, this capital access positions us to execute with speed and efficiency, while making sure the infrastructure we deliver stands up to future demands and continues to support our customers’ evolving needs.”

    “Our capital strategy is centered on aligning long-term customer commitments with efficient, scalable funding,” added Madonna Park, Chief Financial Officer of Switch. “This expanded access to capital allows us to execute on secured developments and also provides us the flexibility and liquidity to best position us for continued growth.”

    “We’re pleased to have continued strong support from both new and long-standing capital partners,” said Jesse Burros, Chief Investment Officer of Switch. “The oversubscription of our most recent financing reflects institutional confidence in our platform, driven by a fully contracted pipeline and disciplined execution.”

    Backed by a broad and collaborative syndicate of leading financial institutions, the recent upsize of the Corporate Revolving Credit Facility was led by TD Securities and J.P. Morgan, who also served as Joint Lead Arrangers and Joint Bookrunners. ING serves as Sustainability Coordinator and TD Securities serves as Administrative Agent.

    The secured Borrowing Base Facility was led by J.P. Morgan and TD Securities, who served as Co-structuring Agents, Joint Lead Arrangers and Joint Bookrunners. ING and TD Securities serve as Co-sustainability Coordinators and TD Securities also serves as Administrative Agent.

    Across the other financing programs, the 2024 ABS issuances were supported by Morgan Stanley and MUFG as Co-structuring Advisors. TD Securities and RBC Capital Markets, LLC served as Joint Bookrunners. Passive Bookrunners included Société Générale, Truist Securities, Scotiabank, Santander, Citizens Capital Markets, Goldman Sachs and Guggenheim. ING, NatWest Markets, Standard Chartered Bank and Zions Capital Markets acted as Co-managers.

    For the 2025 ABS issuance, Morgan Stanley and TD Securities served as Co-structuring Advisors. BMO Capital Markets, MUFG and Société Générale acted as Joint Bookrunners. Citizens Capital Markets, ING, Scotiabank, Standard Chartered Bank and Truist Securities participated as Passive Bookrunners. Co-managers included BofA Securities, BNP Paribas, CIBC Capital Markets, Mizuho, NatWest, PNC Capital Markets LLC and SMBC.

    For the CMBS offering, Citigroup Global Markets Inc., Barclays, Goldman Sachs & Co. LLC, RBC Capital Markets and Wells Fargo Securities, LLC acted as Co-lead Managers and Joint Bookrunners.

    In the project-level infrastructure financings, MUFG and SMBC served as Structuring Banks, Initial Coordinating Lead Arrangers and Joint Bookrunners. Mizuho and Société Générale also served as Initial Coordinating Lead Arrangers and Joint Bookrunners. ING, SMBC and Société Générale serve as Joint Green Loan Coordinators, reinforcing Switch’s commitment to sustainable infrastructure development. MUFG serves as Administrative Agent.

    Milbank LLP acted as legal counsel to Switch for the Corporate Revolving Credit Facility, Borrowing Base Facility and project financings. Kirkland & Ellis LLP, Milbank LLP and Simpson Thacher & Bartlett LLP advised Switch on the ABS transactions, with Simpson Thacher & Bartlett LLP also serving as Switch’s counsel for the CMBS offering.

    Paul Hastings acted as lenders’ counsel for the Corporate Revolving Credit Facility and Borrowing Base Facility. Davis Polk & Wardwell LLP acted as lenders’ counsel for the project-level infrastructure financings. Latham & Watkins represented the underwriters on the ABS transactions and for the CMBS offering, Dechert LLP served as lenders’ counsel and Orrick, Herrington & Sutcliffe LLP represented the underwriters.

About Switch

Switch, founded in 2000 by CEO Rob Roy, stands at the forefront as the leading data center campus designer, builder and operator. As the AI, cloud and enterprise data center experts, Switch provides the most modular, scalable and sustainable data centers to the most discerning clients. The company offers a comprehensive, future-proof portfolio ranging from highly dense liquid-cooled AI to hyperscale cloud and the industry's highest-rated and most secure enterprise data centers. To learn more, visit www.switch.com and follow Switch on LinkedIn, Facebook and X.

    The post Switch Expands Credit Facilities, Raising $20 Billion Since 2024 appeared first on Switch.
