Global biopharmaceutical leader improves infrastructure reliability https://www.zenlayer.com/blog/global-biopharmaceutical-leader-improves-infrastructure-reliability/ Mon, 20 Apr 2026 17:24:48 +0000 https://www.zenlayer.com/?p=29632 Key points

  • Migrated and expanded a critical Shanghai deployment to eliminate single-site risk and improve operational resilience
  • Connected eight locations across China using Zenlayer Cloud Router with centralized monitoring, visibility, and control
  • Strengthened internet resiliency with integrated primary and backup connectivity

 

About the customer

Industry: Biopharmaceutical research and manufacturing
Needs: Infrastructure reliability, visibility, redundancy, and responsive in-country support
Zenlayer services used: Cloud Router, IP Transit, Edge Colocation, Remote Hands and local support

The customer is a global leader in biopharmaceutical research and manufacturing whose operations span multiple offices and data centers across China. A central point of presence in Shanghai supported connectivity for these sites, making the reliability and resilience of that deployment critical to day-to-day operations.

After a prolonged outage exposed the risks of their existing setup, the company sought a partner with in-country infrastructure, direct operational visibility, and responsive local support to strengthen their China footprint for the long term.

 

The challenge: improving resilience and visibility in China

The biopharmaceutical manufacturing leader operated a single point of presence in Shanghai that supported connectivity for multiple offices and data centers across the country. A prolonged data center outage at this key location highlighted the risk of relying on a single site and the associated challenges surrounding visibility, troubleshooting, and response times.

Although the customer worked with Tier-1 global providers, those carriers relied heavily on third parties for in-country infrastructure and support. This significantly limited visibility into incidents and delayed resolution when issues occurred, prompting the customer to reassess whether traditional Tier-1-only procurement models were enough to meet their operational needs within China.

 

Strengthening the Shanghai core with local operational control

We supported the migration and expansion of the biopharmaceutical manufacturer’s Shanghai deployment into a partner-operated, enterprise-grade data center.

While the facility itself is operated by a trusted local partner, we maintain a sizable footprint in Shanghai with dedicated on-site engineering teams who can step in directly to investigate, coordinate, and resolve incidents. This minimizes the customer’s reliance on third-party escalation chains to improve responsiveness and overall operational stability.

With a stronger, locally supported core in Shanghai, the customer reduced single-site risk and mitigated a critical point of failure, gaining greater confidence in the reliability of their China infrastructure and a clean, repeatable model for future expansions.

 

Unifying regional connectivity through centralized mesh networking

To improve connectivity and operational visibility, the biopharmaceutical manufacturer leveraged Cloud Router, our mesh networking service, deployed directly on our global private backbone. This network connected their newly established Shanghai core to eight strategic locations across China, forming a centrally managed and consistent architecture across sites.

Managed through zenConsole, our self-provisioning platform, Cloud Router provides centralized monitoring, alerts, and traffic visibility to give the customer a better understanding of network performance and facilitate rapid issue responses.

Running directly on our global private backbone also unlocked capabilities that could not be achieved via traditional colocation or piecemeal multi-provider solutions, empowering the customer to manage and operate their China network more proactively.

 

Reinforcing internet resiliency with built-in redundancy

To reinforce network resiliency in Shanghai, we provisioned IP Transit in-city as the primary path with built-in redundant connectivity. We also delivered last-mile connectivity to office locations, supporting the customer’s redundancy strategy while simplifying overall deployment and operations.

By consolidating core connectivity, backup paths, and last-mile services under a single operational partner, the customer drastically reduced infrastructure management complexity and improved coordination across their deployments in China, helping ensure consistent performance while aligning with global reliability standards.

 

Looking ahead

The customer now has a repeatable, scalable model for supporting operations across China. As their footprint grows, the same SDN-based architecture can be extended to new locations while maintaining consistent performance and operational control. Zenlayer will continue supporting ongoing operations and future expansion as requirements evolve.

 

Streamline global operations with Zenlayer

We leveraged our deep regional expertise, cloud networking service, and on-the-ground operational support to help the customer modernize and stabilize its infrastructure in China.

If you’re looking to expand your company’s global footprint, improve experiences for users, or accelerate apps or services, talk to a Zenlayer solution expert today.

For the fastest service, check out zenConsole — our self-service platform that lets you deploy around the world in minutes.

 

Zenlayer Launches Fabric Port in Singapore with Global Reach and Free Metro Connectivity https://www.zenlayer.com/blog/zenlayer-launches-fabric-port-in-singapore-with-global-reach-and-free-metro-connectivity/ Wed, 18 Mar 2026 17:23:57 +0000 https://www.zenlayer.com/?p=29466 LOS ANGELES — Oct 28, 2025 — Zenlayer, the distributed cloud for AI, today announced the launch of its Fabric Port service in Singapore, expanding its platform to simplify how enterprises connect and scale digital infrastructure globally. The Fabric Port serves as a single entry point that enables organizations to provision unlimited virtual connections to public cloud platforms, IP transit providers, internet exchanges, and data centers. From a single port in Singapore, customers gain on-demand access to Zenlayer’s ecosystem of connectivity partners, powered by a global backbone spanning six continents.

“Our goal is to remove the barriers that slow down global digital expansion,” said Joe Zhu, Founder & CEO of Zenlayer. “By including metro connectivity in Singapore at no additional charge, we are giving our customers a simpler, more cost-effective way to scale their infrastructure. Singapore is the starting point for this vision, as we continue to expand this streamlined architecture to other major interconnection hubs.”

Fabric Port also serves as a foundational component of Zenlayer’s Fabric for AI, a connectivity architecture built for distributed AI infrastructure. As organizations deploy AI agents across regions and compute environments, they require high-bandwidth, low-latency interconnection to coordinate inference traffic and minimize token transit times between compute nodes and model endpoints. Fabric Port supports port speeds of up to 400 Gbps to meet these demands and enable efficient data movement across global AI infrastructure.

Singapore ranks among the world’s most vital interconnection hubs, linking major networks across APAC, EMEA, and the Americas. As the inaugural market for Fabric Port, Singapore serves as the foundation for Zenlayer’s broader strategy to deploy this scalable entry point across major hubs worldwide.

 

 

About Zenlayer

Zenlayer is the distributed cloud for AI that powers AI experiences through distributed inference and global connectivity. Businesses utilize Zenlayer’s on-demand compute and networking to deploy and run applications at the edge. With 300+ edge nodes across 50 countries, 220+ Tbps of global bandwidth, and 10,000+ direct cloud and network connections, Zenlayer helps businesses reach 85% of the internet population within 25 ms.

For more information, visit www.zenlayer.com and connect with us on LinkedIn.

Industrial AI startup accelerates global data access through unified cloud connectivity https://www.zenlayer.com/blog/industrial-ai-startup-accelerates-global-data-access-and-cloud-connectivity/ Mon, 16 Mar 2026 21:17:35 +0000 https://www.zenlayer.com/?p=29330 Key points

  • Enabled secure, high-volume data ingestion and preparation on dedicated bare metal with GPU support for global inferencing
  • Unified access to prepared datasets through our global backbone fabric with direct connectivity to all major public clouds
  • Optimized data movement and cloud connectivity to significantly reduce egress costs and simplify global operations

 

About the customer

Industry: Industrial AI technology
Needs: Multi-cloud connectivity, cross-border connectivity, data sovereignty compliance, network diversification, vendor consolidation, cost reduction
Zenlayer services used: Bare Metal, Private Connect, Cloud Connect, IP Transit

The customer is a fast-growing industrial AI startup developing machine learning solutions for manufacturing automation. As they geared up to launch their platform and expand into multiple regions, they needed a reliable, high-performance global data architecture to support large-scale data ingestion, secure storage, and seamless access for distributed data science teams across public cloud environments.

Speed, scalability, and data privacy were key to meeting their deadlines and compliance around the world. The AI startup searched for a partner that could deliver quickly while providing the resources and capabilities needed to support their platform’s long-term growth and GPU-enabled workloads.

 

The challenge: enabling global data access under tight timelines

The customer’s industrial AI platform needed to frequently ingest and prepare massive volumes of data to make it available to data scientists around the world without exposing sensitive operational details. Their existing dataflow relied on inefficient regional replication and cloud uplinks, creating high egress costs, fragmented workflows, and limited visibility into cross-border data movement.

Additionally, they were operating under a tight launch timeline that demanded production-ready infrastructure in days, not weeks. Any delays or architectural missteps would have negatively impacted their time to market and ability to scale into new regions.

 

Futureproofing data operations via dedicated bare metal

We provisioned eight dedicated, high-performance bare metal servers alongside a scalable storage solution to support continuous data ingestion and preparation, helping the AI platform maintain strict data privacy requirements while ensuring predictable performance during high-volume ingestion.

We deployed this environment in multiple regions to support the customer’s data sovereignty needs and designed it with access to our global GPU-powered inferencing infrastructure, enabling accelerated compute for improved data quality assurance and model validation without re-architecting their platform.

 

Enabling global data access through a unified fabric and cloud connectivity

To eliminate the customer’s reliance on costly and time-consuming region-specific replication, we leveraged our global private backbone and Cloud Connect to redesign how the AI platform shares prepared datasets with distributed research teams. Data now flows through our unified fabric, keeping sensitive information off public networks while providing direct access to all major public cloud environments.

This setup gives data scientists dependable access to the same datasets from their preferred cloud environments without duplicating storage or managing complex synchronization. Cloud Connect gives the platform native cloud access while helping maintain consistent performance and centralized control over data availability.

 

Optimizing data movement and network routing to reduce cost and complexity

Using Zenlayer cloud networking services and IP transit, we restructured how the AI startup moves data between regions and into public cloud environments. Their platform’s traffic is now routed more efficiently with no unnecessary regional cloud egress and drastically reduced reliance on expensive uplinks.

By moving data directly through our global private backbone network instead of replicating regional cloud buckets, they minimized ongoing cloud costs while unlocking better visibility and control over global data flows, simplifying ongoing operations by consolidating connectivity and routing under a single platform.

 

Looking ahead

Now that the AI startup has a scalable, unified multi-cloud environment, they can seamlessly expand into additional regions using the same standardized architecture. As their global footprint and automation services grow, we’ll continue helping the platform scale efficiently while maintaining performance, data privacy, and ease of access for research teams worldwide.

 

Streamline global operations with Zenlayer

We leveraged our hyperconnected bare metal, global private backbone, and direct connections to leading public clouds to help the customer scale its industrial AI platform with confidence.

If you’re looking to expand your company’s global footprint, improve experiences for users, or accelerate apps or services, talk to a Zenlayer solution expert today.

For the fastest service, check out zenConsole — our self-service platform that lets you deploy around the world in minutes.

 

How to build a load-balanced web cluster with Terraform https://www.zenlayer.com/blog/how-to-build-a-load-balanced-web-cluster-with-terraform/ Mon, 09 Mar 2026 08:00:34 +0000 https://www.zenlayer.com/?p=29399

 

About the Author: Jeff Geiser, VP of Customer Experience

Based out of Ashburn, Virginia, he leads Zenlayer’s end-to-end customer experience initiatives. With over 15 years in technical leadership at UUNet, Akamai, and EdgeCast, Jeff has designed, produced, onboarded, and deployed solutions for clients such as Twitter, Disney/ESPN, and PayPal.

 

In a time when instant responses and seamless digital experiences are standard, any business that delivers an online service must ensure performance and reliability everywhere users are.

Whether you’re powering a global AI service, streaming video worldwide, or scaling gaming infrastructure across regions, deployment agility matters now more than ever. Traditional manual setups can’t keep pace with this demand because they are time-intensive, error-prone, and hard to reproduce consistently across environments.

Infrastructure as Code (IaC) rises above traditional deployments by turning your infrastructure into versioned, repeatable configuration files. Instead of clicking through a console or manually typing numerous commands, IaC lets you declare networking, compute, and security resources once and then deploy them anywhere in the world with consistency and confidence.

It unlocks traceability and automation that match the speed of modern application development to accelerate delivery, improve reliability, and reduce operational risks for teams managing distributed workloads. On a hyperconnected platform like Zenlayer, with presence in hundreds of cities and edge locations, IaC gives you the ability to scale and adapt infrastructure globally with a clean, predictable blueprint.

In Part 1 of this series, we will automate the deployment of a high-availability web cluster on Zenlayer Elastic Compute (ZEC) using Terraform. By the end of this guide, you will have a fully provisioned Virtual Private Cloud (VPC), secure compute nodes, and a public Load Balancer distributing traffic—all deployed with a single command.

 

Architecture

We are building a clean “Greenfield” environment in the Los Angeles (na-west-1) region.

  • Network: A dedicated VPC (10.0.0.0/16) and subnet
  • Compute: Two Ubuntu 20.04 instances using the z2a.cpu.1 standard type
  • Security: A “Deny All” firewall policy that strictly permits only SSH (22) and HTTP (80) traffic
  • Traffic Management: A TCP Load Balancer acting as the public entry point for our application

For this implementation, we will be using the Zenlayer Terraform Provider registry.

 

Implementation

Step 1 – Provider and Network Foundation

Every solid infrastructure starts with the network. Let’s configure the provider and define a VPC and subnet in our target region.

terraform {
  required_providers {
    zenlayercloud = {
      source = "zenlayer/zenlayercloud"
    }
  }
}

provider "zenlayercloud" {}

# 1. NETWORK
resource "zenlayercloud_zec_vpc" "test_vpc" {
  name       = "terraform-zec-vpc"
  cidr_block = "10.0.0.0/16"
}

resource "zenlayercloud_zec_subnet" "test_subnet" {
  vpc_id     = zenlayercloud_zec_vpc.test_vpc.id
  cidr_block = "10.0.1.0/24"
  name       = "terraform-zec-subnet"
  region_id  = "na-west-1"
}

 

Step 2 – Security Groups

For Elastic Compute (ZEC), we use the rule_set resource. This allows us to define our ingress policies atomically, ensuring the security group is fully configured with the correct rules the moment it is created.

# 2. SECURITY GROUP
resource "zenlayercloud_zec_security_group" "web_sg" {
  name = "terraform-zec-sg"
}

resource "zenlayercloud_zec_security_group_rule_set" "web_rules" {
  security_group_id = zenlayercloud_zec_security_group.web_sg.id

  # Allow SSH
  ingress {
    policy     = "accept"
    port       = "22"
    protocol   = "tcp"
    cidr_block = "0.0.0.0/0"
    priority   = 1
  }

  # Allow HTTP
  ingress {
    policy     = "accept"
    port       = "80"
    protocol   = "tcp"
    cidr_block = "0.0.0.0/0"
    priority   = 1
  }
}
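The two ingress rules above are identical apart from the port. If your list of allowed ports grows, Terraform's built-in dynamic blocks can generate the rules from a single list instead. Here is a sketch that assumes the same provider schema shown above:

```hcl
# Alternative: generate the ingress rules from a list of ports
resource "zenlayercloud_zec_security_group_rule_set" "web_rules_dry" {
  security_group_id = zenlayercloud_zec_security_group.web_sg.id

  dynamic "ingress" {
    for_each = [22, 80]
    content {
      policy     = "accept"
      port       = tostring(ingress.value)
      protocol   = "tcp"
      cidr_block = "0.0.0.0/0"
      priority   = 1
    }
  }
}
```

Opening a new port then becomes a one-item change to the for_each list rather than a copy-pasted block.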

 

Step 3 – Dynamic Compute Provisioning

Hardcoding image IDs can be brittle because they change over time. Instead, I used a data source to query the Zenlayer API for the current valid ID for “Ubuntu 20.04” in our availability zone. The data source returns a list of all images matching our regex; we then grab the first result in that list for the image_id. You could hardcode a specific ID in the image_id field if you prefer.

# 0. DATA SOURCES
data "zenlayercloud_zec_images" "ubuntu" {
  image_name_regex  = "^Ubuntu Server 20.04"
  availability_zone = "na-west-1a"
}

# 3. COMPUTE
resource "zenlayercloud_zec_instance" "web_nodes" {
  count             = 2
  instance_name     = "tf-zec-node-${count.index + 1}"
  instance_type     = "z2a.cpu.1"
  image_id          = data.zenlayercloud_zec_images.ubuntu.images[0].id
  subnet_id         = zenlayercloud_zec_subnet.test_subnet.id
  security_group_id = zenlayercloud_zec_security_group.web_sg.id
  availability_zone = "na-west-1a"
  password          = "ZenTest!2025"
  system_disk_size  = 40
  depends_on        = [zenlayercloud_zec_security_group_rule_set.web_rules]
}
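One caveat worth flagging: the password above is hardcoded in the configuration, so it ends up in version control and in the Terraform state in plain view. A common refinement is to pass it in as a sensitive input variable instead. This is a sketch, and the variable name is illustrative:

```hcl
variable "node_password" {
  description = "Root password for the ZEC web nodes"
  type        = string
  sensitive   = true
}

# In the instance resource, replace the hardcoded value with:
#   password = var.node_password
```

The value can then be supplied through terraform.tfvars or the TF_VAR_node_password environment variable, keeping the secret out of the committed configuration.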

 

Step 4 – Load Balancing

Finally, we place a load balancer in front of our private instances. The load balancer distributes incoming network traffic across multiple backend servers, while a listener is the configuration within the load balancer that accepts connections. In our case, the listener accepts TCP connections on port 80, using full network address translation (FNAT) forwarding mode and Maglev hashing (mh) as the scheduling algorithm. A key detail here is the listener ID: the Zenlayer Terraform provider returns a composite ID (e.g., lb-xxx:listener-yyy), but the backend attachment resource expects only the listener ID (the second part), so the standard approach is to use Terraform’s split function to isolate it.

# 4. LOAD BALANCER
resource "zenlayercloud_zlb_instance" "web_lb" {
  zlb_name   = "tf-zec-lb"
  region_id  = "na-west-1"
  vpc_id     = zenlayercloud_zec_vpc.test_vpc.id
  depends_on = [zenlayercloud_zec_subnet.test_subnet]
}

resource "zenlayercloud_zlb_listener" "http" {
  zlb_id               = zenlayercloud_zlb_instance.web_lb.id
  listener_name        = "http-80"
  protocol             = "TCP"
  port                 = 80
  health_check_enabled = true
  health_check_type    = "TCP"
  scheduler            = "mh"
  kind                 = "FNAT"
}

resource "zenlayercloud_zlb_backend" "attach_vms" {
  count       = 2
  zlb_id      = zenlayercloud_zlb_instance.web_lb.id
  listener_id = split(":", zenlayercloud_zlb_listener.http.id)[1]

  backends {
    instance_id        = zenlayercloud_zec_instance.web_nodes[count.index].id
    private_ip_address = tolist(zenlayercloud_zec_instance.web_nodes[count.index].private_ip_addresses)[0]
    port               = 80
  }
}
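If the listener ID is needed in more than one place, the split expression can be factored into a local value so the composite-ID handling lives in a single spot. A small sketch based on the resources already defined:

```hcl
locals {
  # Extract "listener-yyy" from the composite "lb-xxx:listener-yyy" ID
  http_listener_id = split(":", zenlayercloud_zlb_listener.http.id)[1]
}

# The backend attachment can then reference:
#   listener_id = local.http_listener_id
```

This keeps the provider quirk documented in one place instead of repeated across every resource that consumes the listener ID.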

 

Step 5 – Outputs

We define outputs so that Terraform prints exactly the IPs we need once the build finishes.

# 5. OUTPUTS
output "load_balancer_ip" {
  value = zenlayercloud_zlb_instance.web_lb.public_ip_addresses
}

output "vm_public_ips" {
  value = zenlayercloud_zec_instance.web_nodes[*].public_ip_addresses
}

 

Outcome

Running ‘terraform plan’ and then ‘terraform apply’ results in a clean deployment of all resources:

Apply complete! Resources: 10 added, 0 changed, 0 destroyed

 

While those changes were being applied, we could see the instances being created in the console.

We now have a robust, production-ready infrastructure foundation. However, our servers are currently blank slates—they aren’t serving any content yet.

You can find the complete, ready-to-deploy Terraform blueprint for this project on my GitHub: Zenlayer IaC Starter Repository

 

Coming soon

In Part 2, we will shift from infrastructure to configuration. We will use Ansible to automatically connect to these new instances, install Nginx, and deploy our web application.

Leading global network monitoring provider standardizes and simplifies operations https://www.zenlayer.com/blog/global-network-monitoring-provider-standardizes-and-simplifies-operations/ Sat, 21 Feb 2026 21:58:38 +0000 https://www.zenlayer.com/?p=28863 Key points

  • Expanded monitoring capabilities across six US locations through phased migrations using managed hosting, while maintaining service continuity and accuracy
  • Simplified internet connectivity operations by using Dedicated Internet Access (DIA) delivered and managed through a single operational partner
  • Established a standardized infrastructure foundation that supports predictable scaling and streamlined operations

 

About the customer

Industry: Network monitoring
Needs: Edge compute, global connectivity, vendor consolidation, network diversification
Zenlayer services used: Managed Hosting, IP Transit (Dedicated Internet Access)

The customer is a global network monitoring service provider that helps Fortune 500 companies and other large organizations measure internet performance, availability, and connectivity quality across regions. Their platform relies on geographically distributed infrastructure to deliver accurate, location-specific insights for customers worldwide.

They had previously worked with us to support Asia-Pacific deployments across China, Tokyo, Osaka, Taipei, Hong Kong, and Kuala Lumpur, where our deep regional expertise, connectivity coverage and reliability, and ongoing operational support helped establish a strong working relationship. That prior collaboration prompted the customer to continue scaling their infrastructure with Zenlayer.

 

The challenge: simplifying multi-location operations for US monitoring sites

As the network monitoring platform scaled its US services, managing infrastructure at scale grew ever more complex. Each monitoring location needed reliable servers, consistent internet connectivity, and access to multiple carriers to ensure accurate performance measurement. Coordinating these requirements across a multitude of vendors and regions heavily strained overall operations.

In addition, all server deployments had to meet internal hardware lifecycle policies and needed to be brought online quickly without disrupting existing monitoring services. Balancing operational complexity and cost pressures became increasingly challenging as the platform scaled.

 

Deploying and migrating monitoring infrastructure across six US locations

To support the customer’s operational scale, we migrated monitoring infrastructure across six US metropolitan locations within the same year in two phases.

We first migrated locations in Miami, Los Angeles, and Seattle to enable the customer to transition monitoring environments in stages without disruption to service continuity. Building on that momentum, we then supported migrations in Chicago, New York, and San Jose, which involved transitioning infrastructure from Equinix, the customer’s legacy provider.

As these environments required more complex routing and careful migration planning, we worked closely with the customer to define cutover timelines, validate configurations, and sequence migrations to ensure monitoring accuracy and service continuity throughout the transition.

Throughout this process, we stayed flexible and responsive to the customer’s evolving needs by adjusting our proposals and service models to align with their priorities across cost, operations, and vendor strategy.

 

Consolidating connectivity via managed hosting and Dedicated Internet Access

At each location, we provided managed hosting paired with Dedicated Internet Access (DIA) and took ownership of coordinating 7-8 DIA carriers per site, consolidating the customer’s infrastructure and connectivity through a single operational partner.

Instead of managing a large number of carrier relationships and escalation paths, the customer can now rely on us as a single point of contact for servers, ports, circuits, and internet performance. Whenever issues like packet loss, routing instability, or site-level outages occur, we investigate and resolve them end to end, significantly reducing troubleshooting time and the customer’s operational burden.

 

Optimizing infrastructure for cost, hardware standards, and ongoing operations

To ensure consistency throughout deployments, we aligned our infrastructure with the customer’s internal hardware lifecycle and performance standards across all locations. This included standardizing configurations and expanding available server options to meet technical requirements while reducing variation between monitoring sites.

Additionally, we refined our infrastructure design to improve energy and space efficiency, enabling more competitive pricing without compromising performance or hardware standards.

As a result, ongoing maintenance is simpler and more predictable, and the customer has a repeatable model for future expansion with reduced operational friction across regions.

 

Looking ahead

With six US locations live and a standardized infrastructure foundation in place, the network monitoring platform is well prepared to expand monitoring capabilities further across the globe. As they do so, we will continue to provide ongoing support as they grow into additional regions and address increasingly complex monitoring requirements around the world.

 

Streamline global operations with Zenlayer

We leveraged our deep regional expertise, standardized infrastructure, and end-to-end operational support to help the customer scale its global network monitoring platform with confidence.

If you’re looking to expand your company’s global footprint, improve experiences for users, or accelerate apps or services, talk to a Zenlayer solution expert today.

For the fastest service, check out zenConsole — our self-service platform that lets you deploy around the world in minutes.

 

Zenlayer @ PTC’26 – Introducing Unified Connection Experience https://www.zenlayer.com/blog/zenlayer-ptc26-introducing-unified-connection-experience/ Tue, 20 Jan 2026 19:57:10 +0000 https://www.zenlayer.com/?p=28549 At PTC 2026 in Honolulu, we introduced Unified Connection Experience (UCE), a new digital workflow designed to simplify and accelerate data center interconnection across a multi-provider ecosystem.

As enterprises increasingly build hybrid and distributed architectures across multiple data center platforms, interconnection has traditionally involved fragmented design processes, multiple management portals, and manual, offline coordination to complete end-to-end virtual circuits. These steps often extend deployment timelines from days to weeks.

UCE addresses this challenge by bringing design, provisioning, and activation into a single, online, self-service workflow, dramatically reducing complexity and deployment times.

 

 

Unified Connection Experience – fast, online, self-service

With UCE, customers can now deploy hybrid data center connections in approximately 30 minutes, compared to the traditional 1–2 week provisioning cycle.

For example, connecting a data center at Global Switch to Equinix TY9 through Zenlayer previously required upfront research, manual coordination, and offline provisioning. With UCE, the process is fully digital:

  • Instant design and compatibility validation
    Customers can validate port mapping, circuit availability, and high-availability designs directly through Zenlayer and Equinix consoles.
  • Online purchase and activation via zenConsole
    Zenlayer ports and Private Connect links—such as Zenlayer @ Global Switch → Zenlayer–Equinix NNI @ SG1—can be provisioned instantly.
  • Automatic NNI token generation
    zenConsole automatically generates the required Zenlayer–Equinix Fabric NNI token, eliminating manual requests and back-and-forth coordination.
  • Seamless completion on Equinix Fabric
    Customers activate their Equinix Fabric port and create the virtual connection through the Equinix Fabric portal to complete the end-to-end path, such as Equinix TY9 → Zenlayer–Equinix NNI @ SG1.
  • On-demand bandwidth scaling
    Both Zenlayer and Equinix now support fully online bandwidth adjustments, with no offline orders required.

 

The result is a clean, digital-first interconnection experience that reduces operational friction, accelerates deployment, and minimizes configuration risk.

 

 

Why it matters for AI and distributed cloud architectures

As AI workloads and distributed cloud architectures continue to expand across regions and providers, enterprises need interconnections that operate with the same speed and agility as modern compute platforms.

UCE enables interconnection to function more like cloud infrastructure—designed, deployed, and scaled on demand.

For organizations supporting latency-sensitive AI inference, global SaaS platforms, or multi-region customer experiences, UCE delivers:

  • Faster deployment cycles
  • Greater operational consistency across providers
  • Reduced risk of configuration mismatch
  • A unified, modern provisioning experience

 

 

What’s next: Integrated Fabric View

Looking ahead, we’re collaborating with Equinix on the next phase of integration: Integrated Fabric View.

This upcoming enhancement will allow customers to:

  • View Zenlayer- and Equinix-supported points of presence directly within Equinix Fabric
  • Purchase ports for both Equinix and non-Equinix endpoints through a single interface
  • Order true end-to-end virtual circuits without switching platforms

 

Integrated Fabric View further reduces operational steps and moves the industry closer to a one-pane-of-glass interconnection experience.

 

 

Building a more unified global interconnection model

PTC has long been a forum where major shifts in global connectivity take shape. With the introduction of UCE, we’re continuing to evolve our global infrastructure to meet the needs of distributed cloud and AI-driven architectures.

UCE is available today, providing customers with an immediate upgrade in how they deploy and manage hybrid data center connections. Integrated Fabric View will follow, extending this simplicity even further.

Step by step, we’re building a more unified global interconnection model, one that enables customers to deploy anywhere, connect with confidence, and scale with ease.

]]>
Network Next cuts multiplayer latency by up to 80% with Zenlayer https://www.zenlayer.com/blog/network-next-cuts-multiplayer-latency-by-up-to-80-with-zenlayer/ Wed, 19 Nov 2025 08:00:30 +0000 https://www.zenlayer.com/?p=27935 Key points

  • Deployed low-latency infrastructure in key metros to improve gaming experience in underserved regions during a major launch
  • Improved upstream connectivity using Zenlayer IP Transit, accelerating player traffic with lower jitter and packet loss by routing around congestion in challenging markets
  • Delivered targeted acceleration for only impacted players, minimizing overhead while improving match quality and responsiveness

 

About the customer

Industry: Gaming network optimization
Needs: Edge compute, cross-border networking, low latency, multiplayer traffic acceleration
Zenlayer services used: Bare Metal, IP Transit

The customer, Network Next, is a next-generation traffic acceleration platform purpose-built for multiplayer games. It combines a lightweight SDK, a dynamic control plane, and a global relay network to intelligently identify and reroute players experiencing degraded connectivity, improving game performance while minimizing overhead.

SLOCLAP, an independent game studio known for high-action multiplayer titles, partnered with Network Next to resolve latency challenges ahead of a major launch.

 

The challenge: launching a latency-sensitive game across unpredictable networks

As SLOCLAP prepared for the Season 1 launch of their new multiplayer football title, REMATCH, they experienced severe latency issues affecting players across Latin America, Europe, and the Middle East. Many players were seeing latencies of 200-300 ms or more, undermining real-time gameplay and hurting user retention.

Traditional cloud and bare metal hosting solutions were either too expensive or lacked the network visibility needed to address the root of the problem. Public networks were often congested or misrouted, and even major providers couldn’t guarantee consistent performance across regions.

As their solution provider, Network Next needed global infrastructure that was both flexible and high-performance with diverse upstream connectivity and edge locations in particularly latency-sensitive markets.

 

Building infrastructure where latency is most severe

To support its real-time acceleration platform, Network Next deployed a dedicated instance of its backend and relay mesh for SLOCLAP’s game using Zenlayer Bare Metal and Elastic Compute in key locations like Istanbul, Dallas, and Los Angeles.

After benchmarking performance across multiple local providers, Network Next found that Zenlayer’s infrastructure consistently delivered lower latency and more stable connections.

With instant provisioning, global edge availability, and cost-effective scalability, we helped Network Next launch quickly and position relays where they were needed to improve gameplay, particularly in underserved regions where performance issues were most prominent.

This agility helped Network Next launch on time and deliver immediate improvements in gaming performance for the REMATCH free-to-play weekend in September, all with less than a week’s notice.

 

Optimizing real-time traffic across global paths

Our ultra-low latency infrastructure was critical to improving both long-haul and regional routes.

Network Next’s relay nodes deployed on our servers continuously ping one another to build a dynamic cost matrix that measures real-time performance across all paths. The platform’s control plane then uses this data to reroute player traffic only when necessary, often steering it through Zenlayer routes with lower latency, reduced jitter and packet loss, and stronger connectivity to key internet exchanges.
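To make the cost-matrix idea concrete, here is a minimal sketch in Python. It is an illustration only — the function names, data shapes, and the 10 ms gain threshold are hypothetical, not Network Next’s actual implementation. Relay-to-relay ping measurements feed a cost matrix, a shortest-path search finds the best relay route, and traffic is rerouted only when that route beats the direct internet path by a meaningful margin:

```python
import heapq

def best_relay_path(rtt, src, dst, direct_rtt, min_gain_ms=10.0):
    """Find the lowest-latency path from src to dst through the relay mesh.

    rtt: dict mapping (a, b) -> measured latency in ms, i.e. the "cost
         matrix" built from continuous relay-to-relay pings.
    Returns (path, cost) only if the relay route beats the direct internet
    route by at least min_gain_ms; otherwise (None, direct_rtt), meaning
    the player's traffic stays on the default path.
    """
    # Build an adjacency list from the cost matrix
    adj = {}
    for (a, b), ms in rtt.items():
        adj.setdefault(a, []).append((b, ms))

    # Dijkstra's shortest path over measured latencies
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, ms in adj.get(node, []):
            nd = d + ms
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, node
                heapq.heappush(pq, (nd, nxt))

    cost = dist.get(dst, float("inf"))
    if direct_rtt - cost < min_gain_ms:
        return None, direct_rtt  # direct path is good enough; don't reroute

    # Reconstruct the winning relay path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], cost
```

In a real mesh the cost matrix is refreshed continuously, so the decision naturally adapts as congestion appears and clears.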

Leveraging our high-quality IP Transit that provides direct, resilient access to local and international carriers across our backbone and partner networks, Network Next was able to ensure stronger upstream connectivity and fewer congested handoff points, especially in regions with limited transit options.

In Turkey, as an example, this setup cut latency from 175-300 ms to approximately 50 ms by routing players to Gcore Istanbul through our Istanbul edge node. Similarly, we helped accelerate traffic between Mexico and US locations like Los Angeles and Dallas, where our network consistently outperformed other providers.

“We work with many different suppliers in many cities around the world. Consistently, we find Zenlayer bare metal and cloud is significantly lower latency for many players. Their proximity to players, particularly in underserved regions, helps us accelerate those experiencing the most serious performance issues.”

Glenn Fiedler, CEO, Network Next

Our consistent network performance across diverse geographies enables Network Next to deliver real-time improvements to players who need them most, elevating match quality and overall player experience in key markets with latency reductions of 200-300+ ms in multiple target regions.

 

Accelerating performance without burdening the rest

Traditional networking overlays often attempt to improve performance by routing all game traffic through fixed relays or predefined paths, regardless of whether players are experiencing issues. Although this method offers some improvement, it also introduces unnecessary latency for unaffected users and compounds infrastructure costs without addressing the root issue.

Network Next, instead, efficiently targets only the 10–20% of players who are actively experiencing degraded performance. This selective acceleration is enabled by a lightweight SDK that runs on both the client and server, paired with a control plane that evaluates routing decisions every 10 seconds.

Diagram (courtesy of Network Next): accelerated traffic is sent across relays between the client and server
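The selective model described above can be sketched as a simple per-player decision loop. The thresholds and structure below are hypothetical illustrations, not Network Next’s SDK — the point is that only players exceeding a quality threshold get rerouted, while everyone else stays on the default path:

```python
from dataclasses import dataclass

@dataclass
class PlayerStats:
    latency_ms: float    # measured round-trip time to the game server
    jitter_ms: float     # variation in latency between packets
    packet_loss: float   # fraction of packets lost, 0.0-1.0

def should_accelerate(stats, max_latency_ms=100.0, max_jitter_ms=20.0,
                      max_loss=0.01):
    """Decide, per player, whether traffic should go through relays.

    Thresholds here are illustrative. A player is accelerated only if
    any quality metric is degraded; healthy connections are untouched.
    """
    return (stats.latency_ms > max_latency_ms
            or stats.jitter_ms > max_jitter_ms
            or stats.packet_loss > max_loss)

def select_players(all_stats):
    """Evaluate every connected player (e.g. on a 10-second timer) and
    return the subset whose traffic should be accelerated."""
    return {pid for pid, s in all_stats.items() if should_accelerate(s)}
```

Run every 10 seconds against live stats, a loop like this keeps the accelerated set small — roughly the 10–20% of players actually experiencing problems — which is what keeps overhead and cost down.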

With our massively distributed infrastructure, rapid provisioning, and reliable performance across edge locations, Network Next was able to scale targeted acceleration efficiently, minimizing costs while maximizing impact.

By accelerating only the players who needed help, Network Next reduced network overhead while improving match responsiveness, resulting in smoother, fairer multiplayer experiences worldwide with fewer complaints, less frustration, and a scalable solution ready for future deployments.

 

Looking ahead

As more game developers look to improve global multiplayer performance, Network Next is ready to expand its acceleration services to new titles and markets. We’ll continue supporting their gaming network optimization platform with rapid relay deployments and low-latency infrastructure to deliver better network performance for players who need it most.

 

Improve player experiences with Zenlayer

Zenlayer’s premium IP transit, high-performance bare metal, and diverse network blends helped Network Next significantly cut latency and improve experiences for REMATCH players around the world — and we can help you do the same.

If you’re looking to expand your company’s global footprint, improve experiences for users, or accelerate apps or services, talk to a Zenlayer solution expert today.

For the fastest service, check out zenConsole — our self-service platform that lets you deploy around the world in minutes.

 

]]>
Zenlayer Launches AI Gateway to Simplify Global Access to Large Language Models https://www.zenlayer.com/blog/zenlayer-launches-ai-gateway-to-simplify-global-access-to-large-language-models/ Tue, 28 Oct 2025 17:00:14 +0000 https://www.zenlayer.com/?p=27685

New service enables unified, low-latency, and intelligent access to AI models worldwide

 

LOS ANGELES — Oct 28, 2025 — Zenlayer, the world’s first hyperconnected cloud, today announced the launch of Zenlayer AI Gateway, an intelligent API service that allows AI developers, researchers, and enterprises to access, integrate, and manage multiple large language models* (LLMs) and AI services globally through a single interface.

As AI applications boom, developers face rising complexity integrating diverse models, APIs, and data sources across regions. Model providers vary in interfaces, latency, and cost, creating silos that slow innovation. Zenlayer AI Gateway provides a unified entry point to world-class models, image/video generation APIs, and Zenlayer’s distributed inference platform.

Built on Zenlayer’s global private network and 300+ edge nodes, it delivers ultra-low latency access to mainstream AI (ChatGPT, Claude, Sora, DeepSeek, etc.) and custom models, intelligently routing requests by location, load, and response time for optimal performance and efficiency. Mature models can go live the same day, with multi-provider aggregation ensuring high availability and seamless failover through a single Zenlayer account.

“Zenlayer AI Gateway breaks down barriers between models, regions, and providers, giving developers a seamless entry point to the world’s best AI resources,” said Joe Zhu, Founder & CEO of Zenlayer. “It’s how we turn connectivity into intelligence and help power the future of global AI.”

Early adopters are already seeing results. An AI emotional-interaction game developer used Zenlayer AI Gateway to unify regional LLM access, connecting users to matched models in real time. Leveraging Zenlayer’s acceleration and multi-model integration, they reduced developer workload by 30%, latency by 50%, and costs by 20%, delivering faster, more inclusive AI gaming experiences worldwide.

Zenlayer AI Gateway is a powerful addition to Zenlayer’s AI-ready service suite and is now available for self-service with simple, token-based pay-as-you-go pricing. The service will continue expanding support for emerging models and governance features, helping developers and businesses orchestrate complex AI more easily.

 

*Model availability and access depend on each provider’s policies, terms of use, and local regulations.

 

About Zenlayer

Zenlayer is the hyperconnected cloud that powers AI experiences through distributed inference and global connectivity. Businesses utilize Zenlayer’s on-demand compute and networking to deploy and run applications at the edge. With 300+ edge nodes across 50 countries, 180+ Tbps of global bandwidth, and 10,000+ direct cloud and network connections, Zenlayer helps businesses reach 85% of the internet population within 25 ms.

For more information, visit www.zenlayer.com and connect with us on LinkedIn.

]]>
Zenlayer Launches Distributed Inference to Power AI Deployment at Global Scale https://www.zenlayer.com/blog/zenlayer-launches-distributed-inference-to-power-ai-deployment-at-global-scale/ Thu, 09 Oct 2025 16:28:00 +0000 https://www.zenlayer.com/?p=27610

– Driving the next wave of AI innovation through high-performance inference at the edge

 

SINGAPORE — Oct. 9, 2025 – Zenlayer, the world’s first hyperconnected cloud, today announced the launch of Zenlayer Distributed Inference, a one-stop, instant-deployment platform built to power high-performance AI inference at massive global scale, at Tech Week – Cloud & AI Infra Show in Singapore.

As AI applications proliferate globally, two challenges still limit scalability: uneven workloads that leave costly GPUs idle, wasting investment and causing unpredictable inference times, and the complexity of orchestrating models and resources across regions, which creates latency gaps and inconsistent performance.

Zenlayer’s Distributed Inference directly addresses these issues, integrating Zenlayer’s globally distributed compute infrastructure with optimization techniques spanning scheduling, routing, networking, and memory management to maximize edge performance. With broad model support, ready-to-use frameworks, and real-time monitoring, it streamlines operations and accelerates deployment, making it easier than ever to scale inference globally.

“Inference is where AI delivers real value, but it’s also where efficiency and performance challenges become increasingly visible,” said Joe Zhu, Founder & CEO of Zenlayer. “By combining our hyperconnected infrastructure with distributed inference technology, we’re making it possible for AI providers and enterprises to deploy and scale models instantly, globally, and cost-effectively.”

What sets Zenlayer apart is that, instead of requiring customers to manage infrastructure or integrate low-level optimizations, it provides elastic GPU access, automated orchestration across 300+ PoPs, and a private backbone that reduces latency by up to 40%. The result is simple, scalable, real-time inference delivered closer to end users—letting organizations focus on building while Zenlayer handles global deployment complexities.

As AI continues to reshape industries, delivering real-time intelligence globally will be essential. Zenlayer Distributed Inference marks a major step toward making that a reality. Additionally, Zenlayer is expanding its portfolio of AI-ready services to unlock the full potential of AI at the edge.

 

 

About Zenlayer

Zenlayer is the hyperconnected cloud enabling high-speed, efficient, reliable data movement for AI on a globally distributed compute platform. Businesses utilize Zenlayer’s on-demand compute and networking to deploy and run applications at the edge. With 300+ edge nodes across 50 countries, 180+ Tbps of global bandwidth, and 10,000+ direct connections to network and cloud providers, Zenlayer helps businesses reach 85% of the internet population within 25 ms.

For more information, visit www.zenlayer.com and connect with us on LinkedIn.

]]>
US iGaming innovator expands virtual casinos to Latin America https://www.zenlayer.com/blog/us-igaming-innovator-expands-virtual-casinos-to-latin-america/ Wed, 08 Oct 2025 23:41:38 +0000 https://www.zenlayer.com/?p=27607 Key points

  • Delivered seamless gameplay via edge compute in Bogotá, ensuring reliable performance and safeguarding player trust
  • Accelerated market entry through dedicated account management, engineering expertise, and 24/7 support under tight deadlines
  • Unlocked easy scalability in new markets with Tier 3 data center access, robust hardware, diverse upstreams, and proven regional execution

 

About the customer

Industry: iGaming
Needs: Edge compute, regional expertise, edge data center services
Zenlayer services used: Bare Metal

The customer, a US-based startup in the iGaming industry, wanted to expand their virtual casino platform into Latin America. As part of this initiative, they needed high-performance infrastructure in Bogotá that could support the demands of online casino gaming while ensuring reliability, compliance, and scalability.

 

The challenge: expanding into new markets with tight timelines and high expectations

As the iGaming innovator was preparing to launch virtual casinos in Latin America, they faced the complexities of entering a new market with unfamiliar infrastructure under tight deadlines.

More than just servers, they needed a trusted partner with diverse upstreams, enterprise-ready facilities, and comprehensive support to guide them through deployment and ensure lasting success in the region.

 

Ensuring reliability and uptime with edge compute

To ensure application responsiveness, we provisioned edge compute at our Bogotá data center, close to the customer’s players, to power their virtual casino games.

In iGaming, where real money is at stake, reliability is non-negotiable. Even brief outages can disrupt gameplay, slow betting responses, or create user distrust in the fairness of outcomes.

Hosting compute at the edge with diverse network upstreams ensures a stable, resilient foundation that safeguards gameplay integrity, helping the customer maintain retention and trust in the iGaming industry.

 

Accelerating outcomes with dedicated support

Beyond infrastructure, we also provided the customer with a dedicated team of account and project managers, solution engineers, and 24/7 support.

Expanding into a new market can be overwhelming, especially in regions where unfamiliar language, regulations, and local vendors present new challenges. Following our end-to-end solutioning philosophy, the customer had guidance from pre-deployment planning and facility tours to technical configuration and ongoing operations.

Our proactive engagement minimized friction and accelerated time-to-market, giving the customer the confidence needed to expand into a new region.

 

De-risking expansion with proven local expertise

To validate their decision, the customer toured our Bogotá data center and experienced firsthand our Tier 3 facilities, robust hardware, network redundancy, and overall capabilities. This tour reassured the iGaming innovator of our ability to meet strict uptime and performance standards critical for their platform.

Equally important was our rich regional knowledge and proven ability to execute quickly in Latin America. When faced with a tight deadline, our fast response times and local expertise enabled the customer to meet their launch goal without compromising on quality or compliance.

 

Looking ahead

Leveraging Zenlayer’s reliable, high-performance infrastructure and deep regional expertise, the US-based iGaming innovator now has a trusted partner for operations throughout Latin America and beyond. Ahead, we’ll continue to support the customer as they grow their footprint, ensuring their platform scales reliably to meet evolving market demands.

 

Expand into new markets with greater confidence

We leveraged our global edge compute, deep regional expertise, and end-to-end solutioning and support to help the customer launch reliable virtual casino platforms in Latin America with confidence.

If you’re looking to expand your business’s global footprint, streamline operations, or accelerate apps or services to improve user experience, talk to a Zenlayer solution expert today!

For instant provisioning, check out zenConsole — our self-service platform that lets you deploy around the world in minutes.

 

]]>