Announcing HAProxy Fusion 2.0

https://www.haproxy.com/blog/announcing-haproxy-fusion-2-0 — Mon, 16 Mar 2026

Today, we announce the release of HAProxy Fusion 2.0. This release marks a generational leap for the authoritative control plane that orchestrates HAProxy Enterprise’s high-performance application delivery and security. With a combination of new headliner features, structural changes, and improvements to the performance of the underlying API, HAProxy Fusion has jumped from version 1.3 to version 2.0.

HAProxy Fusion 2.0 enables modern security management, cloud-native deployment and service discovery, and numerous enhancements to automation, access management, and scalability that will propel HAProxy One — and the innovative applications that depend on it — into a new era.

New to HAProxy Fusion?

HAProxy Fusion provides full-lifecycle management, monitoring, and automation of multi-cluster, multi-cloud, and multi-team HAProxy Enterprise deployments. HAProxy Fusion combines a high-performance control plane with a modern GUI and API, enterprise administration, a comprehensive observability suite, and infrastructure integrations including AWS, Kubernetes, Consul, and Prometheus. 

Together, this flexible data plane, scalable control plane, and secure edge network form HAProxy One: the world’s fastest application delivery and security platform that is the G2 category leader in load balancing, API management, container networking, DDoS protection, and web application firewall (WAF). 

To learn more, contact our team for a demonstration.

What’s new in HAProxy Fusion 2.0

This release introduces significant enhancements to security, automation, and scalability, as well as support for HAProxy Enterprise load balancer versions 3.1 and 3.2.

Upgrade to HAProxy Fusion 2.0

When you're ready to start the upgrade process, please carefully read our HAProxy Fusion upgrade documentation (customer login required).

Modern security management

HAProxy Fusion 2.0 introduces a unified “security control plane” to orchestrate the multi-layered security capabilities in HAProxy Enterprise. This architecture combines the next-gen performance of HAProxy Enterprise’s security layers — powered by threat intelligence enhanced by machine learning — with a next-gen security UX. 

This powerful combination makes it simple to implement common security patterns (such as Web App and API Protection) or add edge security to complex traffic management solutions (such as Universal Mesh and Load Balancing as a Service (LBaaS)), while providing easy access to flexible building blocks and deep customization for those who need it.

Centralized security policy

HAProxy Fusion includes centralized security policy to orchestrate the multi-layered security capabilities of HAProxy Enterprise, in any environment or form factor, including: 

  • HAProxy Enterprise Bot Management Module, powered by the new Threat Detection Engine, which uses reputational and behavioral signals to accurately identify humans, verified bots (such as search engine and AI crawlers), and malicious bots; and detect and label complex and high-impact threats, including application layer DDoS attacks, brute force attacks, web scrapers, and vulnerability scanners.

  • HAProxy Enterprise WAF, powered by the Intelligent WAF Engine, which detects and mitigates application attacks, such as SQL injection, XSS, CSRF, and more.

  • HAProxy Enterprise’s security building blocks, such as the Global Profiling Engine (GPE), ACLs, CAPTCHA Module, allow-lists and deny-lists, and more.
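To give a flavor of these building blocks, here is a minimal sketch in plain HAProxy configuration of an ACL-driven deny-list (the certificate path, ACL file path, and backend name are hypothetical):

```haproxy
frontend fe_main
    bind :443 ssl crt /etc/haproxy/certs/site.pem   # hypothetical certificate path
    # Match clients listed in a deny-list file (one IP or CIDR per line)
    acl denied_src src -f /etc/haproxy/deny-list.acl
    http-request deny deny_status 403 if denied_src
    default_backend be_app
```

In HAProxy Fusion, equivalents of primitives like these are orchestrated centrally rather than edited by hand on each load balancer.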

Security Profiles

HAProxy Fusion makes centralized security policy fast and easy to deploy with “Security Profiles”. Security Profiles provide preset security policies that administrators can apply in just a few clicks to simplify configuration and secure traffic into new applications. 

HAProxy Fusion provides a default Security Profile to help administrators get started quickly. The default Security Profile includes intelligent presets suitable for common application types. Administrators can easily create customized Security Profiles, tailored to particular use cases, that can be reused or further customized as new use cases emerge.


Security Profiles in HAProxy Fusion 2.0

Threat-Response Matrix

HAProxy Fusion’s Security Profiles make it simple to create and customize full-spectrum security policies with an intuitive visual policy builder called the “Threat-Response Matrix”. Part of HAProxy Fusion’s modern web GUI, the Threat-Response Matrix enables administrators to orchestrate the multi-layered security capabilities in HAProxy Enterprise without requiring detailed knowledge of HAProxy’s configuration language or the underlying modules.

Using the Threat-Response Matrix, administrators can: 

  • combine Monitored Signals and Decisions, using a response framework based on simple thresholds and standard logical operators; 

  • view and apply a recommended Decision for each Monitored Signal (recommendations provided by HAProxy Fusion);

  • see a clear visual representation of how the Monitored Signals and Decisions are connected; 

  • see how a new Security Profile will affect real-time traffic in Learning Mode; 

  • seamlessly toggle between Learning Mode and Enforcement Mode when a Security Profile is ready for production traffic.


Threat-Response Matrix in HAProxy Fusion 2.0

Enhanced service discovery

HAProxy Fusion 2.0 introduces deep support for Consul Enterprise, including partitions, namespaces, and the key-value store. This enhanced service discovery natively understands complex Consul and Consul Enterprise architectures.

This release adds variable and map transformers, allowing users to extract specific Consul and Kubernetes metadata and map them directly to HAProxy configuration directives. This includes Consul tags and meta key-value pairs, and Kubernetes annotations, version tags, and canary labels.

Conditional automation also allows for logic-based configuration generation. These enhancements enable true multi-tenancy, allowing HAProxy Enterprise deployments to securely manage traffic for disparate teams across complex architectures.


Kubernetes and Consul service discovery in HAProxy Fusion 2.0

Native Kubernetes deployment

HAProxy Fusion 2.0 introduces the HAProxy Fusion Operator, which allows the control plane to be deployed natively inside Kubernetes clusters, as part of our broader Kubernetes solution. The HAProxy Fusion Operator deploys directly into your cluster via a manifest applied using kubectl.

The operator automates image configuration and orchestrates essential services. This fully provisions the control plane and its databases in under five minutes.

Full-lifecycle automation

HAProxy Fusion 2.0 introduces an official Terraform Provider and enhanced Ansible Playbook support specifically for managing HAProxy Fusion resources.

Administrators can now declare the desired state of their HAProxy Enterprise clusters, groups, and configurations. This enables granular configuration as code, effectively managing individual configuration objects like frontends and backends.

Zero-touch user provisioning

HAProxy Fusion 2.0 enables automatic role mapping. Administrators can configure HAProxy Fusion to read group claims from the OpenID Connect (OIDC) token and automatically assign the corresponding internal RBAC role.

This dynamically translates Identity Provider groups to HAProxy Fusion roles, automating onboarding and offboarding. This integration ensures users immediately have the correct permissions upon login.


Mapping HAProxy Fusion roles to OIDC roles in HAProxy Fusion 2.0

High-performance API and enhanced GUI

The new HAProxy Fusion API v2 is re-engineered for higher performance at scale. It is designed to handle hyperscale bursts without increasing latency. The API supports order-of-magnitude larger configurations and a significantly higher number of frontends and backends.

Additionally, the user interface has been reorganized to create a more intuitive workflow. Configuration fields are now logically grouped by section into tabs. Frontend and backend templates include tabs for general properties, performance and stability, traffic management, and security and advanced settings.

Extended product lifecycle

Starting with HAProxy Fusion 2.0, every release is now a Long-Term Support (LTS) version. This provides a standardized lifecycle of two years of active support followed by six months of migration support, during which customers will be guided by our support team to upgrade their infrastructure to the latest version before the end of the support period.

This extended commitment offers the stability and predictability enterprise teams need to plan infrastructure updates on their own terms and maximize the return on investment for each deployment.

Try HAProxy Fusion 2.0

If you haven’t tried the power of HAProxy Fusion, this is the perfect time to schedule a demo with our team. We’ll talk you through the basics of how to manage, observe, and automate your HAProxy Enterprise deployment, and show you how HAProxy Fusion 2.0 takes things to the next level, with modern security management, cloud-native deployment and service discovery, full-lifecycle automation, and zero-touch user provisioning. SecOps and DevOps teams — this one’s for you!

There has never been a better time to start using HAProxy Fusion. Request a demo or visit our documentation to begin your upgrade.

The post Announcing HAProxy Fusion 2.0 appeared first on HAProxy Technologies.
Streamlining your NIS2 and DORA compliance solution with HAProxy

https://www.haproxy.com/blog/dora-and-nis2-compliance-solution — Fri, 13 Mar 2026

The EU's cybersecurity mandate is in effect. Here’s how HAProxy supports your regulatory requirements.

With NIS2 and DORA now in effect, EU organizations face a fundamental shift in how they approach security. Compliance is now a standard that must be built into every layer of your environment, from your hardware and OS to your software configuration.

HAProxy alone doesn't make an organization compliant, yet it serves as a critical technical component of a strong security strategy. By providing multi-layered security at the application layer, HAProxy Enterprise helps teams meet the technical expectations of these mandates as part of their broader security infrastructure.

NIS2 vs DORA: what do they require?

The European Union introduced NIS2 and DORA with a clear mission: to protect the essential services that EU citizens rely on every day. From power grids and telecommunications to local governments and banking systems, these critical sectors are increasingly targeted by ransomware, DDoS attacks, and other sophisticated threats.

Both frameworks mandate that organizations raise their cybersecurity and operational resilience standards to ensure they remain secure, reliable, and operationally stable.

NIS2: strengthening cybersecurity across essential services

NIS2 (Network and Information Security Directive 2) builds upon the original NIS Directive, introducing stricter requirements and encompassing a broader range of sectors. 

It mandates that organizations implement appropriate and proportionate technical, operational, and organizational measures to manage cybersecurity risks.

In practice, this means:

  • Conducting regular risk assessments to find and fix vulnerabilities.

  • Establishing transparent processes for incident detection and reporting within strict deadlines.

  • Ensuring business continuity and recovery capabilities.

  • Managing supply chain security and implementing safeguards such as encryption and access control.

  • Making executive management accountable for security decisions.

NIS2 applies to critical entities in industries such as energy, transport, healthcare, digital infrastructure, public administration, food production, and manufacturing.

DORA: what financial institutions must prove

While NIS2 casts a wide net, the Digital Operational Resilience Act (DORA) focuses specifically on financial institutions and their technology providers.

DORA requires a comprehensive framework for operational resilience, ensuring financial entities can withstand, respond to, and recover from all information and communication technology (ICT) related incidents. To comply, organizations must:

  • Establish a formal framework for managing information and communication technology risks.

  • Implement continuous monitoring and threat intelligence.

  • Conduct advanced digital operational resilience testing.

  • Maintain robust incident classification and reporting.

  • Oversee and audit critical third-party providers.

This regulation affects banks, insurers, investment firms, payment processors, crypto-asset providers, and the cloud or information and communication technology vendors that serve them.

What are the penalties under NIS2 and DORA?

Non-compliance penalties under NIS2 and DORA are intentionally severe to ensure high-level accountability. 

The EU has established one of the most stringent accountability frameworks with the NIS2 and DORA regulations. Non-compliance with NIS2 can result in fines of up to €10 million or 2% of global turnover. Similarly, DORA imposes fines of up to 2% of global annual turnover, with the potential for daily penalties for continued non-compliance.

Summary of NIS2 and DORA requirements

Who it applies to

  • NIS2: Essential and important entities in sectors such as energy, healthcare, transport, public administration, digital infrastructure, and manufacturing.

  • DORA: Financial institutions and ICT third-party providers, including banks, insurers, investment firms, and crypto-asset service providers.

Management liability and sanctions

  • NIS2: Executives (CEOs, board members) can be personally liable for gross negligence; possible temporary bans from management roles; public disclosure of violations; mandatory audits and compliance orders. [1]

  • DORA: Management must maintain operational resilience and enforce oversight of ICT vendors; authorities can publish penalty details, including the nature of the breach and the responsible entities.

Enforcement focus

  • NIS2: Builds a culture of accountability from leadership downward, making cybersecurity a core business priority.

  • DORA: Promotes transparency and public trust by ensuring financial stability and visibility into regulatory actions.

Financial penalties

  • NIS2: Essential entities: up to €10 million or 2% of global turnover (whichever is higher); important entities: up to €7 million or 1.4% of global turnover. [1]

  • DORA: Up to 2% of global annual turnover; daily penalties possible for ongoing non-compliance. [2]

From legalese to technical reality: the implicit mandate for application security

The NIS2 Directive and DORA define high-level outcomes rather than specific toolsets: protect critical services, manage risk, detect and report incidents, and maintain evidence of your resilience. Because these goals inherently rely on the availability and integrity of your applications, strong Layer 7 controls become a vital component of your defense strategy.

In practice, a web application firewall is one of the most effective ways to implement visible, logging-rich security at the application layer. In the HAProxy Enterprise load balancer, the HAProxy Enterprise WAF sits alongside built-in application-layer DDoS protection and the HAProxy Enterprise Bot Management Module, allowing you to address exploits, floods, and automated abuse as a critical layer in your overall security infrastructure.

How HAProxy helps meet technical expectations

Building a resilient infrastructure requires a "defense-in-depth" approach that protects services thoroughly. Since many modern attacks target the application layer, organizations must consider how to mitigate these specific threats.

  • WAF protection. Application-layer protection including deep request inspection, policy enforcement, and detailed logs, supporting incident detection, reporting, and virtual patching. This functionality is critical for covering the full spectrum of application-layer risks.

  • Application-layer DDoS protection. Global rate limiting, surge handling, connection management, adaptive challenges, and circuit breakers help maintain availability during floods and abusive traffic patterns.

  • Bot management. Fast and flexible identification and categorization of bots, including unwanted crawlers and scripted attackers, so teams can block automated abuse (including brute force attacks, web scrapers, and vulnerability scanners) while preserving traffic from humans and verified crawlers.

What to consider for auditability

While the directives do not prescribe a specific product, and requirements vary by industry and nation, demonstrating operational resilience often requires evidence of robust technical controls. Capabilities that help demonstrate this resilience include:

  • WAF for exploit prevention and virtual patching.

  • Layer 7 DDoS protections for rate control and graceful degradation.

  • Bot management for detecting, classifying, and labeling a broad spectrum of complex, high-impact threats.

  • HAProxy Fusion for centralizing HAProxy logs and telemetry, aggregating real-time metrics and security logs for formal reporting and audit trails.

These capabilities provide practical coverage of application-layer risks, offering the data and visibility teams need to support their compliance assessments.
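As a concrete illustration of such application-layer controls, here is a minimal per-client rate-limiting sketch in plain HAProxy configuration (the threshold, window, paths, and backend name are illustrative assumptions, not recommendations):

```haproxy
frontend fe_web
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Track each source IP's HTTP request rate over a 10-second window
    stick-table type ip size 1m expire 10m store http_req_rate(10s)
    http-request track-sc0 src
    # Reject clients exceeding 100 requests per 10 seconds
    http-request deny deny_status 429 if { sc_http_req_rate(0) gt 100 }
    default_backend be_app
```

The deny action and the table contents are both visible in HAProxy's logs and runtime API, which is what makes controls like this useful as audit evidence.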

HAProxy Enterprise combines these capabilities within a single layer in the traffic path, delivering high performance and low complexity.

The compliance crossroads

Organizations now face a choice between two paths.

The traditional way: complexity through bolt-on solutions

Some teams opt to deploy a separate, standalone WAF appliance or cloud service. This can introduce new challenges, such as:

  • Added latency and network hops that slow down applications.

  • Integration conflicts between different vendors and architectures.

  • Higher total cost of ownership due to extra licenses, training, and monitoring tools.

  • Another potential point of failure in the data path.

The HAProxy way: simplicity through unification

Instead of stacking tools, a smarter path is to build security into the platform you already trust. That’s where the HAProxy Enterprise load balancer comes in.

HAProxy Enterprise: a key component of your compliance strategy

HAProxy Enterprise provides high-performance load balancing for TCP, UDP, QUIC, and HTTP-based applications, high availability, an API/AI gateway, Kubernetes application routing, SSL processing, DDoS protection, bot management, global rate limiting, and a next-generation WAF.

HAProxy Enterprise is built on the highly regarded open-source HAProxy, the most widely used software load balancer, ensuring exceptional performance, reliability, and flexibility. It enhances this core with ultra-low-latency security features and includes premier support.

HAProxy Enterprise benefits from full-lifecycle management, monitoring, and automation (provided by HAProxy Fusion), and next-generation security layers powered by threat intelligence from HAProxy Edge and enhanced by machine learning.

Resilience without compromise

Security often comes with trade-offs, but not here. HAProxy Enterprise integrates advanced defenses directly into the HAProxy instance you already have in the traffic path, helping to support the technical side of your business continuity plans.

  • Zero additional latency and low resource use: The WAF operates natively inside the load balancer, scanning and blocking malicious traffic inline, without increasing latency or CPU use.

  • Seamless upgrade path: Upgrading from HAProxy Community to HAProxy Enterprise requires no new hardware or network redesigns; your existing HAProxy configurations and automation continue to work seamlessly.

  • Consistent cross-environment protection: Deploy on-premises, in containers, or across multiple clouds with the same security posture everywhere.

The result is security at the speed of your business, providing robust technical controls that are invisible to users and simple for your teams to manage.

The smartest move is an upgrade

Both NIS2 and DORA demand that organizations prove their resilience. That means having the proper controls, visibility, encryption, risk management, and continuity built into the heart of your infrastructure.

With HAProxy Enterprise, you don’t need extra tools or added complexity. High availability, built right into our name, ensures your infrastructure performs reliably under heavy loads or security events. Strengthen your security posture through the same platform you trust for traffic delivery, now enhanced with enterprise-grade protection and observability to support your compliance efforts.

Ready to see it in action?

Have questions about how HAProxy fits into your compliance strategy?

Note on Compliance: HAProxy Enterprise provides robust security and observability to help organizations manage application-layer risks. However, achieving compliance with regulations such as NIS2 or DORA depends on a customer’s overall security infrastructure, operating environment, and the management of their broader risk program. The examples provided in this article are for illustrative purposes only and do not constitute a legal guarantee or a promise of compliance.

[1] NIS2 Fines & Consequences

[2] Final text of the Digital Operational Resilience Act (DORA)

The post Streamlining your NIS2 and DORA compliance solution with HAProxy appeared first on HAProxy Technologies.
Load balancing VMware Horizon's UDP and TCP traffic: a guide with HAProxy

https://www.haproxy.com/blog/load-balancing-vmware-horizons-udp-and-tcp — Fri, 27 Feb 2026

If you’ve worked with VMware Horizon (now Omnissa Horizon), you know it’s a common way for enterprise users to connect to remote desktops. But for IT engineers and DevOps teams? It’s a whole different story. Horizon’s custom protocols and complex connection requirements make load balancing a bit tricky.

With its recent sale to Omnissa, the technology hasn’t changed—but neither has the headache of managing it effectively. Let’s break down the problem and explain why Horizon can be such a beast to work with… and how HAProxy can help.

What Is Omnissa Horizon?

Horizon is a remote desktop solution that provides users with secure access to their desktops and applications from virtually anywhere. It is known for its performance, flexibility, and enterprise-level capabilities. Here’s how a typical Horizon session works:

1. Client Authentication: The client initiates a TCP connection to the server for authentication.

2. Server Response: The server responds with details about which backend server the client should connect to.

3. Session Establishment: The client establishes one TCP connection and two UDP connections to the designated backend server.

The problem? In order to maintain session integrity, all three connections must be routed to the same backend server. But Horizon’s protocol doesn’t make this easy. The custom protocol relies on a mix of TCP and UDP, which have fundamentally different characteristics, creating unique challenges for load balancing.

Why Load Balancing Omnissa Horizon Is So Difficult

The Multi-Connection Challenge

Since these connections belong to the same client session, they must route to the same backend server. A single misrouted connection can disrupt the entire session. For a load balancer, this is easier said than done.

The Problem with UDP

UDP is stateless, which means it doesn’t maintain any session information between the client and server. This is in stark contrast to TCP, which ensures state through its connection-oriented protocol. Horizon’s use of UDP complicates things further because:

  • There’s no built-in mechanism to track sessions.

  • Load balancers can’t use traditional stateful methods to ensure all connections from a client go to the same server.

  • Maintaining session stickiness for UDP typically requires workarounds that add complexity (like an external data source).

Traditional Load Balancing Falls Short

Most load balancers rely on session stickiness (or affinity) to route traffic consistently. In TCP, this is often achieved with in-memory client-server mappings, such as with HAProxy's stick tables feature. However, since UDP is stateless and doesn't track sessions like TCP does, stick tables do not support UDP. Keeping everything coordinated without explicit session tracking feels like solving a puzzle without all the pieces—and that’s where the frustration starts.

This is why Omnissa (formerly VMware) suggests using its “Unified Access Gateway” (UAG) appliance to handle the connections. While this makes one problem easier, it adds another layer of cost and complexity to your network. You may still need the UAG for a more comprehensive Omnissa deployment, but it would be great if there were a simpler, cleaner, and more efficient option.

This leaves engineers with a critical question: how do you achieve session stickiness for a stateless protocol? This is where HAProxy offers an elegant solution.

Enter HAProxy: A Stateless Approach to Stickiness

HAProxy’s balance source algorithm is the key to solving the Horizon multi-protocol challenge. This approach uses consistent hashing to achieve session stickiness without relying on stateful mechanisms like stick tables. From the documentation:

“The source IP address is hashed and divided by the total weight of the running servers to designate which server will receive the request. This ensures that the same client IP address will always reach the same server as long as no server goes down or up.”

Here’s how it works:

1. Hashing Client IP: HAProxy computes a hash of the client’s source IP address.

2. Mapping to Backend Servers: The hash is mapped to a specific backend server in the pool.

3. Consistency Across Connections: The same client IP will always map to the same backend server.

This deterministic, stateless approach ensures that all connections from a client—whether TCP or UDP—are routed to the same server, preserving session integrity.

Why Stateless Stickiness Works

The beauty of HAProxy’s solution lies in its simplicity and efficiency—it has low overhead, works for both protocols, and is tolerant of changes. Changes to the server pool may cause connections to rebalance, but those clients will be redirected consistently, as noted in the documentation:

“If the hash result changes due to the number of running servers changing, many clients will be directed to a different server.”

It is highly efficient because there is no need for in-memory storage or synchronization between load balancers. The same algorithm works seamlessly for both TCP and UDP.

This stateless method doesn’t just solve the problem; it does so elegantly, reducing complexity and improving reliability.

Implementing HAProxy for Omnissa Horizon

While the configuration is relatively straightforward, we will need the HAProxy Enterprise UDP Module to provide UDP load balancing. This module is included in HAProxy Enterprise, which adds additional enterprise functionality and ultra-low-latency security layers on top of our open-source core.

Implementation Overview

So, how easy is it to implement? Just a few lines of configuration will get you what you need. You start by defining your frontend and backend, and then add the “magic”:

1. Define Your Frontend and Backend: The frontend section handles incoming connections, while the backend defines how traffic is distributed to servers.

2. Enable Balance Source: The balance source directive ensures that HAProxy computes a hash of the client’s IP and maps it to a backend server.

3. Optimize Health Checks: Include the check keyword for backend servers to enable health checks. This ensures that only healthy servers receive traffic.

4. UDP Load Balancing: The UDP module in the enterprise edition is necessary for UDP load balancing, and uses the udp-lb keyword.

Here’s what a basic configuration might look like for the custom “Blast” protocol:
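A minimal sketch of such a configuration, assuming hypothetical server addresses and Blast's usual port 8443 (the Enterprise UDP Module's exact loading and section options may differ; consult the HAProxy Enterprise documentation):

```haproxy
frontend fe_blast_tcp
    mode tcp
    bind :8443
    default_backend be_blast_tcp

backend be_blast_tcp
    mode tcp
    balance source          # hash of the client source IP selects the server
    hash-type consistent    # minimize remapping when the pool changes
    server srv1 10.0.0.11:8443 check
    server srv2 10.0.0.12:8443 check

# UDP side (HAProxy Enterprise UDP Module): the same source-IP hashing
# keeps a client's TCP and UDP flows pinned to the same server.
udp-lb blast_udp
    dgram-bind :8443
    balance source
    server srv1 10.0.0.11:8443
    server srv2 10.0.0.12:8443
```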

This setup ensures that all incoming connections—whether TCP or UDP—are mapped to the same backend server based on the client’s IP address. The hash-type consistent option minimizes disruption during server pool changes.

This approach is elegant in its simplicity. We use minimal configuration, but we still get a solid approach to session stickiness. It is also incredibly performant, keeping memory usage and CPU demands low. Best of all, it is highly reliable, with consistent hashing ensuring stable session persistence, even when servers are added or removed.

Refined health tracking & balancing UAG

While the basic configuration above works well, there are a few refinements and adjustments that can be added for a more comprehensive solution. In production-grade Omnissa Horizon environments, HAProxy is typically deployed in front of Unified Access Gateways (UAGs) rather than directly in front of internal Connection Servers.

This architecture places HAProxy at the edge to manage incoming external traffic before it enters the DMZ, ensuring that UAGs (which act as hardened proxies for internal VDI operations) remain secure and performant. There are a few key refinements we can add for this production-ready setup:

Synchronized health tracking

While basic port checks verify network connectivity, they do not guarantee that the underlying Horizon application services are healthy. To solve this, use a dedicated health-check backend, such as be_uag_https, that specifically targets the /favicon.ico path. HAProxy can then verify that all relevant UAG and Connection Server services are fully functional, not just that the port is open.

Long-lived session persistence

Omnissa Horizon sessions are notably long-lived, with a default maximum duration of 10 hours. Standard load balancer timeouts are often too aggressive, potentially severing active virtual desktop connections during a typical workday. To ensure stability, HAProxy can be configured with extended timeout server and timeout client settings of 10 hours for all Blast and PCoIP backends. This aligns the load balancer’s persistence with the application’s session lifecycle, ensuring that even if a user is momentarily idle, their secondary protocols remain pinned to the correct UAG node.
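Concretely, the extended timeouts can be expressed as follows (a fragment only; the section names are hypothetical, and the directives would be repeated for each Blast and PCoIP frontend/backend):

```haproxy
frontend fe_blast_tcp
    timeout client 10h   # match Horizon's 10-hour default session maximum

backend be_blast_tcp
    timeout server 10h
```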

    Edge security and SSL bridging

    For external-facing deployments, HAProxy should serve as the first line of defense using advanced security features like WAF (Web Application Firewall) and Brute Force Detection on the initial authentication endpoints. This protects the environment from credential-stuffing and application-layer attacks before they ever reach the UAG. 

    Furthermore, because UAGs require end-to-end encryption for security, HAProxy should be configured for SSL Bridging. It is important to use the same SSL certificate on both the HAProxy virtual service and the UAG nodes.

    This is crucial because UAGs fingerprint the certificate presented on incoming requests: the certificate presented by the HAProxy load balancer and the certificate on the UAG's outside interface must be identical to prevent certificate mismatch errors during the session handoff between the primary authentication and secondary display protocols.
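A minimal SSL bridging sketch, assuming hypothetical file paths and addresses; the certificate loaded on the HAProxy bind line should be the same one installed on the UAG nodes' outside interface:

```haproxy
# SSL bridging sketch (paths and addresses are assumptions).
frontend fe_horizon
    mode http
    # Same certificate as the UAGs' outside interface.
    bind :443 ssl crt /etc/haproxy/certs/horizon.pem
    default_backend be_uag_https

backend be_uag_https
    mode http
    # Re-encrypt to the UAGs for end-to-end encryption.
    server uag1 10.0.0.11:443 ssl verify required ca-file /etc/haproxy/certs/uag-ca.pem check
    server uag2 10.0.0.12:443 ssl verify required ca-file /etc/haproxy/certs/uag-ca.pem check
```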

    Sample configuration with UAG load balancing & advanced health tracking

    In this refined setup, the be_uag_https backend does the heavy lifting. All other backends simply "watch" its status. See the Omnissa documentation for a full list of port requirements for the different services within Unified Access Gateway.
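The pattern described above can be sketched as follows; server names, addresses, and the Blast port are assumptions for this example, and the HTTPS backend performs the only real health check while the protocol backends track it:

```haproxy
# be_uag_https performs the application-level health check.
backend be_uag_https
    mode http
    option httpchk GET /favicon.ico
    http-check expect status 200
    server uag1 10.0.0.11:443 check ssl verify none
    server uag2 10.0.0.12:443 check ssl verify none

# The protocol backends send no probes of their own; they inherit
# the tracked servers' state via "track <backend>/<server>".
backend be_blast_tcp
    mode tcp
    balance source
    server uag1 10.0.0.11:8443 track be_uag_https/uag1
    server uag2 10.0.0.12:8443 track be_uag_https/uag2
```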

    Understanding the track Directive and Timing

    When you use the track keyword, the secondary servers inherit the state of the target. They don’t send their own health check packets, and their state changes are synchronized: if srv1 fails the favicon check, it is marked down for Blast TCP, Blast UDP, and PCoIP UDP at the same moment. 

    This prevents the "zombie session" issue. Without tracking, a user might be connected via TCP while their UDP media stream is hitting a dead server.

    This centralized tracking approach transforms your health checks from a series of fragmented probes into a unified "source of truth" for your infrastructure. By anchoring every protocol to a single HTTP health check, you eliminate the risk of partial failures: a server can no longer appear healthy for UDP while its TCP services are failing, and the client's entire session remains synchronized.

    It's a configuration that's both more robust and significantly lighter on your backend resources, providing the stability required for high-performance virtual desktop environments.

    Advanced Options in HAProxy 3.0+

    HAProxy 3.0 introduced enhancements that make this approach even better. It offers more granular control over hashing, allowing you to specify the hash key used for each server (e.g., its ID, its address, or its address and port). This is particularly useful where IP addresses may overlap or where different load balancers list the same servers in a different order.

    We can also include hash-balance-factor, which will help keep any individual server from being overloaded. From the documentation:

    “Specifying a "hash-balance-factor" for a server with "hash-type consistent" enables an algorithm that prevents any one server from getting too many requests at once, even if some hash buckets receive many more requests than others. 

    [...]

    If the first-choice server is disqualified, the algorithm will choose another server based on the request hash, until a server with additional capacity is found.”

    Finally, we can adjust the hash function used with the hash-type consistent option. This defaults to sdbm, but four functions are available, plus an optional none if you want to hash manually yourself. See the documentation for details on these functions.

    Sample configuration using advanced options:
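A hedged sketch of those options used together follows; the function choice and values are illustrative, not recommendations, and the backend details are assumptions:

```haproxy
backend be_blast_tcp
    mode tcp
    balance source
    # Consistent hashing with an explicit hash function (sdbm is the default).
    hash-type consistent djb2
    # Hash servers by address so differently ordered server lists
    # still map clients to the same nodes.
    hash-key addr
    # Cap any single server at roughly 150% of the average load.
    hash-balance-factor 150
    server uag1 10.0.0.11:8443 check
    server uag2 10.0.0.12:8443 check
```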

    These features improve flexibility and reduce the risk of uneven traffic distribution across backend servers.

    Coordination Without Coordination

    The genius of HAProxy’s solution lies in its statelessness. By relying on consistent hashing, it achieves an elegant result that many would assume requires complex session tracking or external databases. This approach is not only efficient but also scalable.

    The result? A system that feels like it’s maintaining state without actually doing so. It’s like a magician revealing their trick—it’s simpler than it looks, but still impressive.

    Understanding Omnissa Horizon’s challenges is half the battle. Implementing a solution can be surprisingly straightforward with HAProxy. You can ensure reliable load balancing for even the most complex protocols by leveraging stateless stickiness through consistent hashing.

    This setup not only solves the Horizon problem but also demonstrates the power of HAProxy as a versatile tool for DevOps and IT engineers. Whether you’re managing legacy applications or cutting-edge deployments, HAProxy has the features to make your life easier.


    Frequently asked questions (FAQs)


    Resources

    ]]> Load balancing VMware Horizon's UDP and TCP traffic: a guide with HAProxy appeared first on HAProxy Technologies.]]>
    <![CDATA[Securing 80,000 transactions per second at Infobip with HAProxy Enterprise WAF]]> https://www.haproxy.com/blog/securing-80000-transactions-per-second-at-infobip-with-haproxy-enterprise-waf Fri, 27 Feb 2026 00:00:00 +0000 https://www.haproxy.com/blog/securing-80000-transactions-per-second-at-infobip-with-haproxy-enterprise-waf ]]> The average cost of a security breach reached nearly $4.4 million in 2025, according to the Cost of a Data Breach Report. To proactively address this substantial financial and security risk, Infobip, a global cloud communications platform, used HAProxy Enterprise to implement a security and uptime framework that is both highly modular and highly performant. 

    Infobip has 62 data centers spread across the globe — and operates each data center with everything it needs to run independently of others. There are no reliability dependencies between data centers, and if one or more go down, the others automatically pick up the slack. 

    The company processes enormous volumes of traffic, peaking at over 80,000 transactions per second during events such as Black Friday. These transactions pass through HAProxy Enterprise with the integrated HAProxy Enterprise WAF.

    To protect its applications and meet strict customer compliance requirements, Infobip needed a Web Application Firewall (WAF). However, finding a solution that could meet their demanding technical and business needs was a significant challenge. 

    At HAProxyConf, engineers from Infobip shared the story of their search and how they ultimately found success with the next-gen HAProxy Enterprise WAF, powered by the Intelligent WAF Engine. Their journey highlights the critical need for a WAF that delivers security without compromising on performance, accuracy, or manageability. 

    The challenge: finding a scalable WAF for a global, high-performance infrastructure

    Infobip’s requirements for a WAF were stringent. Their globally distributed infrastructure, with scores of independent data centers, meant that any solution had to be scalable and easy to manage centrally. Furthermore, due to demanding client SLAs, Infobip had to keep any new latency to an almost invisible level.  

    Additional security — with no added latency? This strict requirement immediately excluded many traditional WAFs, which are often slow and inefficient.

    The team evaluated several options:

    • Cloud-based WAFs were not a good fit. Concerns included whether vendors had a presence in all of Infobip's regions and the need to classify the WAF provider as a data sub-processor, which they wanted to avoid. 

    • Hardware appliances were also ruled out. Scalability was lacking, management was a challenge, and costs were high. 

    • Virtual appliances didn’t meet Infobip’s operational approach, which runs everything possible in containers for consistency, security, and ease of management. 

    Since Infobip was already a happy user of HAProxy Enterprise for load balancing and SSL termination, they decided to put HAProxy Enterprise WAF to the test. 

    The evaluation: the Intelligent WAF Engine provides a breakthrough

    Infobip’s initial tests involved two distinct WAF engines: one based on ModSecurity, and the HAProxy Advanced WAF (which has since been succeeded by the HAProxy Enterprise WAF). The results were mixed, highlighting the "WAF trade-off" with either option:

    • The Advanced WAF was extremely fast but proved too aggressive for their web portal, leading to false positives.

    • The ModSecurity WAF handled the portal well but introduced unacceptable latency on high-throughput APIs.

    Infobip needed one solution that could handle both use cases, without the trade-offs. Fortunately, during the evaluation period, HAProxy Technologies launched the next-generation HAProxy Enterprise WAF, powered by the Intelligent WAF Engine.

    This new WAF is designed to address the complexities and demands of modern application environments and the advanced threats they face — and is distinguished by its exceptional balanced accuracy, simple management, and ultra-low latency and resource usage. The Intelligent WAF Engine represents a technical breakthrough by moving beyond static lists and regex-based attack signatures to a non-signature-based detection system.

    By employing threat intelligence from HAProxy Edge’s 60+ billion daily requests, enhanced by machine learning, the Intelligent WAF Engine delivers:

    • Exceptional accuracy: A 98.5% balanced accuracy rate in an open source WAF benchmark, significantly outperforming the industry average of 90%.

    • Ultra-low latency: Under 1ms of added latency, even when handling complex traffic.

    • Simple management: Easy to set up and manage with out-of-the-box behavior suitable for most deployments.

    • 100% privacy: No external connection, and no third-party data processing.

    A notable feature of the HAProxy Enterprise WAF is the optional OWASP Core Rule Set (CRS) compatibility mode, for organizations that require OWASP CRS support for specific use cases or compliance. When enabled, this mode achieves on average 15X lower latency than the ModSecurity WAF using the OWASP CRS — even under mixed traffic conditions.

    This next-generation WAF solved Infobip's core problem, providing the ultra-low latency needed for API traffic and the exceptional accuracy required for their web portal, with an efficient and privacy-first operating model.

    The implementation: a phased, automated rollout

    Infobip had a solution to their challenging security and performance requirements in hand. Now they "just" needed to deploy it — and keep it updated — safely and securely.

    Infobip devised a careful, automated rollout plan across all 62 of their data centers globally:

    1. Deploy in learning mode: The team first deployed HAProxy Enterprise WAF in a non-blocking learning mode. This allowed them to learn traffic patterns and fine-tune rules without impacting production. To ensure rock-solid reliability, they configured a “circuit breaker” to automatically disable the WAF if CPU usage ever spiked, choosing availability over security during the initial learning phase. (NB: No spike occurred.) 

    2. Enable protection path-by-path: Due to Infobip's use of a microservices architecture, they had the ability to enable blocking mode on an application-by-application basis. The team would analyze the WAF traffic for a specific path (e.g., /sms), ensure there were no false positives, and then switch that path to protection mode. This gave them the opportunity to monitor again in production, then move to the next application. 

    3. Automate with dynamic updates: Infobip manages all configurations centrally and deploys updates globally within 15 minutes. When a new application comes online, they simply update a map file that is automatically downloaded by HAProxy Enterprise instances, avoiding a full reload or redeployment, and the latency hiccups those would cause. This highlights the simple yet powerful setup and management framework that HAProxy Enterprise provides. 
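The map-file pattern described in step 3 might look something like the following sketch. The file path, map contents, and variable name are hypothetical, and the actual WAF enablement directives in HAProxy Enterprise differ; this only illustrates the general map-lookup mechanism:

```haproxy
# Hypothetical map file /etc/hapee/waf-paths.map:
#   /sms     block
#   /email   learn
frontend fe_api
    mode http
    bind :443 ssl crt /etc/hapee/certs/api.pem
    # Look up the per-path WAF mode; default to "learn" for new paths.
    http-request set-var(txn.waf_mode) path,map_beg(/etc/hapee/waf-paths.map,learn)
```

Because map files can be updated without a reload, switching a path from learning to blocking becomes a data change rather than a configuration deployment.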

    During Infobip’s presentation, the audience asked, “After setting up an app, do you still need much fine-tuning of WAF rules?” to which Juraj Ban replied, “No. Not anymore.”

    The result: security + performance, without compromise

    By implementing HAProxy Enterprise WAF, Infobip achieved its goal of strengthening its security posture without sacrificing performance. After the initial fine-tuning, they have experienced virtually no false positives and have met or exceeded all customer compliance requirements.

    The project was so successful that Infobip’s Chief Information Security Officer, Andro Galinović, provided a powerful endorsement:

    ]]> Infobip's story is a testament to how a modern, intelligent WAF can solve the complex security challenges of a global, high-performance platform. By choosing HAProxy Enterprise, they gained a solution that is not only fast and accurate but also flexible enough to fit seamlessly into their highly automated, container-based environment.


    ]]> Securing 80,000 transactions per second at Infobip with HAProxy Enterprise WAF appeared first on HAProxy Technologies.]]>
    <![CDATA[Omnissa Horizon alternative: how HAProxy solves UDP load balancing]]> https://www.haproxy.com/blog/omnissa-horizon-alternative Wed, 25 Feb 2026 14:00:00 +0000 https://www.haproxy.com/blog/omnissa-horizon-alternative ]]> The grace period is over. Your Horizon environment needs a new home, and your legacy load balancer isn't coming with you. You need a better Omnissa Horizon alternative.

    Omnissa's separation from Broadcom has disrupted VDI routing for many organizations, and vSphere 7's October 2025 end-of-life has made the situation more urgent. If you're planning to replace Omnissa Horizon infrastructure right now, you're facing a choice: replicate the old expensive architecture or use this forced refresh to fix what wasn't working.

    Legacy ADCs were never built for this protocol

    Omnissa Horizon runs on Blast Extreme, a UDP-heavy protocol that creates a coordination nightmare for traditional load balancers. Every user session requires three simultaneous connections: one TCP channel for authentication, plus two UDP streams for display and audio. All three must hit the same backend server, or the session dies.

    Legacy ADCs (Application Delivery Controllers) solve this with brute force: massive in-memory "coordination tables" that track every connection state. This approach was already inefficient, but in a forced migration scenario, it becomes a budget killer. You're looking at hardware refresh quotes that rival your new Omnissa licensing costs, just to track state for a protocol built on UDP, a transport designed to be stateless in the first place.

    There's a better approach that eliminates this architectural bottleneck entirely.

    HAProxy stateless coordination

    HAProxy solves the Blast routing challenge with consistent hashing (balance source) for TCP and UDP load balancing, a stateless algorithm that maps client IPs to backend servers deterministically.

    Here's why this matters for your migration:

    Traditional ADC                        HAProxy Enterprise
    Stores connection state in memory      Uses pure math, no state to sync
    Requires hardware overprovisioning     Scales horizontally on commodity infrastructure
    Cost scales with capacity              Cost scales per HAProxy Enterprise instance

    With HAProxy, you get superior Blast performance, eliminate hardware refresh CAPEX, and free up budget to offset rising vSphere costs.

    Stateless stickiness in action

    When a Horizon client connects, HAProxy hashes the client's source IP. That hash deterministically maps to the same backend server, which means the TCP auth channel and both UDP streams route to the same destination, without storing session tables.

    There is no state to replicate across HA pairs, no memory tuning for peak user counts, and no licensing tiers based on "connections per second."
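As a sketch, stateless stickiness for the Blast TCP channel looks like this; addresses and ports are assumptions, and the UDP listeners apply the same balance source logic through HAProxy Enterprise's UDP support, whose syntax varies by version:

```haproxy
# Blast Extreme TCP channel: each client IP hashes to the same UAG.
listen blast_tcp
    mode tcp
    bind :8443
    balance source
    # Consistent hashing keeps most mappings stable when servers change.
    hash-type consistent
    server uag1 10.0.0.11:8443 check
    server uag2 10.0.0.12:8443 check
```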

    Build strategically: get more than a VDI Gateway

    Migrating to HAProxy as your Omnissa Horizon alternative doesn't have to be purely defensive spending. There's a broader infrastructure problem you can solve at the same time.

    Most organizations today suffer from application delivery fragmentation. You're running legacy ADCs for VDI and web apps, separate API gateways for microservices, service mesh overlays for Kubernetes, and different tools for different clouds. 

    Each silo has its own management plane, monitoring stack, and security policy language. Troubleshooting a user complaint that spans "VDI → Kubernetes app → external API" requires logging into four different systems.

    By choosing HAProxy for your Omnissa migration, you're automatically placing the cornerstone of a Universal Mesh architecture into your infrastructure.

    What Universal Mesh means in practice

    The same HAProxy Enterprise instance handling your Blast traffic can:

    • Route north-south traffic (users → VDI pools)

    • Route east-west traffic (VDI → backend databases, internal APIs)

    • Serve as your Kubernetes Ingress Controller (containerized apps)

    • Act as your API Gateway (external partner integrations)

    All managed through HAProxy Fusion Control Plane: one UI, one config model, one observability platform.

    Migration path: tactical fix to strategic foundation

    Phase 1 (weeks 1-4): solve the immediate crisis

    • Deploy HAProxy Enterprise as your Omnissa Horizon gateway through HAProxy Fusion Control Plane

    • Configure balance source with consistent hashing for stateless UDP routing

    • Migrate user traffic off the legacy ADC

    Phase 2 (months 2-6): consolidate adjacent workloads

    • Route your web application traffic through the same HAProxy layer

    • Migrate API gateway functions to HAProxy Enterprise (you already own it)

    • Route Kubernetes traffic through HAProxy Enterprise

    Phase 3 (6-12 months): full Universal Mesh

    • Federate HAProxy Enterprise instances across clouds

    • Establish unified policy for mTLS, rate limiting, and WAF

    • Retire the last legacy ADC appliances

    By this point, you will have addressed the immediate Horizon crisis and consolidated your application delivery infrastructure. Instead of managing separate systems for VDI, API Gateway, and Kubernetes ingress, you're running a unified data plane. The operational benefit shows up in troubleshooting: when you can trace a user issue from VDI through containerized apps to external APIs in a single interface, you're solving problems in minutes instead of hours.

    This moment matters

    The Omnissa migration is forcing you to make decisions now, but the consequences of those decisions will compound for years.

    Choosing the path of least resistance (buying another expensive ADC because "it's what we know") might leave companies having this same conversation in a few years when the next vendor changeup occurs. 

    The technical complexity of the Omnissa migration is real. But the path through it doesn't have to be complicated.

    Ready to escape the ADC vendor lock-in?

    Talk to our solutions team about architecting your Omnissa environment on HAProxy Enterprise and building the foundation for a Universal Mesh that grows with you.

    ]]> Omnissa Horizon alternative: how HAProxy solves UDP load balancing appeared first on HAProxy Technologies.]]>
    <![CDATA[Don't panic: a low-risk strategy for Ingress NGINX retirement]]> https://www.haproxy.com/blog/low-risk-strategy-for-ingress-nginx-retirement Thu, 19 Feb 2026 09:00:00 +0000 https://www.haproxy.com/blog/low-risk-strategy-for-ingress-nginx-retirement ]]> The Ingress NGINX project is winding down. For many organizations, this means planning a migration for critical infrastructure.

    While the HAProxy Kubernetes Ingress Controller is the natural successor for these workloads, a "rip and replace" strategy isn’t always viable. You might have complex configurations, customized annotations, or deployment freezes that make a sudden switch risky.

    There's a lower-risk path: Place HAProxy in front of your existing Ingress NGINX deployment. 

    By leveraging the HAProxy One platform approach, you can bridge your legacy Ingress NGINX setup and your future infrastructure without downtime. This buys you time while adding immediate security and observability benefits.

    Taking a "shield and shift" approach

    This strategy mirrors the architecture we've previously recommended for vulnerability protection (like CitrixBleed). Deploy HAProxy Enterprise as your edge layer, sitting in front of your current Ingress NGINX controller. You wrap your existing ingress with enterprise-grade security and visibility, without touching your working NGINX configurations.

    This approach leverages a unified data plane. HAProxy Enterprise at the edge creates a protective layer that's consistent with your future HAProxy Kubernetes Ingress Controller. The HAProxy One platform uses the same high-performance engine at the edge and within Kubernetes, unlike disparate solutions that force you to maintain different configurations and skill sets.

    The security policies, rate limits, and observability metrics you configure at the edge today translate directly to your Kubernetes clusters tomorrow. No relearning. No translation. 

    1. Immediate security hardening

    Legacy software becomes a security liability over time. An HAProxy edge layer acts as a security filter. You can apply rate limiting, bot management, and enterprise WAF rules to sanitize traffic before it reaches the deprecated controller.

    2. Better visibility into your traffic

    Migration anxiety comes from blindness. HAProxy Fusion unifies the management of your external edge gateways and internal Kubernetes controllers.

    HAProxy Fusion provides a single pane of glass for all traffic flows—even those heading to your legacy Ingress NGINX controller. It allows you to visualize service dependencies and automate the routing changes required for the migration, turning a manual, error-prone switchover into a managed workflow.

    3. Migrate one service at a time

    This is the operational advantage. Once HAProxy Enterprise handles your ingress traffic, you don't need to cut everything over at once.

    Configure HAProxy Enterprise to route most traffic to your existing Ingress NGINX setup. Then carve out specific paths, domains, or services to route to a new, parallel HAProxy Kubernetes Ingress Controller deployment.

    Migrate service by service, pod by pod, or region by region. Test a new configuration in production with real traffic. If it works, great. If not, revert the routing without redeploying your cluster.

    Configuration example

    The setup is straightforward. Configure your edge HAProxy Enterprise to listen on your public IP address and forward traffic to your Ingress NGINX service's internal IP address.

    Here's a simplified routing configuration:
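A simplified sketch of that edge configuration follows, assuming hypothetical addresses and a /checkout path being migrated first:

```haproxy
frontend fe_edge
    mode http
    bind :443 ssl crt /etc/haproxy/certs/site.pem
    # Carve out a migrated path to the new HAProxy Kubernetes Ingress Controller.
    acl migrated path_beg /checkout
    use_backend be_haproxy_ic if migrated
    # Everything else continues to the existing Ingress NGINX service.
    default_backend be_nginx_ingress

backend be_nginx_ingress
    server nginx 10.0.1.20:443 ssl verify none check

backend be_haproxy_ic
    server hic 10.0.1.30:443 ssl verify none check
```

To migrate another service, add a matching ACL and flip its use_backend line; to roll back, remove it. The cluster itself never needs redeploying.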

    Looking ahead: Gateway API support

    This architecture isn't just a stopgap. It's infrastructure that scales with you.

    As Kubernetes networking moves toward the Gateway API, a flexible edge routing layer lets you adopt new standards at your own pace. We're developing HAProxy Unified Gateway to support both Ingress and Gateway API standards—giving you a single platform that evolves with the ecosystem.

    Stabilize your environment now. Migrate on your timeline. The configuration knowledge you build today (the routing logic, security policies, and operational patterns) carries forward. You're not buying time to delay a painful migration. You're building the foundation for your next-generation infrastructure, one service at a time.

    Getting help

    You don't have to migrate alone:

    • Community Support: Join our Slack to discuss migration strategies with other users

    • Documentation: We're releasing migration tutorials and annotation mapping guides soon

    • Enterprise Support: If you need hands-on help for critical workloads, our support and sales teams can help you architect a safe transition with HAProxy Fusion and HAProxy Enterprise

    ]]> Don't panic: a low-risk strategy for Ingress NGINX retirement appeared first on HAProxy Technologies.]]>
    <![CDATA[February 2026 — CVE-2026-26080 and CVE-2026-26081: QUIC denial of service]]> https://www.haproxy.com/blog/cves-2026-quic-denial-of-service Thu, 12 Feb 2026 09:00:00 +0000 https://www.haproxy.com/blog/cves-2026-quic-denial-of-service ]]> The latest versions of HAProxy Community, HAProxy Enterprise, and HAProxy ALOHA fix two vulnerabilities in the QUIC library. These issues could allow a remote attacker to cause a denial of service. The vulnerabilities involve malformed packets that can crash the HAProxy process through an integer underflow or an infinite loop.

    If you use an affected product with the QUIC component enabled, you should update to a fixed version as soon as possible. Instructions are provided below on how to determine if your HAProxy installation is using QUIC. If you cannot yet update, you can temporarily workaround this issue by disabling the QUIC component.

    Vulnerability details

    • CVE Identifiers: CVE-2026-26080 and CVE-2026-26081

    • CVSSv3.1 Score: 7.5 (High)

    • CVSS Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

    • Reported by: Asim Viladi Oglu Manizada

    Description

    Two separate issues were found in how HAProxy processes QUIC packets:

    • Token length underflow (CVE-2026-26081): This affects versions 3.0 (ALOHA 16.5) and later. A remote, unauthenticated attacker can cause a process crash. This happens by sending a malformed QUIC Initial packet that causes an integer underflow during token validation.

    • Truncated varint loop (CVE-2026-26080): This affects versions 3.2 (ALOHA 17.0) and later. An attacker can cause a denial of service. By sending a QUIC packet with a truncated varint, the frame parser enters an infinite loop until the system watchdog terminates the process.

    Repeated attacks can create a lasting denial of service for your environment.

    Affected versions and remediation

    HAProxy Technologies released new versions of its products on Thursday, February 12, 2026, to patch these vulnerabilities.

    CVE-2026-26081 (Token length underflow)

    • HAProxy Community / Performance Packages: versions 3.0 and later are affected; fixed in 3.0.16, 3.1.14, 3.2.12, and 3.3.3

    • HAProxy Enterprise: versions 3.0 and later are affected; fixed in hapee-lb-3.0r1-1.0.0-351.929, hapee-lb-3.1r1-1.0.0-355.744, and hapee-lb-3.2r1-1.0.0-365.548

    • HAProxy ALOHA: versions 16.5 and later are affected; fixed in 16.5.30, 17.0.18, and 17.5.16

    CVE-2026-26080 (Truncated varint loop)

    • HAProxy Community / Performance Packages: versions 3.2 and later are affected; fixed in 3.2.12 and 3.3.3

    • HAProxy Enterprise: versions 3.2 and later are affected; fixed in hapee-lb-3.2r1-1.0.0-365.548

    • HAProxy ALOHA: versions 17.0 and later are affected; fixed in 17.0.18 and 17.5.16

    Test if you’re affected

    Users of affected products can determine if the QUIC component is enabled on their HAProxy installation and whether they are affected:

    For a single installation (test a single config file):

    grep -iE "quic" /path/to/haproxy/config && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

    For multiple installations (test each config file in folder):

    grep -irE "quic" /path/to/haproxy/folder && echo "WARNING: QUIC may be enabled" || echo "QUIC not enabled"

    A response containing “QUIC may be enabled” indicates your HAProxy installation is potentially affected, and you should manually review and disable any QUIC listeners. The fastest method is to use the global keyword tune.quic.listen off (for version 3.3) or no-quic (3.2 and below).
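As a temporary mitigation sketch, the QUIC component can be disabled in the global section (shown for 3.2 and below; on 3.3, tune.quic.listen off plays the same role):

```haproxy
global
    # Disable QUIC entirely as a temporary mitigation (3.2 and below).
    no-quic
    # On HAProxy 3.3, use instead:
    # tune.quic.listen off
```

Clients will then fall back to HTTP/2 or HTTP/1.1 over TCP until you can update to a fixed version and re-enable QUIC.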

    Update instructions

    Users of affected products should update immediately by pulling the latest image or package for their release track.

    • HAProxy Enterprise users can find update instructions in the customer portal.

    • HAProxy ALOHA users should follow the standard firmware update procedure in your documentation.

    • HAProxy Community users should compile from the latest source or update via their distribution's package manager or available images.

    Support

    If you are an HAProxy customer and have questions about this advisory or the update process, please contact our support team via the Customer Portal.

    ]]> February 2026 — CVE-2026-26080 and CVE-2026-26081: QUIC denial of service appeared first on HAProxy Technologies.]]>
    <![CDATA[Zero crashes, zero compromises: inside the HAProxy security audit]]> https://www.haproxy.com/blog/haproxy-security-audit-results Mon, 09 Feb 2026 15:00:00 +0000 https://www.haproxy.com/blog/haproxy-security-audit-results ]]> An in-depth look at the recent audit by Almond ITSEF, validating HAProxy’s architectural resilience and defining the shared responsibility of secure configuration.

    Trust is the currency of the modern web. When you are the engine behind the world’s most demanding applications, "trust" isn't a marketing slogan—it’s an engineering requirement.

    At HAProxy Technologies, we have always believed that high performance must never come at the cost of security or correctness. But believing in your own code isn’t enough. You need objective, adversarial validation. That's why we were glad to hear that ANSSI, the French cybersecurity agency, commissioned a rigorous security audit of HAProxy (performed by Almond ITSEF), focused on source code analysis, fuzzing, and dynamic penetration testing, as part of its efforts to support the security assessment of open source software.

    The results are in. After weeks of intense stress testing, code analysis, and fuzzing, the auditors reached a clear verdict: HAProxy 3.2.5 is a mature, secure product that is reliable for production.

    While we are incredibly proud of the results, we are equally grateful for the "operational findings" and the recommendations that highlight the importance of configuration in security. Here is a transparent look at what the auditors found and what it means for your infrastructure.

    Unshakeable stability: 25 days of fuzzing, zero crashes

    The most significant takeaway from the audit was the exceptional stability of the HAProxy core. The auditors didn't just review code; they hammered it.

    The team performed extensive "fuzzing" by feeding the system massive amounts of malformed, garbage, and malicious data. They primarily targeted the HAProxy network request handling and internal sockets. This testing went on for days, and in the case of internal sockets, up to 25 days.

    The result? Zero bugs. Zero crashes.

    For software that manages mission-critical traffic, handling millions of requests per second, this level of resilience is paramount. It confirms that the core logic of HAProxy is built to withstand not just standard traffic, but the chaotic and malicious noise of the open internet.

    Validating the architecture

    Beyond the stress tests, the audit validated several key architectural choices that differentiate HAProxy from other load balancers.

    Process isolation

    The report praised HAProxy’s "defense-in-depth" strategy. We isolate the privileged "master" process (which handles administrative tasks, spawns processes, and retains system capabilities) from the unprivileged "worker" process (which handles the actual untrusted network traffic). 

    By strictly separating these roles, HAProxy ensures that even if a worker were compromised by malicious traffic, the attacker would find themselves trapped in a container with zero system capabilities.

    Custom memory management

    Sometimes, we get asked why we use custom memory structures (pools) rather than standard system libraries (malloc). The answer has always been performance. Our custom allocators eliminate the locking overhead and fragmentation of general-purpose libraries, allowing for predictable, ultra-low latency.

    However, custom code often introduces risk. That is why this audit was so critical: static analysis confirmed that our custom implementation is not just faster, but robust and secure, identifying no memory corruption vulnerabilities.

    Clean code

    The auditors found zero vulnerabilities in the HAProxy source code itself. The only vulnerability identified was in a third-party dependency (mjson); it had already been patched in a subsequent update, and the fix was shared with the upstream project.

    A case for shared responsibility

    No software is perfect, and no audit is complete without findings. The report highlighted risks that lie not in the software’s flaws, but in operational configuration.

    This brings us to a crucial concept: Shared Responsibility. We provide a bulletproof engine, but the user sits in the driver's seat. The audit highlighted a few areas where "default" behaviors prioritize compatibility over strict security, requiring administrators to be intentional with their config.

    We believe in transparency, so we are highlighting these operational recommendations to provide guidance, much of which experienced HAProxy users will recognize as standard configuration best practice.

    1. The ACL "bypass" myth

    The auditors noted that Access Control Lists (ACLs) based on URL paths could be bypassed using URL encoding (e.g., accessing /login by sending /log%69n). While this may appear to be a security gap, it’s actually a result of HAProxy’s commitment to transparency. As a proxy, HAProxy’s primary job is to deliver traffic exactly as it’s received. Since a backend server might technically treat /login and /log%69n as distinct resources, HAProxy doesn't normalize them by default to avoid breaking legitimate, unique application logic.

    If your backend decodes these characters and you need to enforce stricter controls, you have three main paths forward:

    1. Adopt a positive security model: Instead of trying to block "bad" paths (which are easy to alias), switch to an "Allow" list that only permits known-good URLs and blocks everything else.

    2. Manual normalization: For specific use cases, you can use the normalize-uri directive to choose which types of normalization to apply to percent-encoded characters before they hit your ACL logic (depending on the application's type and operating system).

    3. Enterprise WAF: If you prefer "turnkey" protection, the HAProxy Enterprise WAF automatically handles this normalization, safely decoding payloads before they reach your routing logic.
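As a rough sketch, the first two options might look like this in a frontend (the certificate path, backend name, and allowed paths are illustrative):

```haproxy
frontend www
    bind :443 ssl crt /etc/haproxy/certs/site.pem

    # Option 2: decode percent-encoded unreserved characters (e.g. %69 -> i)
    # and clean up the path before any ACL evaluates it
    http-request normalize-uri percent-decode-unreserved
    http-request normalize-uri path-merge-slashes

    # Option 1: positive security model, allow only known-good path prefixes
    # and deny everything else
    acl allowed_paths path_beg /login /static /api/v1
    http-request deny deny_status 403 unless allowed_paths

    default_backend app
```

With the allow list in place, an aliased path such as /log%69n either normalizes to an allowed path or falls through to the deny rule; there is nothing left to bypass.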

    The positive security model is a standard best practice and the only safe way to deal with URLs. The fact that the auditors unknowingly adopted an unsafe approach here made us think about how to emit new warnings when detecting such bad patterns, maybe by categorizing actions. This ongoing feedback loop within the community helps us continue to improve and refine a decades-old project.

    2. Stats page access

    The report noted that the Stats page uses Basic Auth and, if not configured with TLS, sends credentials in cleartext. It also reveals the HAProxy version number by default.

    It’s important to remember that the Stats page is a legacy developer tool designed to be extremely lightweight. It isn't enabled by default, and its simplicity is a feature, not a bug. It’s meant to provide quick visibility without heavy dependencies. We appreciate the comment on displaying the version by default. That behavior is historical, and an option to hide the version already exists; we're considering flipping the default so the version is hidden unless explicitly enabled, since displaying it can sometimes help tech teams quickly spot anomalies.

    The stats page doesn’t reveal much truly sensitive data by default, so if you want to expose your stats publicly, as many technical sites do (including haproxy.org), you can easily do so. However, if you configure it to expose additional information that you consider sensitive (e.g., IP addresses), then you should absolutely secure it.

    The page doesn't natively handle advanced encryption or modern auth, so if you need to access it, follow these best practices:

    • Use a strong password for access.

    • Wrap the Stats page in a secured listener that enforces TLS and rate limiting.

    • Only access the page through a secure tunnel like a VPN or SSH.
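A hardened stats listener following these practices might look like this minimal sketch (the bind address, port, certificate path, credentials, and thresholds are placeholders):

```haproxy
listen stats
    # bind on localhost so the page is only reachable through an SSH tunnel
    # or VPN; use a routable address with TLS only if you must expose it
    bind 127.0.0.1:8404 ssl crt /etc/haproxy/certs/stats.pem
    stats enable
    stats uri /stats
    stats auth admin:use-a-strong-password-here
    stats hide-version

    # basic rate limiting: reject clients opening connections too quickly
    stick-table type ip size 10k expire 1m store conn_rate(10s)
    tcp-request connection track-sc0 src
    tcp-request connection reject if { sc0_conn_rate gt 10 }
```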

    For larger environments, HAProxy Fusion offers a more modern approach. Instead of checking individual raw stats pages, HAProxy Fusion provides a centralized, RBAC-secured control plane. This gives you high-level observability across your entire fleet.

    3. Startup stability

    The auditors identified that specific malformed configuration values (like tune.maxpollevents) could cause a segmentation fault during startup.

    While these were startup issues that did not affect live runtime traffic, they were identified and fixed immediately, and the fix was released the week following the preliminary report. This is the power of open source and active maintenance — issues are found and squashed rapidly.

    Power, trust, and freedom

    This audit reinforces the core pillars of our approach:

    • Power: Power is not just speed, but also the ability to withstand pressure. The exhaustive fuzzing tests prove that HAProxy is an engine built not just to run fast, but to run without disruption.

    • Trust: The fact that the auditors found zero vulnerabilities in the source code is a massive validation, but it isn't a coincidence. It is a testament to our Open Source DNA. Trust is earned through transparency, peer review, the continuous scrutiny of a global community, and professional security researchers.

    • Freedom: The "findings" regarding configuration remind us that HAProxy offers infinite flexibility. You have the freedom to configure it exactly as your infrastructure needs, but that freedom requires understanding your configuration choices.

    Conclusion: deploy with confidence

    The audit concludes that HAProxy 3.2 is "very mature" and "reliable for production".

    We are committed to maintaining these high standards. We don't claim our code is flawless (no serious developer does). But we do claim that our focus on extreme performance never compromises our secure coding practices.

    Next steps for users:

    • Upgrade: We recommend all users upgrade to the latest HAProxy 3.2+ to benefit from the latest hardening and fixes.

    • Review: Audit your own configurations. Are you using "Deny" rules on paths? Consider switching to the standard positive security model.

    • Explore: If the complexity of manual hardening feels daunting, explore HAProxy One. It provides the same robust engine but adds the guardrails to simplify security at scale.

    ]]> Zero crashes, zero compromises: inside the HAProxy security audit appeared first on HAProxy Technologies.]]>
    <![CDATA[How Dartmouth avoided vendor lock-in and implemented LBaaS with HAProxy One]]> https://www.haproxy.com/blog/how-dartmouth-implemented-lbaas-with-haproxy-one Thu, 05 Feb 2026 00:00:00 +0000 https://www.haproxy.com/blog/how-dartmouth-implemented-lbaas-with-haproxy-one ]]> History is everywhere at Dartmouth College, and while the campus is steeped in tradition, its IT infrastructure can’t afford to get stuck in the past. In an institution where world-class research and undergraduate studies intersect, technology must be fast, invisible, and – above all – reliable.

    That reliability was put to the test when Dartmouth’s load balancing vendor was acquired twice in five years, as Avi Networks moved to VMware and VMware moved to Broadcom. Speaking at HAProxyConf 2025, Dartmouth infrastructure engineers Curt David Barthel and Kevin Doerr described how they began to see what they called “rising license costs without apparent value, and declining vendor support subsequent to acquisition after acquisition.”

    It was clear that they were beginning to pay more for less — and it was time for a change.

    After conducting thorough research, interviews, and demonstrations, Dartmouth settled on the best path forward: HAProxy One, the world’s fastest application delivery and security platform. 

    For Dartmouth, it wasn’t just a migration; it was an opportunity to innovate on its existing infrastructure. They leveraged the platform’s deep observability and automation to architect a custom Load Balancing as a Service (LBaaS) solution.

    Today, that platform is fully automated and self-service, making life easier for 50+ users across various departments and functions. Dartmouth’s journey serves as a technical blueprint for those hoping to make the switch from Avi to HAProxy One.

    ]]> Was history repeating itself?

    As an undergraduate at Dartmouth, you’re likely to be taught that history doesn’t repeat itself — but sometimes it rhymes. 

    Infrastructure changes were not new to the Dartmouth IT team. For roughly 20 years, the team managed its infrastructure using F5 Global and Local Traffic Managers. Later, they layered a software load balancing solution from Avi Networks on top of their F5 environment.

    However, the landscape shifted as Avi was acquired by VMware, which was subsequently acquired by Broadcom. The changes led to rising licensing costs and declining vendor support. The solution began to feel like a closed ecosystem, forcing Dartmouth into a state of vendor lock-in that limited its architectural freedom.

    ]]> ]]> Ultimately, the team identified three "deal-breakers" that made their legacy environment unsustainable:

    1. Vendor lock-in: Today’s multi-cloud and hybrid cloud environments demand a platform-agnostic infrastructure. Yet, Dartmouth’s existing software was moving in the opposite direction — becoming increasingly tied to a specific vendor's ecosystem (VMware).

    2. Rising costs & constrained scaling: The licensing model was no longer aligned with Dartmouth’s needs. Increases in traffic often triggered disproportionately high costs, while complex licensing tiers made it difficult for the team to scale or innovate creatively.

    3. Automation roadblocks: To provide true "Load Balancing as a Service," the team needed a robust, template-driven workflow. The existing API didn't support the level of deep automation and auditability required to offer users a truly self-service experience.

    Meeting new criteria

    The Dartmouth team followed a dictum from the famous UCLA basketball coach, John Wooden: “Be quick — but don’t hurry.” 

    They had established a high level of service for their users, and they wanted to maintain and improve on it. So they set out their requirements carefully, including:

    • Comprehensive load balancing: Robust support for both L4 and L7 traffic.

    • API-first control plane: A solution that offers total data plane management through a modern, programmable interface.

    • Deep automation: Built-in features to support a GitOps-style workflow.

    • Modern orchestration: Native service discovery for Kubernetes environments.

    • Extensibility: The ability to customize and extend the platform to meet unique institutional needs.

    ]]> ]]> To find the right partner, Dartmouth conducted an extensive evaluation of top vendors, including product demonstrations and customer reference interviews. HAProxy stood out for “less grandiose marketing” and the ability to run on-premises in addition to cloud-native implementations. 

    HAProxy One met every current requirement and supported future plans. The platform was found to be cost-effective and to feature excellent support. 

    "We interviewed many vendors, and HAProxy came out on top, particularly with the top-notch support model. It's beyond remarkable — it's unparalleled. Having that wealth of expertise is absolutely invaluable."

    Building Rome in a few days

    To replace their legacy environment, the Dartmouth team didn't just install new software; they engineered a robust, automated platform. 

    The deployment was centered around HAProxy Fusion Control Plane, integrating essential networking components like IP address management (IPAM), global server load balancing (GSLB), and the virtual router redundancy protocol (VRRP). To maintain consistency with their existing operations, they also implemented custom TCP and HTTP log formats using the common log format (CLF).

    The team then worked with their existing configuration manifests, in YAML format, which are sent to a Git repo to specify each user’s configuration options. This is all driven by a master Ansible playbook. 

    ]]> ]]> At the heart of this new system is a GitOps-driven workflow that makes infrastructure changes nearly invisible to the end user. The process follows a highly structured pipeline:

    1. User input: Power users submit their requirements through a simple, standardized front end.

    2. Manifest creation: These requirements are captured in YAML-formatted configuration manifests and committed to a Git repository.

    3. Automation pipeline: Each commit triggers a Jenkins pipeline that launches a master Ansible playbook.

    4. Configuration generation: Ansible uses Jinja2 templates to transform the YAML data into a valid, human-readable HAProxy configuration file.

    5. Centralized deployment: The playbook authenticates to the HAProxy Fusion Control Plane via API and pushes the configuration to HAProxy Fusion as a single, centralized update.

    6. Data plane synchronization: HAProxy Fusion then distributes and synchronizes the configuration across the entire fleet of HAProxy Enterprise data plane nodes, ensuring consistent, high-availability deployment at scale.
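Step 4, the transformation from manifest to configuration, can be sketched in miniature. Dartmouth's real pipeline renders Jinja2 templates from an Ansible playbook; the snippet below substitutes Python's stdlib string.Template purely to illustrate the idea, and every field name and address is hypothetical:

```python
from string import Template

# Hypothetical fields standing in for one YAML manifest entry in the Git repo
manifest = {
    "name": "registrar-app",
    "bind_port": 443,
    "servers": [("web1", "10.0.0.11:8080"), ("web2", "10.0.0.12:8080")],
}

frontend_tpl = Template(
    "frontend ${name}\n"
    "    bind :${bind_port} ssl crt /etc/haproxy/certs/${name}.pem\n"
    "    default_backend ${name}-be\n"
)
backend_tpl = Template("backend ${name}-be\n    balance roundrobin\n")

def render(m: dict) -> str:
    """Turn one manifest entry into a human-readable HAProxy config fragment."""
    cfg = frontend_tpl.substitute(name=m["name"], bind_port=m["bind_port"])
    cfg += backend_tpl.substitute(name=m["name"])
    for sname, addr in m["servers"]:
        cfg += f"    server {sname} {addr} check\n"
    return cfg

print(render(manifest))
```

The point of the template layer is that the generated output stays deterministic and diffable, which is what makes the Git-driven audit trail in steps 2 and 5 meaningful.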

    This modular approach provides Dartmouth with a "plug-and-play" level of flexibility. While the team is not deploying a web application firewall (WAF) at go-live, the framework is already in place to support it. When they are ready to activate the HAProxy Enterprise WAF, the process will be streamlined. Once the initial migration is complete, adding security layers will be as simple as activating a pre-tested template.

    Observability without complexity

    A big win for the IT team was the clear separation of responsibilities. Users are granted read-only access to HAProxy Fusion, allowing them to track the status of their requests and view their specific configurations in real time. Meanwhile, the IT team retains central control over the control plane, ensuring security and stability across the entire institution.

    With every configuration change fully logged and auditable, troubleshooting has shifted from a manual "guessing game" to a data-driven process. Combined with HAProxy’s highly responsive support, Dartmouth now has a load-balancing environment that is not only faster and more cost-effective but significantly easier to manage.

    Keys to the new city

    Sometimes it’s seemingly small things that turn out to be crucial to success. What made Dartmouth’s transition to HAProxy work so well? 

    The team manages more than 1,100 load balancer manifests, all of which were confirmed and validated against the new automation framework well before “go-live.” Specific “power” users were trained to use the HAProxy Fusion GUI, preparing them in advance for system deployment. 

    The old architecture and the new one have been run side-by-side, so migration only requires a simple CNAME switch. If issues arise, users can fall back to the previous implementation, and behavior between the two systems can be easily compared in a real, “live fire” environment.

    ]]> ]]> The team cited several critical success factors, including:

    • The HAProxy Slack channel for support, with unparalleled responsiveness and a highly capable team

    • A developer team at HAProxy that is consistently available and responsive

    • Power user engagement and trust through early testing and implementation

    Every feature from the Avi environment has now been implemented on HAProxy One — and in the process, Dartmouth has been able to introduce new capabilities that didn’t exist before. The response to date has been very strong. Power users say, “This looks great. This is much better than what we used to have.”

    Ultimately, Dartmouth didn’t just swap vendors; they built a platform that puts them back in control. By prioritizing automation and architectural freedom, the team has moved past the cycle of rising costs and closed ecosystems. They now have a high-performance, self-service environment that is reliable, cost-effective, and ready to scale whenever they are.

    ]]> How Dartmouth avoided vendor lock-in and implemented LBaaS with HAProxy One appeared first on HAProxy Technologies.]]>
    <![CDATA[Properly securing OpenClaw with authentication]]> https://www.haproxy.com/blog/properly-securing-openclaw-with-authentication Tue, 03 Feb 2026 08:24:00 +0000 https://www.haproxy.com/blog/properly-securing-openclaw-with-authentication ]]> OpenClaw (née MoltBot, née ClawdBot) is taking over the world. Everyone is spinning their own, either on a VPS, or their own Mac mini. 

    But here's the problem: OpenClaw is brand new, and its security posture is mostly unknown. Security researchers have already found thousands of publicly available instances exposing everything from credentials to private messages.

    While OpenClaw has a Gateway component — the UI and WebSocket that controls access — there are serious issues with its password/token-based authentication:

    • Until recently, you could skip authentication entirely on localhost.

    • Passing the auth token in a GET URL is a questionable mechanism for such young code.

    • Trust needs to be earned, not assumed.

    In this post, we'll secure OpenClaw using a battle-tested method with HAProxy.

    The plan: implement HAProxy’s HTTP Basic Authentication

    HAProxy’s HTTP Basic Authentication is a robust method for securing access to production systems with a username/password combination. In this guide, we’ll do the following:

    1. Install HAProxy

    2. Configure HAProxy with automatic TLS, basic auth, and rate limiting

    3. Install OpenClaw and authenticate access using the basic auth credentials

    We'll cover running OpenClaw on a VPS first. In a follow-up, we'll tackle Mac mini deployments with secure remote access (think Tailscale, but entirely self-hosted). 

    We'll also add smart rate limiting: anyone who sends more than 5 unauthorized requests within 120 seconds is blocked for 1 minute. The clever part? They'll see a 401 Unauthorized instead of 429 Too Many Requests, so attackers won't even know they've been rate-limited.
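The actual configuration files are installed in the steps below; as a rough sketch of how such a policy can be expressed in HAProxy, the logic looks something like this (the userlist name, password hash, table sizing, and backend name are illustrative):

```haproxy
userlist openclaw_users
    # generate a SHA-512 hash with, e.g., mkpasswd -m sha-512
    user admin password $6$EXAMPLEHASH...

frontend openclaw
    bind :443 ssl crt /etc/haproxy/certs/openclaw.pem
    # track each client IP; gpc0 counts its failed auth attempts
    stick-table type ip size 100k expire 120s store gpc0
    http-request track-sc0 src
    # blocked clients see a 401, not a 429, so they can't tell
    # they've been rate-limited
    http-request deny deny_status 401 if { sc0_get_gpc0 gt 5 }
    # count this request as a failure if credentials are missing or wrong
    http-request sc-inc-gpc0(0) unless { http_auth(openclaw_users) }
    http-request auth realm OpenClaw unless { http_auth(openclaw_users) }
    default_backend openclaw_gateway
```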

    ]]> You'll need two checklist items to get started:

    1. A VPS running anywhere with Ubuntu 24.04, and a public IP address

    2. A domain/subdomain pointing DNS to the VPS public IP

    To see everything in action, visit our live demo link and experiment.

    ]]> Building it yourself

    1) Install the HAProxy image

    First, we'll install the high-performance HAProxy image:

    ]]> blog20260203-1.sh]]> We now have HAProxy 3.3 installed, with the high-performance AWS-LC library and full ACME support for automatic TLS certificates. Now, we just need to apply the configuration to make it work.

    2) Configure HAProxy

    Edit /etc/haproxy/haproxy.cfg and insert the following lines into the global section. This will set us up to use automatic TLS:

    ]]> blog20260203-2.cfg]]> Now let’s add configuration for automatic TLS using Let’s Encrypt. Edit the last line for your own domain:

    ]]> blog20260203-3.cfg]]> Next, we'll take care of the basic HAProxy configuration items. Don’t forget to change the line starting with ssl-f-use to use the correct subdomain alias from the my_files section:

    ]]> blog20260203-4.cfg]]> 3) Restart HAProxy

    Restart HAProxy to apply the updated configuration:

    ]]> blog20260203-5.sh]]> Next, edit the HAProxy systemd file to make it automatically write certificates to disk. Run the following command:

    ]]> blog20260203-6.sh]]> You're now ready to insert the following line under [Service]:

    ]]> blog20260203-7.cfg]]> Finally, reload systemd:

    ]]> blog20260203-8.sh]]> 4) Install OpenClaw and access it securely

    You're now ready to install OpenClaw:

    ]]> blog20260203-9.sh]]> That’s it! You can now run the following command:

    ]]> blog20260203-10.sh]]> This process will give you your personal access token. This is still needed for proper authentication inside OpenClaw itself.

    You can now visit https://subdomain.example.com/?token=<gateway token>. When doing this for the first time, you'll have to provide a username and password.

    You can also configure your macOS app to talk to this OpenClaw instance. Just insert the username and password directly into the Websocket URL, as shown below:
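With the illustrative subdomain used earlier, such a URL takes this general shape, where the credentials are your basic auth pair and the token placeholder is whatever OpenClaw issued:

```
wss://username:password@subdomain.example.com/?token=<gateway token>
```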

    ]]> ]]> One more thing

    Check your rate limiting occasionally to see who's knocking at your door:

    ]]> blog20260203-11.sh]]> You might be surprised how many bots are already scanning for OpenClaw instances. That 401 response is working hard. Any line item where gpc0 is higher than 5 has been limited.

    What if you accidentally lock yourself out? Simply run this command, where <key> is your IP address:

    ]]> blog20260203-12.sh]]> Secure from the start

    You now have an OpenClaw instance that's actually secure, not just "hopefully secure." Here's what's protecting you:

    • Defense in depth – You're not relying on OpenClaw's young authentication code. HAProxy handles the security layer with battle-tested HTTP Basic Auth that's been protecting production systems for decades.

    • Stealth rate limiting – Attackers hitting your instance will see authentication failures, not rate limit errors. They won't know they've been blocked, which means they'll waste time and resources before giving up.

    • Automatic TLS – Let's Encrypt handles your certificates with zero manual intervention. No expired certs, no security warnings, no hassle.

    If you need more authentication methods or additional security layers, check out the HAProxy Enterprise load balancer. When you’re ready to control your deployment at scale, use HAProxy Fusion for centralized management, observability, and automation.

    Stay safe and keep learning!

    ]]> Properly securing OpenClaw with authentication appeared first on HAProxy Technologies.]]>