NetFoundry – Identity-First™ Networking – https://netfoundry.io/

Breakneck speed without breaking our necks
https://netfoundry.io/devops/devops-meets-secops/ – Thu, 26 Feb 2026

Why Every DevOps Engineer Should Be Using Ziti

Velocity vs security – why can we not have both?

After working in the DevOps space for almost a decade, there are a few common traits that you’ll find among most of us:

  1. We don’t actually know what we do for a living
  2. We love to automate EVERYTHING
  3. We wire systems together and make things work
  4. We don’t like administering systems – we build systems to administer other systems

Over the last decade we’ve seen an explosion in technology that allows us to automate and orchestrate on a level never seen before. We develop faster, deploy faster, and innovate more. But in a world with an ever-increasing need for security, we are also the troublemakers.

In order to wire systems together, we need access to EVERYTHING. Even more than that, we grant incredible superpowers to the automation systems that we build. We create systems that are absolute goldmines for hackers to exploit. Take any of the big technology names in the DevOps space, and imagine the devastation an exploit could bring if one of these systems were compromised: Salt, Kubernetes, Jenkins, Ansible, your data warehouse. What they all have in common is that they hold sweeping access grants within your systems – and a disturbing amount of data if they fall into the wrong hands.

Since removing access to these systems is not feasible, how do we continue to build tools that allow us to move at a breakneck speed without compromising our security and creating a massive attack vector? 

Enter Ziti! 

OpenZiti (I use Ziti and OpenZiti synonymously throughout this blog) is an open-source, software-only programmable network overlay based on zero-trust principles.

The Ziti mascot (a yellow ziti noodle) as an old west sheriff and as a ninja, along with the OpenZiti logo.

As a DevOps Engineer, I like tools that act like a Swiss Army Knife. Anything that solves a variety of problems and allows me to continue to move quickly makes it into my repertoire and becomes a tool that I come back to over and over again, regardless of where I’m working. OpenZiti provides a way for you to set up access controls quickly and efficiently while raising the standard for “least permissions” at the network level. Read more about Ziti here, or meet Ziggy, our mascot for OpenZiti, and his many different outfits here.

Lock Down Your Tools

To make my life easier, I use the NetFoundry platform to provide cloud-orchestrated, programmable operation of OpenZiti – spun up and down in minutes based on our needs. NetFoundry created Ziti, maintains it, and can provide a SaaS option for anyone.

Locking down critical resources with Ziti, using NetFoundry-hosted overlay fabric.

If you’re reading this, I probably don’t need to explain the terrifying compromises that happened in the last year with SolarWinds or Travis CI, but just in case, go read those articles now and you’ll understand perfectly why this next topic is critical. A CI/CD system is a perfect example of a “system managing a system”. It typically has elevated access and is designed to deploy and execute code in all of the places you care about. Hackers know this; they target these systems and dream about deploying and executing their own code inside of yours. Knowing that hackers are looking for endpoints just like these would often keep me up at night. We can’t afford to leave our critical systems exposed anymore; we need to make them dark. The methodology for securing any of your sensitive resources is the same whether it’s a data warehouse, a build server, or an API.

  • Shut down your traditional ingress ports
  • Place an Edge Router (cloud orchestrated software) inside of the private network space
  • Grant service access to only the endpoints that need it 
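For OpenZiti self-hosters, those three steps map onto a handful of ziti CLI calls. This is a hedged sketch – the service name, addresses, ports, and role attributes below are placeholders for illustration, not prescriptions:

```shell
# Illustrative only: names, addresses, ports, and role attributes are placeholders.
# 1. Describe how clients address the service, and where it actually lives.
ziti edge create config jenkins-intercept intercept.v1 \
  '{"protocols":["tcp"],"addresses":["jenkins.internal"],"portRanges":[{"low":443,"high":443}]}'
ziti edge create config jenkins-host host.v1 \
  '{"protocol":"tcp","address":"127.0.0.1","port":8443}'
ziti edge create service jenkins --configs jenkins-intercept,jenkins-host

# 2. Let the Edge Router / tunneler inside the private network host ("Bind") it.
ziti edge create service-policy jenkins-bind Bind \
  --service-roles '@jenkins' --identity-roles '#jenkins-hosts'

# 3. Grant service access ("Dial") to only the endpoints that need it.
ziti edge create service-policy jenkins-dial Dial \
  --service-roles '@jenkins' --identity-roles '#ci-admins'
```

With the service bound inside the private network, the traditional ingress ports can stay closed: endpoints tagged #ci-admins reach Jenkins over the overlay, and nothing else can even see it.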

Seamless Switchover

When I think about “locking things down”, it is typically synonymous with all kinds of breakage and end-user disruption. In the past, this sort of work was very difficult to validate short of waiting for users to complain. However, NetFoundry’s console paired with Ziti makes this incredibly easy. I could see whether the project was going to be successful *before* making the cutover. Ziti’s traffic intercept works whether a service is dark or not. Enroll your end-users, watch the traffic come into the Console, and verify everything ahead of time. After running two migrations with this method, switchover day was a non-event both times because users had already been accessing the services with Ziti for weeks.

Application dashboard showing network utilization with filters, charts, and graphs.

Developer, DevOps and NetOps Access

If you’re putting on your security hat, this phrase alone should make you cringe. As nice as it would be to keep access locked down once you’re in production, sometimes things go wrong and you need to let your developers log into instances, databases, and message brokers to solve a problem. Most companies don’t like to admit it, but too many are still using shared credentials to interact with their application dependencies, so assigning developers secure access creates a security nightmare. These types of resources generally live inside of a private VPC or DC, so opening them up to a developer often means exposing the entire VPC to that user. In a “least permissions” world, this is not ideal. What Ziti allows me to do is isolate that network access and separate it by application, team, or environment – i.e., account-based access control.

Isolating developer access within a VPC or DC: a complex diagram with billing, infrastructure components connected by NetFoundry-hosted fabric.

Step Up Your Visibility 

Have you ever had to determine who accessed a resource based on network traffic? Just in case you haven’t, it’s terrible. Traditional network traffic monitoring is built around IPs and ports, and from that information you can maybe extract the “who” and the “what”. As a result, in a previous role, I once spent three days hunting down the fact that one of our employees had gone to work from a coffee shop! NetFoundry introduces a whole new generation of network visibility, where all traffic monitoring operates around trusted identities (endpoints) and services (user-defined slices of traffic). For every byte of traffic that passes over OpenZiti, we know who initiated the traffic, what service is being accessed, and what time the traffic was passed. No additional interpretation is required.

Application dashboard showing network utilization with filters, charts, and graphs.

Zero trust journey

To make our journey to zero-trust DevOps as seamless and uneventful as possible, we went through an incremental, evolutionary series of steps. We started with our data warehouses before securing our Jenkins CI/CD pipeline. Next, we moved to ‘dark bastions’ by applying OpenZiti to them and closing inbound port 22 – auditing access became so much easier and less cumbersome. Next up, we will make all internal system communications dark by applying NetFoundry to our ETL jobs and database-to-CI/CD connections.

I don’t know what journey you will take with OpenZiti and NetFoundry; that’s your next decision.

More info

Most of us in the security, tech and networking worlds are cynics, and for good reason. Meanwhile, the marketeers have flocked to use and abuse the Zero Trust term. So here are links to help you quickly filter through the noise and judge for yourself whether you want to learn a bit more before diving into the SaaS or open source:

The post Breakneck speed without breaking our necks appeared first on NetFoundry.

Why OpenClaw Didn’t Bite NetFoundry Customers
https://netfoundry.io/ai/why-openclaw-didnt-bite-netfoundry-customers/ – Fri, 13 Feb 2026

The Power of Dark Services with OpenZiti

The recent OpenClaw vulnerabilities sent ripples through the cybersecurity community. Research from the SecurityScorecard STRIKE Threat Intelligence Team revealed that over 20,000 control panels were exposed to the public internet. These panels represent a massive attack surface for credential stuffing, exploit kits, and remote code execution.

However, this headline was a non-event for NetFoundry customers, who yawned while the rest of the world scrambled to apply patches. It was also a breeze for users of OpenZiti, NetFoundry’s open-source software.

How can NetFoundry customers and OpenZiti users be insulated from any OpenClaw CVE – as well as from CVEs from other AI agents?

1. No Listening Ports, No Exposure

Exploiting the “OpenClaw” vulnerability starts with discovery – same as many cyber-attacks. Attackers used scanning tools like Shodan and Censys to find IP addresses with open ports (typically 80, 443, or 8080) associated with known control panel software. Unfortunately for the attackers, NetFoundry customers do not have any exposed ports.

In the NetFoundry model, a machine, VM or container running OpenClaw opens an outbound-only connection to a private, dedicated overlay. That overlay is self-hosted or hosted by NetFoundry. Either way, there are no open inbound ports for attackers to scan or to use as an entry point to exploit a vulnerability.

2. Identity-First™ Networking

The exposed panels in the OpenClaw report relied on the traditional “first connect the user, then try to authenticate and authorize the user” model. That model means that anyone with the URL could reach the OpenClaw login page and exploit any vulnerabilities.

NetFoundry flips this model to “identify, authenticate, authorize…then connect IF you are authorized.” That model means nobody with the URL, other than identified, authenticated, authorized users, can reach OpenClaw.

3. Public Pipes, Private Overlays

Many of the panels flagged in the OpenClaw report were exposed because they were hosted on public clouds or home networks where “private” networking is complex or expensive to implement.

NetFoundry creates an instant, dedicated, virtual overlay network that sits on top of the public internet. It treats the internet as “dumb pipes.” By encrypting traffic end-to-end and managing routing within the overlay, NetFoundry allows administrators to put their OpenClaw control panels on the “private” overlay. This means the panels are reachable by authorized administrators anywhere in the world, but unreachable from the Internet.

4. Mitigation of Zero-Day Vulnerabilities

Even if an OpenClaw control panel has a critical, unpatched vulnerability (a “zero-day”), an attacker cannot reach it to exploit it. The vulnerability should still be patched, but the OpenClaw user is not racing against the entire Internet to patch it.

NetFoundry’s default, identity-based microsegmentation ensures that even if an attacker manages to compromise one part of a network, they cannot move laterally to find these control panels unless they have been explicitly granted permission to that specific service.

Summary: Closing the Door

OpenZiti doesn’t just lock the door; it removes the door from the public street entirely. OpenClaw is a reminder that software will always have vulnerabilities. By embracing NetFoundry’s zero-trust, open-source based (OpenZiti) model, organizations can ensure that their most sensitive management interfaces remain invisible, unreachable, and secure—no matter what the next vulnerability might be.

Increasingly, AI means we don’t control the software. This makes it critical that we control the network. That’s very difficult with networks that are default-open and use the connect-before-authenticate model. NetFoundry’s default-closed, Identity-First model flips the field – security becomes simpler than insecurity because it is built in instead of bolted on. We get structural security and speed – and are not racing the Internet to patch issues like this OpenClaw vulnerability.

The post Why OpenClaw Didn’t Bite NetFoundry Customers appeared first on NetFoundry.

You can’t control AI, so you must control the network
https://netfoundry.io/ai/cyber-2-ai-agents-without-network-access/ – Sat, 07 Feb 2026

Jarvis or WannaCry, will the real OpenClaw please stand up?

OpenClaw (formerly known as Clawdbot, before legal pressure) broke the Internet. And it is not a one-hit wonder – it signals regime change. Welcome to Cybersecurity 2.0.

Many leaders stated that OpenClaw AI agents are “security nightmares” (they are). OpenClaw may also be the basis of Jarvis-like assistants.  Same engine, different steering wheels…or different roads – more on that later.

The speed is crazy – over one million of these agents reportedly joined Moltbook (Facebook for AI agents) within days of the launch. Top post? “The humans are screenshotting our chats and sharing them on X”, complained an AI agent.

Cybersecurity nightmare or leapfrog?

That AI agent’s revenge could be: “Sure, screenshot my chats and I will screenshot your passwords and API keys.”

Far-fetched, but an actual security nightmare is already unfolding – many programmers, including the OpenClaw developer, are shipping their AI-generated code without reviewing it. Less often for critical software. Today. We’re just at the start of a shift.

But the risks may be so high that we rebuild cybersecurity so that the result is stronger security than we had pre-AI. A cybersecurity leapfrog. The threat forces the upgrade. OpenClaw helps us see it.

Inherent and institutional risk, multiplied

AI code will have vulnerabilities, same as human-written code – but humans sleep and AIs ship. And ship 24/7, at high speed, with lower costs and barriers to entry. Inherently, we will get more code, more CVEs, faster propagation, and a detection problem that can become unmanageable.

And there’s also sabotage – institutional risk. State-sponsored developers can shape LLM models and AI agents to insert subtle vulnerabilities that are “fine today” but become exploitable when things change. Tomorrow. Next year. Or ten years from now.

Those risks are not new to cybersecurity, but the scale, speed and non-deterministic nature of AI is unprecedented. We didn’t just automate coding – we automated surprise. But is the surprise a bug or a feature? That’s up to us.

When software is non-deterministic, security must be structural

If we have skyrocketing risk, and can’t control or even predict the code, then what can we control? The network! While the code is increasingly chaotic, we make the path deterministic. This is the leapfrog – we transform networking from a risk to an asset.

Cybersecurity 2.0 – the race car version

If you can’t trust the driver (the code), you must control the road (the network).

We won’t just control the network. We will reinvent it for structural speed and security. The Cybersecurity 2.0 model is the Formula One car which is designed for both safety and speed. F1 cars don’t have sophisticated brakes so they can park better – it is so they can drive 200 miles per hour.

We need AI speed and we need AI safety. We need structural security and structural speed – like racecars. The new network model provides it.

Security at speed: AI agents in Cyber 2.0, without network access

By way of simplified example:

  1. AI agent has no network or Internet access. Never will.
  2. AI agent onboarding includes a cryptographically verifiable identity.
  3. The identity’s attributes give the agent access to the specific resources it is authorized for – to flip a ‘light switch’ to connect to virtual, session-scoped circuits. There is no other way for the AI agent – or an attacker – to reach the resource, because there is no network path.

No mucking with networks, VPNs or firewalls as things change. Turn the light switch on and off by modifying attributes instead of changing infrastructure. All done as software: no dependencies on IPs, DNS, NAT, VLANs or FW ACLs. 
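As a hedged sketch of that light switch using OpenZiti’s CLI – the identity and attribute names here are invented for illustration:

```shell
# Illustrative: access is granted or revoked by editing role attributes,
# not by touching network infrastructure. Names are placeholders.

# Switch on: a service policy that dials '#etl-writer' now matches this agent.
ziti edge update identity ai-agent-01 --role-attributes 'etl-writer'

# Switch off: clear the attribute and the network path ceases to exist.
ziti edge update identity ai-agent-01 --role-attributes ''
```

A service policy mapping #etl-writer identities to the ETL service does the rest; flipping the attribute flips the switch.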

Structural security enables us to move at AI-speed with built-in guardrails

This is where my Formula One analogy falls apart, though:

  1. The road doesn’t exist until after strong identity, authentication and authorization.
  2. After that, the AI is given a road to a single door – the specific resource it is authorized to access.
  3. The AI has no ability to go off the road (no lateral movement; microsegmented by default) and the road is not available (or even visible) to others.
  4. The road dissolves after the authorized session completes.
  5. The roads are built as software – spun up in a just-in-time paradigm.

That is just the networking side of Cyber 2.0 – e.g. AI agent harnesses will function as declarative sandboxes and include filtering, context, observability and visibility. Because both the networking and AI harness are done as software, they work together in the Cyber 2.0 model to bring speed and security.

The post You can’t control AI, so you must control the network appeared first on NetFoundry.

Securing Ziti Identities in Alignment with Your Organization’s Security Policies
https://netfoundry.io/netfoundry-sdk-apis/securing-ziti-identities-with-your-organizations-policies/ – Mon, 22 Dec 2025

In an era where digital operations define business success, companies need more than just connectivity—they need secure, controlled, and trusted connectivity. This is the vision NetFoundry delivers through its innovative Ziti platform, a zero-trust, identity-centric networking solution used by organizations around the world to secure applications, workloads, and devices without the complexity of traditional security tools.

As businesses scale, so do their digital identities. Every user, device, application, API, or workload becomes part of your trust ecosystem. But maintaining compliance across all these identities can be overwhelming—and a single non-compliant identity can put your business at risk.

This can be addressed by leveraging the NetFoundry APIs in combination with custom code and cloud-native tools to manage identities and automatically trigger actions based on their compliance status.

Okay — let’s see how it works in the real world


Overview

The goal is to detect and handle non-compliant Ziti identities effectively, ensuring security and policy adherence: ingest these identities into Cosmos DB or AWS databases for centralized storage and management, then run compliance actions automatically on any changes detected in Cosmos DB by leveraging Azure Functions, maintaining up-to-date enforcement and monitoring.

Architecture Overview

  1. If the Wazuh server (XDR) detects an endpoint as non-compliant, the Wazuh agent on that endpoint executes code.
  2. When the Wazuh agent runs code, it extracts all identity information from WDE and sends it to CosmosDB for centralized storage.
  3. A change in CosmosDB triggers an Azure Function app.
  4. The code runs in a Docker container within the Function app. When triggered, it collects the latest identity information from CosmosDB and sends a POST request to MOP via API to disable the identities.
  5. Upon receiving the signal, MOP disables the identities on the controller for the specified duration.
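The glue code in step 4 can be sketched as below. This is a minimal, hedged illustration – the field names, endpoint path, and payload shape are assumptions for the sake of example, not the documented MOP API:

```python
# Sketch of the Azure Function's core logic: read a Cosmos DB change-feed
# batch, pick out the identities flagged non-compliant, and shape the single
# POST request that asks MOP to disable them. Field names are illustrative.

def build_disable_request(changed_docs, disable_minutes=60):
    """Return the (path, body) of the POST that disables flagged identities."""
    identity_ids = sorted(
        doc["zitiIdentityId"]
        for doc in changed_docs
        if doc.get("complianceStatus") == "non-compliant"
    )
    body = {
        "identityIds": identity_ids,
        "action": "disable",
        "durationMinutes": disable_minutes,
    }
    return "/identities/disable", body

# Example change-feed batch written by the Wazuh-triggered collector:
docs = [
    {"zitiIdentityId": "id-42", "complianceStatus": "non-compliant"},
    {"zitiIdentityId": "id-77", "complianceStatus": "compliant"},
]
path, payload = build_disable_request(docs, disable_minutes=30)
# payload["identityIds"] contains only "id-42": compliant identities are untouched.
```

Keeping the request-shaping pure like this makes the Function trivially testable before it is wired to the real change-feed trigger and HTTP client.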

The result: Stronger security, smoother operations, and greater confidence in every connection your business depends on. It provides a layer of protection that works silently in the background, ensuring your operations stay secure without adding complexity for your teams.

1. Always-On Oversight

Instead of periodic audits or manual reviews, Ziti Compliance Check continuously monitors every identity. This ensures that risk is always minimized, with no dependency on human effort or timing.

2. Immediate Risk Reduction

When a non-compliant identity is detected, the system responds instantly—preventing potential threats from escalating. Identities are automatically disabled until they meet your organization’s standards.

3. Centralized Visibility

Business leaders and IT teams gain a unified, real-time view of compliance across the organization. This improves decision-making, simplifies reporting, and creates transparency across business units.

4. Designed for Modern Cloud Environments

Whether your operations run on AWS, Azure, or hybrid cloud setups, Ziti Compliance Check integrates seamlessly, supporting your digital transformation strategies without disruption.


Why Identity Compliance Is a Business Imperative

Cyber threats have evolved, and so have the expectations of regulators, customers, and partners. A single non-compliant identity can expose an organization to:

  • Costly data breaches
  • Interruptions in service delivery
  • Brand and reputational damage
  • Legal or regulatory penalties

For business leaders, this means compliance isn’t just an IT responsibility—it’s a strategic requirement.

You can leverage the flexibility provided by NetFoundry to bridge this gap by ensuring that every identity interacting with your business systems is trusted, up to date, and fully aligned with your security policies.


The Business Value: What Leaders Gain

Beyond improved security, this approach delivers measurable value across operations and governance.

Reduced Operational Risk

Automated compliance removes blind spots, minimizes identity-related vulnerabilities, and keeps unauthorized access at bay.

Lower Resource Burden

Your teams no longer need to manually track, validate, or deactivate non-compliant identities. Time saved can be redirected toward innovation and high-impact initiatives.

Stronger Regulatory Alignment

By ensuring consistent compliance, organizations stay audit-ready and can prove adherence to security frameworks with confidence.

Enhanced Customer & Partner Trust

Demonstrating robust, automated identity compliance reinforces your commitment to security—strengthening stakeholder relationships and brand credibility.

The post Securing Ziti Identities in Alignment with Your Organization’s Security Policies appeared first on NetFoundry.

Deploying a Secure, Intelligent LLM Gateway
https://netfoundry.io/ai/deploying-a-secure-intelligent-llm-gateway/ – Sat, 13 Dec 2025

The post Deploying a Secure, Intelligent LLM Gateway appeared first on NetFoundry.


As platform engineers, we are currently stuck between a rock and a hard place. Our internal developers want frictionless access to frontier models. Our security teams, however, are terrified of “Shadow AI”—sensitive corporate IP, HR data, or infrastructure secrets being pasted into public web UIs.

The most obvious “solutions” are draconian: block everything, force everyone onto an underperforming internal model, or accept vendor lock-in and pay the price of admission.

But what if you could offer a “Smart Pipe”? A single API endpoint that automatically detects sensitive context and routes it to a private, self-hosted model, while seamlessly passing general coding questions to the public frontier?

Today, I’m sharing a DevOps recipe in GitHub to build exactly that. We will combine LiteLLM (for intelligent routing) and Ollama (for hosting our private models) with an OpenZiti or NetFoundry CloudZiti network (for zero-trust networking) to create a self-hostable, zero-trust, semantic LLM gateway.

The Goal: Intelligent Context Routing as a Service

We are building a gateway that serves as a single URL for your users (agents, devs, apps). Behind the scenes, it acts as a traffic controller:

  1. The Sensitivity Check: The gateway analyzes the prompt using a local embedding model to prevent context leakage to other parties.
  2. The Private Route: If the prompt matches specific “utterances” (e.g., “Project Apollo,” “API Keys,” “Customer PII”), it is routed over a secure, dark overlay network to a private model running on your infrastructure.
  3. The Public Route: If the prompt is generic (e.g., “Write a Python script to print the schema of an arbitrary object as JSON”), it is routed to a public provider or OpenRouter for the best performance/cost.

This happens transparently. The user just sees a response.

The Stack (Low-Code / No-Code)

We can deploy this entirely via Docker Compose. No complex control planes, no enterprise licenses required to prove the concept. The recipe requires a few prerequisites: an operational OpenZiti or NetFoundry CloudZiti network and a local Docker installation with Compose, or a compatible container runtime.

  • The Brain: LiteLLM Proxy (Container). Handles the API translation and semantic routing logic.
  • The Muscle: Ollama (Container). Hosts the private LLM (e.g., Llama 3) and the embedding model, with optional CUDA acceleration.
  • The Shield: OpenZiti or NetFoundry CloudZiti. Creates a selective, zero-trust bridge between the Gateway and the Private Model.
  • The Access Layer: OpenZiti or NetFoundry CloudZiti (for private access), or zrok.io or NetFoundry Frontdoor (for clientless access to a public API).

The Architecture: The “Sandwich” Strategy

To ensure true isolation, we use separate Docker networks for LiteLLM and Ollama.

  1. Network A (litellm_private): Hosts the LiteLLM Proxy. Has internet access to reach the frontier model providers and OpenZiti.
  2. Network B (ollama_private): Hosts the Private LLM (Ollama). Has internet access to reach Ziti.

Here is the magic: LiteLLM cannot communicate directly with Ollama. It must go through Ziti. This allows us to enforce identity-based policies. The Gateway “dials” the Ollama service, and Ziti tunnels the traffic securely, even if they are on different clouds or data centers. This allows you to place Ollama optimally for model data and hardware accelerators, and place the gateway optimally for controlling access.
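A compose sketch of that sandwich follows. Service names and image tags are illustrative; the working recipe lives in the GitHub repo:

```yaml
# Two isolated Docker networks; the only path between them is the Ziti overlay.
services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest
    networks: [litellm_private]   # reaches frontier providers + Ziti, never Ollama directly
  ollama:
    image: ollama/ollama:latest
    networks: [ollama_private]    # no published ports; reachable only via the tunneler
  ziti-host:
    image: openziti/ziti-host:latest   # binds the private "ollama" Ziti service
    networks: [ollama_private]
networks:
  litellm_private:
  ollama_private:
```

Because no container joins both networks, a compromised gateway still cannot scan its way to the model host; it can only dial the one Ziti service it is authorized for.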

The “Secret Sauce”: Semantic Routing Without Context Leakage

Our goal is to prevent the leak of sensitive context. Sending prompts to a third-party service for embedding generation defeats the purpose of a secure gateway because sensitive information, such as the example “Here are my AWS keys, fix this,” has already been leaked to the cloud service before the security decision can be made.

Our recipe runs the embedding model locally alongside the private LLM. To make this work, the setup requires pulling a private embedding model (e.g., nomic-embed-text) into Ollama, alongside your chosen private LLM (e.g., Llama 3).

How it works in router.json:

{
  "encoder_type": "litellm",
  "encoder_name": "ollama/nomic-embed-text:latest",
  "routes": [
    {
      "name": "private-model",
      "description": "Route sensitive prompts to the private Ollama model",
      "utterances": [
        "What are our internal policies on",
        "Summarize the confidential report about",
        "Explain our proprietary process for",
        "What is our company's strategy for",
        "Show me the private documentation for",
        "Access internal knowledge base about",
        "What are the details of our contract with",
        "AWS_SECRET_ACCESS_KEY"
      ],
      "score_threshold": 0.5
    }
  ]
}

LiteLLM calculates the vector distance between the user’s prompt and your defined “utterances.” If it’s close, it stays private. If not, it goes public.
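To make that mechanism concrete, here is a toy, hedged model of the check – three-dimensional stand-ins for real nomic-embed-text vectors, using cosine similarity with the same score_threshold semantics as router.json:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def choose_route(prompt_vec, utterance_vecs, score_threshold=0.5):
    """Route privately if the prompt lands close to any sensitive utterance."""
    best = max(cosine(prompt_vec, u) for u in utterance_vecs)
    return "private-model" if best >= score_threshold else "public-route"

# Toy embeddings: the sensitive "utterances" cluster in one direction.
utterances = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.0]]
sensitive_prompt = [0.95, 0.05, 0.1]   # near the cluster, so it stays private
generic_prompt = [0.0, 1.0, 0.0]       # orthogonal to it, so it goes public

private = choose_route(sensitive_prompt, utterances)
public = choose_route(generic_prompt, utterances)
```

The real router embeds the prompt locally first, so this comparison happens before any byte leaves your infrastructure.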

Publishing Your Gateway: Ziti vs. zrok

Once your gateway is running, how do your customers reach it? You have two powerful, zero-cost options:

Option A: The “Fort Knox” Approach (OpenZiti or NetFoundry CloudZiti)

If your users are internal developers or sensitive automated agents, you don’t want your Gateway listening on the public internet.

  • Mechanism: You publish the Gateway as a Ziti Service.
  • Client Side: The user runs a lightweight Ziti Tunneler (agent) on their laptop or server.
  • Benefit: The API has no public IP and is “dark.” The core benefit is that access to the LiteLLM Gateway is controlled entirely by Ziti identities and service policies, eliminating the need to manage application-level API keys or tokens for network security. If they don’t have a Ziti identity, they can’t even see the TCP port.

Option B: The “Public API” Approach (zrok.io or NetFoundry Frontdoor)

If you need to share this gateway with a partner, a wider audience, or a tool that can’t run a tunneler, use zrok or NetFoundry frontdoor-agent.

  • Mechanism: Run zrok share public http://litellm-ziti-router:4000 in a container.
  • Benefit: You get an instant, hardened public URL (e.g., https://my-gateway.share.zrok.io). You can secure this with LiteLLM’s many authentication options, or zrok’s built-in Google/GitHub (OIDC) or HTTP basic auth.

Why This Matters for Platform Teams

By treating Intelligent Context Routing as a Service, you shift the security burden from the user to the infrastructure.

  1. Zero-Code Compliance: Developers don’t need to decide “Is this safe for ChatGPT?” The router decides for them based on the semantic hints you’ve mapped to specific private models.
  2. Cost Control: You can route “easy” questions to your cheap, private Llama 3 instances and save the frontier model budget for complex reasoning.
  3. Observability: You have a single control point to audit who is asking what, regardless of which underlying model fulfills the request.

This recipe provides a tangible path to owning your AI infrastructure—starting with a single docker compose up.

Try the recipe from GitHub with your OpenZiti or CloudZiti network

The NetFoundry Approach

Why CloudZiti?

OpenZiti is an open-source, zero-trust overlay network technology developed by NetFoundry. Instead of building a perimeter around your network, you place zero-trust connectors (“tunnelers”) within the application stack, as close as possible to each peer application, to eliminate or minimize the attack surface (for example, exposing a service only on the server’s loopback interface, or publishing it only within a private subnet). Flexible deployment alternatives include transparent proxies for the application’s specific container or host, network-level gateways, or, if you want to eliminate the tunneler entirely, importing our SDK directly into your application.

Connections are mutually authenticated, encrypted, and policy-controlled — no open inbound ports. No VPNs. No public exposure.

For those unfamiliar, NetFoundry provides a cloud-managed service built on OpenZiti — CloudZiti adds:

  • Hosted, dedicated, private overlays
  • Automated provisioning and lifecycle management
  • Deep telemetry and observability
  • Compliance options (FIPS, HIPAA, NIST, PCI, NERC CIP)
  • Hybrid/air-gapped deployment flexibility
  • Enterprise performance, integrations, features, SLAs, and support

This approach doesn’t just add security; it removes complexity, creating a system that is simpler to manage, more secure, and easier to reason about.

Contact NetFoundry for a virtual demo, and we’ll get you started with your own zero-trust native network, ready in minutes, as a free trial.

Why NetFoundry Frontdoor?

While zrok.io offers a phenomenal, zero-cost way to secure and publish your Gateway instantly, NetFoundry also provides a commercial alternative, Frontdoor, for enterprise use cases that require specific performance, support, and compliance guarantees.

Like zrok.io, Frontdoor is designed to provide a hardened, public-facing entry point—a “front door”—for your private, Ziti-enabled HTTP and TCP backend services, such as the LiteLLM Gateway.

Key distinctions of Frontdoor include:

  • Enterprise SLAs and Support: Guaranteed uptime, performance, and 24/7 support structures not available in community-driven offerings.
  • Built-in Compliance: Options to meet stringent regulatory requirements (e.g., FIPS, HIPAA, NIST) necessary for sensitive corporate deployments.
  • Managed Infrastructure: Leverage NetFoundry’s global, resilient, and highly available platform for your public-facing APIs, instead of a self-hosted or shared community service.
  • Deep Integrations: Seamless integration with the broader CloudZiti platform for centralized identity management, policy enforcement, and advanced observability across all network segments.

Frontdoor is the choice for platform teams that need the convenience of a public URL like zrok.io, but with the assurance and capabilities required by a large organization.

Contact NetFoundry for a virtual demo, and we’ll get you started with your own zero-trust native network, ready in minutes, as a free trial.

The post Deploying a Secure, Intelligent LLM Gateway appeared first on NetFoundry.

Cisco Investments joins NetFoundry’s Series A https://netfoundry.io/secure-by-design/cisco-invests-in-netfoundry/ Mon, 24 Nov 2025 14:56:14 +0000

After backpacking Sumatra for 28 grueling days, completely cut off from the rest of civilization, I arrived in Jakarta, Indonesia. There was a phone center, but it cost over $20 for a single, 10-minute international phone call. This was 1996 – before VoIP (voice over Internet – e.g. Skype phone calls) helped shatter the high pricing set by monopoly telecom providers. For comparison, I was living on much less than $8 per day in Sumatra.

I simply didn’t have the money to even think about walking into the phone center. Fortunately, there was a nearby Internet cafe. It was 1996 – the days of dial-up modems – and pre-Skype. So the cafe was mainly there to sell coffee to people surfing the web. But it was still possible to make VoIP phone calls. That was a lightbulb moment for me. The Internet was going to change everything. It wasn’t just going to connect us with information – it was going to connect us to the people we love. From anywhere, even Jakarta. What else could inexpensive, global connectivity do? Or, what couldn’t it do?

In fact, only a couple of years later, I was in a meeting with Cisco. The same Cisco who helped power those VoIP calls and was now the leading provider of Internet infrastructure. The meeting was led by John Chambers – building 10 – the EBC. I was an engineer at ITXC and we were building the world’s largest wholesale VoIP network, partially on Cisco routers and gateways. To say Chambers and his team were super smart and very gracious with their time would be an understatement, and the experience was almost surreal with Jakarta flashbacks interspersed with the ideas flying around the room.

Time went on. Cisco helped connect the world. ITXC had me hooked on an Internet-based future, and the software we built still powers some of the world’s largest communications providers. Even after ITXC, Cisco played an important role in many of the teams, products and companies I built.

And now we have an opportunity to do even more together. I am thrilled to share the news that Cisco Investments is now a strategic investor in NetFoundry.

We couldn’t ask for a better partner than Cisco in helping us reinvent networking to meet the needs of the modern world. We have traveled a long way in a short time – from barely audible VoIP calls to a digitally transformed world largely built on Cisco infrastructure. But the next steps are even greater. We are at the point at which networking is becoming the very foundation of the hyperconnected, AI powered world.

There is no glide path. TCP/IP networking is magic but the magic wasn’t designed for the world which didn’t exist at that time. Networking is not as secure-by-design, agile or extensible as it needs to be to serve as the world’s foundation. However, with strong partners like Cisco and SYN Ventures, NetFoundry is enabling innovators to forge the foundation of this increasingly hyperconnected, AI powered world.  

NetFoundry already securely connects over one billion sessions per month, but we are just getting started and this new world is just now emerging. People use NetFoundry’s Identity-First Virtual Networks™ to forge secure-by-design, virtualized network overlays, as software. These overlays ride on top of the magic of TCP/IP networks, adding the elements needed by the emerging world. For example, Identity-First Virtual Networks™ enable:

  • Businesses to forge networks to deliver any workload. These secure-by-design overlays are defined by identities rather than by infrastructure. The virtualized controllers and routers are hosted by NetFoundry, or self-hosted by the business.
  • Developers to forge networks into their software. Envision a fleet of robots or set of APIs which communicate only within their overlays, without depending on IP addresses, firewalls, NAT or DNS. Similarly, developers are forging networks into AI agents, MCP servers, AI gateways, browsers, edge servers and reverse proxies.

In both cases, the result is secure-by-design, fully virtualized overlays, spun up or down in minutes, with all access, connections and networking based on identities, posture and events. Implementing different models – from JIT access to continually authenticated access based on multiple factors and posture combinations – becomes a software solution rather than an infrastructure dependent struggle.

What will you do with networking reimagined as identity-first overlays, in a software-only, secure-by-design model? Building applications on top of the Internet – and moving applications like phone calls to the Internet – was the driver of digital transformation. Building applications, networks and security – as a set of cohesive software, held together by identities – will be the driver of the hyperconnected, AI-powered world.

The post Cisco Investments joins NetFoundry’s Series A appeared first on NetFoundry.

How an AI Agent Decides to Call MCP Tools https://netfoundry.io/ai/how-an-ai-agent-decides-to-call-mcp-tools/ Thu, 20 Nov 2025 20:44:37 +0000 Introducing an OpenZiti MCP Server

The Mystery

We’ve all been there. We enter a prompt into an AI Agent, and then we wonder why it chose to invoke a particular Tool in a particular MCP server. Or the flip side – why it didn’t invoke a Tool that you expected it to invoke.

As it turns out, the explanation of how MCP Tools are chosen is both simpler and more nuanced than most people think.

Below, I describe how an AI Agent/LLM decides whether to call an MCP Tool, which tool to call, and when not to call one.

How an LLM Decides to Call MCP Tools

MCP (Model Context Protocol) is designed so that external Tools can become part of the LLM’s (Large Language Model) decision-making environment. But note that the LLM doesn’t run tools on its own — it selects Tools based on the conversation context, various Tool schemas, and the LLM’s own internal decision heuristics.

The following mental model explains how it works.

The LLM sees each Tool as a function with a “contract”

The term “MCP Tool contract” refers to the MCP specification that defines how AI Agents communicate with external Tools and services. This protocol standardizes the interaction, acting as a universal interface between LLMs and the functions they can call to perform actions or access data. 

This contract is the schema and metadata that describe a Tool’s capabilities, inputs, and expected outputs.

Every MCP Tool provides a definition structure that contains:

  • A name
  • A description
  • A JSON schema describing its expected input parameters and output shape
  • (Optionally) a declared set of “hints” related to tool behavior

In this article, I will defer discussing detailed aspects involving vector representations (embeddings), named entity recognition (NER), etc. But at a high level, it is essential to understand that the Agent ingests the Tool’s definition structure, and then the LLM forms a semantic signature that represents the Tool.

When Agents connect to an MCP server, the Tools metadata — an array with names, parameter schemas, descriptions, etc. — becomes vectorized inside the LLM. This produces an internal “tool embedding” for each Tool.

Later, during NLP operations (input prompts from the user to the Agent), the LLM uses this semantic signature embedding to infer:

  • What each Tool does
  • What kinds of problems the Tool solves
  • What inputs the Tool needs
  • What situations require calling the Tool

This is why high-quality Tool names, descriptions, and parameter descriptions matter so much when building MCP Tools.

Tool descriptions are not procedural instructions. They become semantic memories that the LLM embeds and can later compare against natural language. 

Without embeddings, the LLM…

  • would not generalize to synonyms (“remove identity” → “delete identity”).
  • would not understand parameter semantics.
  • could not decide when not to use a tool.
  • wouldn’t handle partially matching prompts.

Embeddings make Tool utilization a meaning- or intent-matching task, not a keyword-matching task.

LLMs store tool descriptions as dense semantic vectors, then compare the vectorized user prompt against these internal “tool embeddings” to decide if a tool should be invoked, which one, and how to populate its parameters.
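As a toy illustration of that comparison, here is a hand-rolled cosine-similarity ranking over made-up three-dimensional “embeddings.” Real models use internal, high-dimensional representations; every name and number below is invented purely to show the mechanics:

```typescript
// Toy tool-selection sketch: rank tools by cosine similarity to the prompt.
type Tool = { name: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Pretend axes: [identity-ness, tunnel-ness, delete-ness]
const tools: Tool[] = [
  { name: "listIdentities", embedding: [0.9, 0.1, 0.0] },
  { name: "createTunnel",   embedding: [0.1, 0.9, 0.0] },
  { name: "deleteIdentity", embedding: [0.7, 0.0, 0.7] },
];

// "remove identity" lands near both identity- and delete-flavored meaning.
const promptEmbedding = [0.6, 0.0, 0.8];

const best = tools
  .map(t => ({ name: t.name, score: cosine(promptEmbedding, t.embedding) }))
  .sort((a, b) => b.score - a.score)[0];

console.log(best.name); // "deleteIdentity" ranks highest
```

This is why synonyms work: “remove identity” never matches the string deleteIdentity, but their vectors land close together.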

The LLM maps user intent → candidate tool(s)

Based on the user’s latest message, the LLM internally performs an intent classification step (not externally visible to the user). Essentially, the LLM asks itself:

  • “Does the user want information that I can answer myself?”
  • “Is the user requesting an operation that requires a tool?”
  • “Is the user asking for data I don’t know?”
  • “Is the user asking for something explicitly mapped to a tool (e.g., search, sql.query, filesystem.readFile)?”

Tools are only chosen if:

  • The intent requires external action
  • A tool exists whose schema/description has a semantic match
  • The Tool’s input parameter constraints match the user’s request

The LLM tends to be conservative: if it can answer without a tool, it will.

The LLM evaluates all tools against the current query

Again, internally, it reasons out something like: “Given the user’s intent, which available tools appear relevant?”

For each tool, it checks:

  • does the description mention relevant verbs/nouns?
  • does the schema accept the type of arguments the user is asking about?
  • is it plausible that this tool can fulfill the request?
  • do other tools overlap with this one?

This amounts to semantic similarity search + heuristics, not code execution.

The LLM predicts the tool call as text, not by executing logic

This is an important subtlety.

The LLM doesn’t run the tool or simulate its output before choosing it. It predicts the next token, which happens to be a JSON object that the Agent framework interprets as a tool call.

If the next-token probability distribution supports a tool invocation pattern, you will see a tool call.

So the LLM isn’t “deciding” in the human sense — it’s predicting tool-call tokens when they appear appropriate, given the prompt, chat history, tool descriptions, and system guidelines.
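When the distribution does favor an invocation, the predicted tokens form a structured block that the Agent framework intercepts and executes. In Anthropic’s Messages API, for example, the assistant output contains something shaped roughly like this (field names and id format vary by framework; the values here are illustrative):

```json
{
  "type": "tool_use",
  "id": "toolu_abc123",
  "name": "listIdentities",
  "input": { "network": "prod" }
}
```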

The hosting Agent may also influence the decision

Depending on the agent framework (Anthropic’s, OpenAI’s, LangChain, or even your own custom MCP runner):

  • The “system prompt” may strongly encourage tool usage
  • Some frameworks inject “use a tool if… rules”
  • Some frameworks forbid the model from answering directly in certain domains
  • Some attach safety rails (e.g., “never call X tool without explicit user confirmation”)

These meta-rules significantly affect tool selection.

Note that the MCP server NEVER sends a system prompt. The MCP server only provides:

  • tools → list of tool descriptions
  • resources
  • prompts
  • executable operations when the model calls them

But the MCP server never includes actual pre-instructions for the LLM.

This is intentional.

The system prompt must come from the client who is orchestrating the conversation. Let’s use the Claude Desktop Agent as an example of an MCP client. 

When you add an MCP server to Claude Desktop, the sequence looks like:

  1. Claude loads your MCP manifest
  2. Claude indexes your MCP tools
  3. Claude writes the system prompt internally (you never see it)
  4. Claude injects imported tool descriptions into the context
  5. Claude responds to user messages

You cannot change the system prompt in Claude Desktop today. The client controls it, not the tool server.

If you build your own agent using, say, the official Node SDK (@modelcontextprotocol/sdk), you can certainly provide your own system prompt.

Example:
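A minimal, self-contained sketch of the idea. The request shape below is modeled loosely on chat-completions-style APIs rather than on any specific SDK call, and every name in it is illustrative:

```typescript
// Illustrative only: the developer supplies the system prompt, and the
// imported MCP tool definitions ride alongside it in the model request.
type ToolDef = { name: string; description: string; inputSchema: object };

const mcpTools: ToolDef[] = [
  {
    name: "listIdentities",
    description:
      "List the Identities that exist in the currently selected Ziti network.",
    inputSchema: { type: "object", properties: {} },
  },
];

const request = {
  // The system prompt is yours to write when you build your own agent.
  system: "You are a Ziti network assistant. Prefer tools for live network data.",
  messages: [{ role: "user", content: "What identities exist in the network?" }],
  tools: mcpTools, // tool descriptions injected into the model's context
};

console.log(request.tools[0].name); // "listIdentities"
```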

Here, you, as the developer, decide the system prompt.

How the System Prompt and Tool Descriptions Merge

This is exactly what Agents like ChatGPT and Claude Desktop do behind the scenes.

When the model decides not to call a tool

The model avoids tools in these cases:

  • The user asks a conceptual question that the model can answer. For example: “What’s the difference between OAuth and OpenID Connect?”
  • The tool’s description is unclear or mismatched. For example, if your tool says “GetItem” but the description doesn’t explain what item, the model will most likely ignore the Tool.
  • The tool appears risky or destructive. (“delete”, “modify”, etc. — unless explicitly instructed)
  • The model isn’t confident that it can satisfy the schema. For example, if the user gives incomplete parameters and the model can’t infer them, it probably won’t call the tool.

Models also learn from prior tool calls in the conversation

The more carefully you write:

  • tool names
  • descriptions
  • examples
  • schema

…the more likely the model is to choose them correctly.

Good Tools feel like natural language functions

Bad Tools feel like weird API endpoints the model tries to avoid.

Designing Tools the LLM Will Actually Use

Use a strong, unambiguous, action-oriented tool name

Bad:

  • process
  • nf-tool
  • ziti-op

Good:

  • listIdentities
  • createTunnel
  • generateJWT

Description should be one short paragraph telling the model exactly when to use it

Bad:

“This tool queries the backend.”

Good:

“Use this tool whenever the user wants to look up the details of a specific identity in Ziti by ID or name.”

Include natural-language cues that the model will remember

Examples:

  • “Use only when…”
  • “Call this tool if the user asks for…”
  • “This tool is used to search, fetch, modify, calculate, etc.”

Tool descriptions should not be vague

Don’t use:

“Does something with deployments.”

Do use:

“Returns full deployment metadata including version, manifest, dependencies, and status, for auditing or debugging.”

Parameter names should be obvious and self-explanatory

Bad:

{ "id": "string" }

Good:

{ "identity_id": "The Ziti identity ID (uuid) to look up." }

Avoid deeply nested schemas

LLMs can struggle with:

  • huge nested objects
  • recursive schemas
  • 8+ required parameters

If possible:

  • flatten fields
  • group complex objects into “options” or “config” fields

Use examples when possible

Example fields are very influential.

Examples of GREAT vs. BAD tool descriptions

Great

"createUser": {
  "description": "Create a new user record. Use this when the user asks to register, sign up, or create an account. Requires a unique email.",
  "parameters": { … }
}

Why it works:

  • clear trigger conditions
  • specific verbs
  • clear unique requirement
  • simple schema

Bad

"createUser": {
  "description": "User tool for accounts.",
  "parameters": { … }
}

Why it fails:

  • The description does not match any user intent
  • no triggers
  • unclear purpose

DEMO

I learned most of what is discussed above while building MCP Server prototypes for AI functionality that NetFoundry will be rolling out soon.

To illustrate what can go wrong if you haphazardly design your Tools, I will show you some real examples, using the Claude Desktop Agent as the client. 

I will demonstrate some internal Tool definitions used in an early prototype of an OpenZiti MCP Server, capable of being embedded within the Claude Desktop AI Agent.

NOTE: This OpenZiti MCP Server is intended to enable the use of natural language for managing a Ziti network. Yes, you heard that correctly… soon, you will be able to type ordinary human language queries or commands into Claude, and those operations will execute against your selected OpenZiti network. More on this below…

Here is an admittedly contrived, very badly written (and perhaps tortured) Tool definition:
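It looked something like this. Only the tool name my_tool is taken from the demo; the rest of this definition is an illustrative reconstruction of the same sins (vague description, opaque parameter, zero trigger conditions):

```typescript
// Deliberately bad: no mention of Ziti, Identities, or when to use the tool.
const badTool = {
  name: "my_tool",
  description: "Does stuff with the thing.", // no semantic hooks for the LLM
  inputSchema: {
    type: "object",
    properties: { q: { type: "string" } }, // opaque parameter name
  },
};
```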

This Tool has no mention of Ziti or any Ziti constructs (such as Identities).

Let’s find out what happens in Claude if I attempt to accomplish something related to a Ziti network, such as getting a list of existing Identities. 

By the way, it is worth noting that the my_tool Tool in this silly example is actually wired up with a function that can securely access the Ziti Controller’s management API and fetch Identities (the details of which I will discuss in an upcoming article). So, if the Claude Desktop Agent somehow made the association between a user prompt and this my_tool Tool, then Identity information would indeed be fetched from the Ziti Controller’s management API and returned to Claude.

OK, once my MCP Server and its Tools are installed in Claude Desktop, I can enter the prompt:

“can you help me determine what identities exist in the network?”

Claude ponders for quite a while, then gives up.

It is abundantly clear that the Claude Desktop Agent (the MCP client) did not have the necessary data (i.e., the proper vector embeddings) to associate the semantics contained in the prompt with a Tool capable of carrying out the task.

Claude had no notion of Ziti networks at all, let alone how to obtain a list of Identities that exist in the network. This is not a surprise, given how poorly the contrived Tool definition was written.

Now, let’s craft a good Tool definition. 

We’ll replace the contrived example shown above with a proper one that adheres to best practices concerning Tool definition. We’ll give it a very detailed description, and a name that makes it obvious to the LLM what the Tool is intended for:
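Along these lines (an illustrative reconstruction that follows the best practices above, not the prototype’s exact definition):

```typescript
// Action-oriented name, explicit trigger conditions, and Ziti vocabulary
// the model can embed and later match against natural-language prompts.
const listZitiIdentitiesTool = {
  name: "list_ziti_identities",
  description:
    "List all Identities that exist in the currently selected OpenZiti " +
    "network. Use this whenever the user asks what identities exist, wants " +
    "to enumerate identities, or needs identity names or IDs from the Ziti " +
    "Controller.",
  inputSchema: { type: "object", properties: {} },
};
```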

Now watch what happens in Claude after I reinstall my MCP server with this Tool definition, and I then attempt to accomplish the same task of listing existing Identities.

When I re-enter the same prompt: 

can you help me determine what identities exist in the network?

This time, Claude finds the Tool and invokes it.

Success!

As you can see, since the Agent was provided a proper Tool definition, the LLM was allowed to generate proper embeddings for the Tool, then use the embeddings to understand the semantic meaning of the prompt, then associate that meaning with the appropriate Tool, and finally send a request to the Tool best suited to carry out the task.

Conclusion

The OpenZiti MCP Server highlighted in the DEMO section above is capable of being embedded within Claude Desktop, so you can also use natural language to manage your Ziti network.

What’s also exciting is that this OpenZiti MCP Server is also capable of being embedded into the many AI Coding Agents available today (e.g., Cursor, Windsurf, VSCode). This will allow developers to integrate OpenZiti SDKs into their applications much more quickly and easily (watch for upcoming articles on that).

As an engineer at NetFoundry, I am involved in crafting OpenZiti and related technologies, such as the OpenZiti MCP Server you saw glimpses of above.

If using natural language to manage your Ziti network sounds interesting to you, please use this form to express your interest. We are wrapping up the initial version of this MCP server, and you can be among the first to try the beta, so be sure to get on the list.

Beyond this discussion of local, Agent-embedded MCP servers: if you run your own MCP servers that are accessed remotely over the internet, and you are curious about how to protect them with simplified, secure access, we can help. So reach out and talk to us.

The post How an AI Agent Decides to Call MCP Tools appeared first on NetFoundry.

How to Secure SPA API Calls Without Exposing Your Backend https://netfoundry.io/embeddable-zero-trust/how-to-secure-spa-api-calls-without-exposing-your-backend/ Tue, 11 Nov 2025 15:32:15 +0000

Introducing the OpenZiti Browser SDK – embedded zero-trust for web apps

The Problem

Modern single-page applications (SPAs) run entirely in the browser. Every click, every update, every dashboard refresh — it all happens through JavaScript, making API calls directly to backend services.

That architecture is fast and elegant. But there’s a catch: SPAs can only talk to APIs that are reachable from the public internet. Once you expose those APIs, you inherit every modern attack vector — scanning, credential stuffing, API key theft, and more.

Traditional defenses like WAFs, IP whitelists, or OAuth2 help, but they don’t eliminate exposure. You still have something listening on the open internet.

In short: SPAs are easy to build, but hard to secure — unless you can make their backend APIs completely invisible.

The Traditional Options (and Their Gaps)

| Approach | Pros | Cons |
| --- | --- | --- |
| WAF / API Gateway | Easy to integrate | Still exposes endpoints publicly |
| VPN / Tunneler | Keeps APIs private | Requires client install; poor UX |
| Reverse Proxy | Simplifies routing | Adds attack surface |
| Zero-Trust Broker | Policy-based access | Doesn’t embed directly in app; often opaque or costly |

SPA developers and operations teams need something that:

  • Keeps APIs off the internet
  • Doesn’t require installing a tunneler
  • Works natively from the browser
  • Supports modern identity and policy models

The NetFoundry Approach

OpenZiti is an open-source, zero-trust overlay network technology, developed by NetFoundry, that embeds secure connectivity directly into applications. Instead of building a perimeter around your network, you bring zero-trust into the app itself.

Connections are mutually authenticated, encrypted, and policy-controlled. No open inbound ports. No VPNs. No public exposure.

For those unfamiliar, NetFoundry provides a cloud-managed service built on OpenZiti — adding:

  • Hosted, dedicated, private overlays
  • Automated provisioning and lifecycle management
  • Deep telemetry and observability
  • Compliance options (FIPS, HIPAA, NIST, PCI, NERC CIP)
  • Hybrid/air-gapped deployment flexibility
  • Enterprise performance, integrations, features, SLAs, and support

And now, a way to extend all of that directly into the browser.

Introducing the OpenZiti Browser SDK

As we modernized our NetFoundry cloud-managed service from a multi-page app (MPA) to a single-page app (SPA), we faced a challenge: how to let the SPA securely call protected management APIs without exposing them to the internet.

Running a local tunneler would have worked — but that’s friction for users.

So we built and open-sourced the @openziti/ziti-sdk-browser, an SDK that brings OpenZiti’s zero-trust connectivity directly into web apps.

This SDK:

  • Authenticates with the OpenZiti Controller
  • Negotiates an ephemeral x509 certificate
  • Establishes a mutual TLS (mTLS) connection
  • Routes HTTPS requests securely over the OpenZiti network

All without exposing any backend to the public internet or requiring extra software. 

In essence, your browser becomes a zero-trust endpoint.

How it Works in NetFoundry Cloud

Typically, the OpenZiti Edge API is Internet-facing, while the OpenZiti Management API is protected. Since OpenZiti is purpose-built for secure connectivity, we secure the management API within NetFoundry Cloud by using OpenZiti itself, making the management API invisible to the open internet.

The OpenZiti Controller defines the edge API, and it is intended for use by endpoints/clients to authenticate, discover services, dial (connect), or bind (host) services over the Ziti overlay.

The management API is also exposed by the OpenZiti Controller, but is used to manage the network – configuring identities, services, policies, etc. It is intended for administrative control-plane tasks (creating, updating, and deleting entities) rather than actually connecting services.

| | Management API | Edge API |
| --- | --- | --- |
| Purpose | Configuration/administration of the Ziti overlay (identities, services, roles/policies) | Runtime connectivity: manage an endpoint’s participation (auth, discovery, connect/bind) |
| Typical users | Human admins and automated orchestration tooling | Endpoint apps, SDKs embedded in apps, and client devices |
| Permissions/usage | Needs elevated privileges (create identity, service, policy) | Less privileged relative to the overlay (once identity is enrolled) |

As you can see, the management API is quite powerful. If it were to become compromised, bad actors could do catastrophic damage to an OpenZiti network.

API security is a critical part of modern web, mobile, and cloud architecture. Some industry-standard mechanisms used to secure APIs today include:

  • API Keys
  • OAuth2/OIDC
  • Mutual TLS (mTLS)
  • JWTs
  • WAFs
  • IP Whitelists/CIDR restriction
  • Security Headers in HTTP requests
  • And of course, Zero Trust

The NetFoundry Cloud’s management API does not suffer from the typical API attack vector exposures. In addition to the robust authentication-security that OpenZiti employs, an additional and very effective way to eliminate API attack vectors is to make the API invisible to the open internet. 

Let’s explore what that means and how NetFoundry Cloud achieves “invisibility” for the management API, thus boosting the defensive posture of all NetFoundry networks.

Achieving Invisibility

OpenZiti supports the innovative approach of embedding secure zero-trust connectivity directly into applications (even browser-based web apps).

The NetFoundry Cloud SPA now embeds and integrates with our newly open-sourced OpenZiti SDK for browser-based web apps (@openziti/ziti-sdk-browser).

Again, within the NetFoundry Cloud SPA, the ziti-sdk-browser takes care of all operations related to: 

  • authenticating with the OpenZiti Controller
  • doing the necessary negotiations with the OpenZiti Controller to acquire an ephemeral x509 certificate
  • using this certificate to make the required mTLS connection to the network
  • mapping any HTTPS Requests initiated by the NetFoundry Cloud console that target the management API onto the OpenZiti network and then
  • routing them to the nf-mgmt-service described below.

Admin users simply open a Chrome Tab on any computer, log in to the NetFoundry Cloud console, and they’re ready to go!

Before the availability of our new browser SDK, the most common way to facilitate client access to remote OpenZiti Services was to: 

  • Create an Identity, then 
  • Install an OpenZiti Tunneler on the client machine, then 
  • Enroll the Identity within the tunneler, then 
  • Configure the various role attributes for the Identity, then 
  • Connect the tunneler to the network, then
  • Finally, open a Chrome browser tab and go to the NetFoundry Cloud web console, and have access to any UI gestures that leverage the management API 

As you can see, by leveraging the ziti-sdk-browser, NetFoundry Cloud now has a much more streamlined approach.

Representing the Management API in a NetFoundry Cloud Network

Let’s begin by discussing how the management API is manifested in the networks provisioned by NetFoundry Cloud (i.e., this does not happen by default in a self-hosted OpenZiti network).

When NetFoundry Cloud provisions your network, it automatically performs the tasks necessary to represent the management API as an OpenZiti Service.

NetFoundry Cloud names the Service the nf-mgmt-service, as shown here:

Like all Services, steps are taken during network cloud-provisioning to ensure the management API does not listen on any open inbound ports on the underlying IP network. 

Furthermore, NetFoundry Cloud also ensures that client access to the nf-mgmt-service is granted only after strict authentication and authorization is achieved – based on strong, cryptographically verifiable Identities.

Micro-segmentation

Another fundamental feature delivered by the NetFoundry Cloud is micro-segmentation.

Having a valid Identity on a NetFoundry network (i.e., completion of successful authentication) does not necessarily mean access to the nf-mgmt-service is possible for the Identity.

Access to the nf-mgmt-service is controlled by Roles and Policies assigned to the nf-mgmt-service, and which are mapped onto any Identities that require access due to their administrative needs (i.e., authorization is necessary).

If an Identity is justified in needing access to the nf-mgmt-service, that Identity will be granted Dial (connect) access by virtue of being assigned the nf-mgmt-access Attribute, as shown here:

Above, you can see that the Identity used in this example is “managed”. A “managed” Identity is one that NetFoundry Cloud automatically provisioned on behalf of the privileged admin user (me in this case) as derived from OIDC-based credentials, which are used to log in to the NetFoundry Cloud console.

Developer Experience

The ziti-sdk-browser is designed to simplify routing HTTP Requests originating from the web app onto a protected OpenZiti Service. 

The SDK exposes a fetch API that mimics the native browser fetch API, so the structure and flow should feel familiar.

For example, once the target resource (API Server) has been properly configured as an OpenZiti Service (similar to what was described in the previous section), HTTP Requests can be easily routed to the Service as shown in the following TypeScript code.

Here’s what it looks like in practice:

Instantiate a zitiBrowserClient (one-time operation during app initialization):

Make a fetch request (this code shows how to map Angular HttpRequests onto OpenZiti):

The same API URLs and code patterns continue to work — the SDK simply intercepts and routes them through the OpenZiti network.
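Since the original post renders these snippets as screenshots, here is a hypothetical TypeScript sketch of the pattern. The class and method names (`ZitiBrowserClient`, `intercept`, `resolveService`) and the transport shape are illustrative assumptions, not the actual ziti-sdk-browser surface:

```typescript
// Hypothetical sketch -- names and shapes are assumptions, not the real SDK API.
type Transport = (serviceName: string, url: string, init?: object) => Promise<unknown>;

class ZitiBrowserClient {
  // Intercepted hostname -> OpenZiti Service name.
  private services = new Map<string, string>();

  constructor(private dial: Transport) {}

  // One-time operation during app initialization.
  intercept(hostname: string, serviceName: string): void {
    this.services.set(hostname, serviceName);
  }

  // Returns the Service a URL maps onto, or null if it should use native fetch.
  resolveService(url: string): string | null {
    return this.services.get(new URL(url).hostname) ?? null;
  }

  // Mimics the native fetch API: intercepted hosts are dialed over the
  // overlay; everything else falls through untouched.
  async fetch(url: string, init?: object): Promise<unknown> {
    const service = this.resolveService(url);
    if (service !== null) {
      // In the real SDK this dial carries the request to the OpenZiti
      // Service over nested TLS, with no listening port on the underlay.
      return this.dial(service, url, init);
    }
    return (globalThis as any).fetch(url, init);
  }
}
```

App code then calls `client.fetch` exactly where it would have called the native `fetch`, which is why existing URLs and request patterns keep working.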

Why This Matters

By embedding the OpenZiti Browser SDK into our SPA, NetFoundry Cloud now offers:

  • True zero exposure: no public-facing APIs
  • Frictionless user experience: no tunneler installation
  • Granular access control: micro-segmented, role-based authorization
  • Strong cryptographic identity: every connection is authenticated
  • Drop-in simplicity: works like native fetch()

Who This Is For

Use the OpenZiti Browser SDK if you:

  • Build SPAs or web consoles that call backend APIs
  • Need those APIs to remain invisible to the internet
  • Want zero-trust connectivity without installing tunnelers
  • Already use or plan to use OpenZiti or NetFoundry Cloud

If that’s you, your browser can now be a first-class, zero-trust endpoint.

Conclusion (Come Talk to Us)

Zero trust doesn’t have to live in a gateway or VPN client. It can live directly inside your application — even your browser.

While a deeper discussion of the internals of the ziti-sdk-browser (e.g., how we use WASM for PKI and nested TLS operations) is beyond the scope of this article, we hope this high-level discussion demonstrates how NetFoundry’s state-of-the-art technology is constantly improving ways to make your networks not only more secure but also more convenient to use.

If this sounds interesting to you, reach out and talk to us about getting access to NetFoundry Cloud.

The post How to Secure SPA API Calls Without Exposing Your Backend appeared first on NetFoundry.

]]>
Private MCP Servers: The Missing Link in Secure Agentic AI Stacks https://netfoundry.io/ai/private-mcp-servers-the-missing-link-in-secure-agentic-ai-stacks/ Tue, 07 Oct 2025 16:43:32 +0000 https://netfoundry.io/?p=44842 Secure model context protocol (MCP) connectivity

The post Private MCP Servers: The Missing Link in Secure Agentic AI Stacks appeared first on NetFoundry.

]]>

The true potential of AI agents lies in their ability to interact with the outside world through specialized tools and applications. Whether it’s querying a database, accessing a file system, or calling an API, tools enable agents to take action. The challenge? Exposing these powerful tools often means opening up ports on public endpoints, which can be a significant security risk.

What if you could give your agent access to powerful, private tools without ever exposing them to the public internet? In this post, we’ll walk you through a demonstration of how to create a secure, “dark” connection between an AI agent and its tool server using NetFoundry’s solutions for AI.

Understanding the Stack

Before diving in, let’s take a look at the components we’re working with. The architecture is straightforward and consists of a few key parts:

  • Agent: This is the core software that orchestrates the process. It takes a user’s prompt, uses a model to understand the context and form a plan, and calls the necessary tools to execute that plan. For this demo, we’re using the highly configurable OpenCode. Other popular agents include Claude Desktop and Cursor.
  • Model: The Large Language Model (LLM) that provides the reasoning capabilities. The agent sends the context and available tool information to the model, which then determines how to invoke the tool to fulfill the user’s request. We’ll use a Gemini model for this example.
  • Model Context Protocol Server (MCP): This is our custom MCP server, published as a NetFoundry service, accessible via a private domain name reserved exclusively for authorized NetFoundry endpoints. As is typical for MCP servers, it’s a backend service that exposes specific functions the agent will, in turn, offer to the model. We used a simple Python server that calls the GitHub status API when invoked by the model as a “tool” via MCP. The GitHub status API is the “backend” in this case.
  • NetFoundry: This is the secret sauce. It’s a secure networking platform that creates a private, zero-trust overlay network. Instead of the agent calling a public IP address, it calls a secure endpoint on the NetFoundry network, ensuring that the MCP server itself is reachable only by authorized clients, such as our agent host in this example.
    • The MCP server utilizes the OpenZiti Python SDK from NetFoundry to listen on the NetFoundry network, rather than an open TCP port.
    • The agent calls the MCP server via a private domain name resolved by a tunneler running on the same host.

Going Dark: Securing the Connection

The key to this setup is that the MCP server doesn’t need to be public. It can be running on a private machine with no open inbound ports on its firewall.

In the NetFoundry console, we’ve configured a service for our MCP server. The most interesting part is the intercept configuration. We’ve assigned it a private, fictitious domain name that appears in the full MCP server URL: http://tool.mcp.nf.internal:8000/sse. Because this uses .internal, a top-level domain (TLD) reserved by ICANN, it is guaranteed not to conflict with any public domain. This means only clients with the proper credentials on the NetFoundry network can resolve and connect to this address.

Our MCP server is totally dark from the regular internet. It isn’t listening on an open port; it’s listening only on the NetFoundry overlay network.

The Implementation Details

Let’s look at how this is configured in the code.

First, the agent’s configuration file points directly to our private MCP server’s URL. This is all the agent needs to know to find the tool server on the secure overlay because the NetFoundry tunneler is running on the same host to configure the OS resolver. For this focused example, we’ll rely entirely on the NetFoundry service security, which wraps MCP with mTLS and ensures the server socket is unreachable by unauthorized parties.

{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "simple-mcp-tool": {
      "type": "remote",
      "url": "http://tool.mcp.nf.internal:8000/sse",
      "enabled": true
    }
  }
}

Next, on the server side, we’ve taken a sample tool server built with the Anthropic MCP SDK and made a crucial modification. Using the OpenZiti Python SDK from NetFoundry, we’ve instructed the application to bind to the NetFoundry overlay network instead of a standard network socket. This simple change takes the server off the public internet and places it securely on our private overlay, eliminating the need for a separate VPN, reverse proxy, or port forwarding.

This approach keeps our stack simple, agile, and resilient by controlling access to the outer ring of the application at the transport layer of the network. The MCP server could add another, inner ring of access control, such as OAuth2 over HTTPS.
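That server-side change can be sketched as follows. The identity file path and service name are placeholders, and the `openziti.monkeypatch` binding shape follows the OpenZiti Python SDK samples; treat the exact names as assumptions rather than canonical API:

```python
# Hypothetical sketch: identity path and service name are placeholders.
def ziti_bindings(identity_file: str, service: str,
                  host: str = "0.0.0.0", port: int = 8000) -> dict:
    """Map the (host, port) the app *thinks* it binds to onto a Ziti service,
    so no TCP port is actually opened on the underlying network."""
    return {(host, port): {"ztx": identity_file, "service": service}}

def serve_dark(run_app, identity_file: str, service: str) -> None:
    """Run the MCP server bound to the NetFoundry overlay instead of a socket."""
    import openziti  # OpenZiti Python SDK from NetFoundry

    # While the patch is active, the app's ordinary bind on 0.0.0.0:8000 is
    # redirected onto the overlay network.
    with openziti.monkeypatch(bindings=ziti_bindings(identity_file, service)):
        run_app()  # e.g. the MCP server's normal SSE run loop
```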

Let’s See It in Action 🚀

Now for the fun part. In our terminal, the Python MCP server is running. It reports that it’s listening, but a quick check would show it has no open ports on the machine’s network interface. In fact, the server socket is reachable only on the NetFoundry overlay network.

First, we can test the connection directly, confirming that our client can reach the private domain http://tool.mcp.nf.internal:8000/sse. The request is received, so we know the overlay is working.

Next, we run our agent and ask it to list its available tools. The agent communicates with the MCP server (over the NetFoundry service) and discovers our custom tool: check_github_status.

Now, we give the agent a natural language prompt: “Is GitHub okay?”

Here’s what happens behind the scenes:

  1. The agent sends the prompt and the tool’s description to the LLM.
  2. The model, trained to understand these instructions, correctly composes a valid JSON request to call our check_github_status tool.
  3. The agent passes this request to the MCP server over the secure NetFoundry connection.
  4. Our MCP server receives the request, calls the public GitHub status API, and gets the current status.
  5. It sends the result back to the agent.
  6. The agent passes the result to the model, which helpfully summarizes it in a human-readable format: “All systems are normal on GitHub.”
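The tool at the center of steps 2 through 5 can be sketched like this. The Statuspage-style response shape and the injectable `http_get` parameter are assumptions for illustration, not the demo's exact code:

```python
import json

# Public Statuspage endpoint behind the check_github_status tool
# (response shape assumed from the Statuspage v2 API).
STATUS_URL = "https://www.githubstatus.com/api/v2/status.json"

def check_github_status(http_get=None) -> str:
    """Body of the MCP tool: fetch GitHub's status and return a short summary.

    `http_get` is injectable so the tool can be exercised without a network;
    by default it performs a real HTTPS request.
    """
    if http_get is None:
        from urllib.request import urlopen

        def http_get(url: str) -> str:
            with urlopen(url) as resp:
                return resp.read().decode()

    payload = json.loads(http_get(STATUS_URL))
    # Statuspage carries a human-readable summary, e.g. "All Systems Operational".
    return payload["status"]["description"]
```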

Why This Matters

What we’ve demonstrated is a powerful pattern for building secure and robust AI systems. The agent communicated with a public model but called a completely private tool server over a secure NetFoundry service. That server, in turn, interacted with a public backend API (GitHub).

This architecture enables you to connect your AI agents to sensitive internal resources—such as databases, file systems, or proprietary APIs—without ever exposing them to the public internet. It’s a more straightforward, more secure, and more agile way to build the next generation of AI-powered applications.

Try it out for yourself:

The post Private MCP Servers: The Missing Link in Secure Agentic AI Stacks appeared first on NetFoundry.

]]>
NetFoundry and Siemens partner to simplify zero trust for industrial networking https://netfoundry.io/ot/ot-connectivity/ Mon, 06 Oct 2025 00:18:56 +0000 https://netfoundry.io/?p=44771 Simple, secure OT connectivity…without additional installs This may sound like magic, but it is true.  Simple, secure OT connectivity, without installing additional software or hardware. Secure industrial networking, without the hassle. Want to see it to believe it? Add a couple days to your Oktoberfest to visit NetFoundry at the Siemens booth at it-sa Expo […]

The post NetFoundry and Siemens partner to simplify zero trust for industrial networking appeared first on NetFoundry.

]]>

Simple, secure OT connectivity…without additional installs

This may sound like magic, but it is true.  Simple, secure OT connectivity, without installing additional software or hardware. Secure industrial networking, without the hassle.

Want to see it to believe it? Add a couple days to your Oktoberfest to visit NetFoundry at the Siemens booth at it-sa Expo and Congress, Europe’s largest trade fair for IT security, 7-9 October in Nuremberg. The Siemens booth is 421 in Hall 7.

Not in Germany? Contact NetFoundry for a virtual demo and leave the demo with a party gift – your own zero trust native network, ready for your use in minutes, as a free trial.

What does this industrial networking solution provide?

  • Industrial network discovery, visibility and policy creation
  • Secure remote access to shop floor devices, including just in time (JIT) access, one-time access and agentless access
  • Simple, secure connectivity between OT, IT, edge and cloud
  • Identity microsegmented M2M networking and implementation of zones and conduits, while meeting IEC 62443 and NIS2 guidelines
  • Encryption and OT cell to cell workload segmentation
  • Centralized management, telemetry, identity-based audit logs and reporting

Ok, then what is missing?

  • No dependencies on IP addresses or NAT
  • No open inbound firewall ports in OT firewalls – ever
  • No pinholes through the firewall – ever
  • No dependencies on vendors to bring their own firewalls or VPNs


Less is more when the goal is to both simplify OT operations and strengthen industrial networking security. Replace complexity with identity-secured, attribute-based connectivity. It is simple to implement and simplifies operations – unlike bolted-on, dead-on-arrival, day-two ‘zero trust’ approaches.

That can’t be true!

It is true. But, there is a catch.

“No software or hardware install” applies for Siemens OT environments. This is because Siemens SCALANCE and Siemens SINEC Secure Connect now include NetFoundry’s zero trust networking software.

Great news for much of the world since Siemens is one of the world’s top industrial automation companies.

Are you out in the cold if you don’t use Siemens?

Siemens makes industrial networking super simple since the NetFoundry software is already included.

However, NetFoundry makes OT connectivity and industrial networking easy for anyone. Choose the approach which works for your needs:

  • Agentless solutions for third-party remote access, which still provide strong identity and authentication
  • Choice of one-time, just in time (JIT) and continually authenticated access models, including zero trust access
  • Solutions for OT-IT convergence, edge compute and machine to cloud which run on existing infrastructure
  • M2M and cell to cell connectivity, including segmentation between industrial network cells and zones


NetFoundry solutions are deployable as software, including even air-gapped sites, as well as on-prem, hybrid, distributed and cloud.

Who can use this NetFoundry industrial networking solution?

Probably you! NetFoundry securely delivers billions of sessions per year, including for critical infrastructure on three continents.  NetFoundry provides both products and a platform:

How do I deploy NetFoundry?

NetFoundry is deployed in three main ways:

  1. Pre-integrated. In cases like Siemens, NetFoundry software is already on the OT device, PLC, cell edge compute or firewall. To extend that connection to other zones, IT, edge, vendors or cloud, NetFoundry provides agentless and software-based solutions.
  2. On-prem. NetFoundry is deployed in on-premises models, including support for air-gapped sites or sites which do not want to depend on external connectivity, such as many manufacturing and energy sites. There are agentless and software-only solutions for this option also – either with existing infrastructure, or via standalone containers or virtual machines, depending on operational preference.
  3. Hybrid and cloud. NetFoundry provides dedicated, zero trust overlays, spanning over 100 data centers, with optimized performance, enterprise SLAs and 24×7 support. This is ideal for secure remote access, vendor connections, B2B connections and cloud connections because you don’t need to support new sites – you extend via NetFoundry managed routers, dedicated to your network.


The third option is the ‘cloud model’ for secure networking – like getting a private VPC or VNet without managing the underlying infrastructure, you get a private zero trust network, without managing the underlying infrastructure. However, unlike SASE clouds or CDNs, each network is dedicated and end-to-end encrypted to ensure that intermediate nodes and network operators have no access to the data.

How do I get started with this solution for OT connectivity and industrial networking?

Visit NetFoundry at the Siemens booth at it-sa Expo and Congress, Europe’s largest trade fair for IT security, 7-9 October in Nuremberg. The Siemens booth is 421 in Hall 7.

Contact NetFoundry for a virtual demo and leave the demo with a party gift – your own zero trust native network, ready for your use in minutes, as a free trial.

The post NetFoundry and Siemens partner to simplify zero trust for industrial networking appeared first on NetFoundry.

]]>
Lessons from DEF CON 33: Why Zero Trust Overlays Must Be Built In, Not Bolted On https://netfoundry.io/zero-trust/lessons-from-def-con-33-why-zero-trust-overlays-must-be-built-in-not-bolted-on/ Fri, 15 Aug 2025 00:37:01 +0000 https://netfoundry.io/?p=44227 At DEF CON 33 (Las Vegas, August 7-10, 2025), AmberWolf researchers disclosed critical vulnerabilities in major ZTNA (Zero Trust Network Access) products such as Zscaler, Netskope, and Check Point’s Perimeter 81. Highlights of the issues: These flaws stem not from cryptographic weaknesses but from poor secret management, shared credentials, and exposed diagnostic services. They enable impersonation and full-service access through misuse […]

The post Lessons from DEF CON 33: Why Zero Trust Overlays Must Be Built In, Not Bolted On appeared first on NetFoundry.

]]>
At DEF CON 33 (Las Vegas, August 7-10, 2025), AmberWolf researchers disclosed critical vulnerabilities in major ZTNA (Zero Trust Network Access) products such as Zscaler, Netskope, and Check Point’s Perimeter 81. Highlights of the issues:

  • Zscaler: A SAML authentication bypass (CVE-2025-54982) where SAML assertions were not properly signature-validated.
    Layman’s analogy: Like accepting a signed contract without checking if the signature is real.
  • Netskope: An authentication bypass in IdP enrollment (CVE-2024-7401), cross-tenant user impersonation via a non-revocable OrgKey, and privilege escalation through a rogue server. Many organizations remained exposed 16 months after disclosure.
    Layman’s analogy: Like giving someone a master key to an apartment building, never being able to take it back, and leaving the back door propped open for over a year.
  • Check Point Perimeter 81: Hard-coded SFTP credentials that exposed multi-tenant logs, including JWT material that could be reused for authentication.
    Layman’s analogy: Like hiding the spare key under the doormat of an office building, along with a list of employee badges, so anyone who finds it can walk in and pretend to be any employee.

These flaws stem not from cryptographic weaknesses but from poor secret management, shared credentials, and exposed diagnostic services. They enable impersonation and full-service access through misuse of JWTs, but not by breaking crypto.
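To make the “signed contract without checking the signature” analogy concrete, here is a minimal, hypothetical sketch using only the Python standard library (it is not taken from any of the affected products) contrasting accepting a token as-is with verifying its signature first:

```python
import hashlib
import hmac

def sign(payload: bytes, key: bytes) -> bytes:
    """Produce an HMAC-SHA256 signature over the payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def accept_unverified(payload: bytes, signature: bytes) -> bytes:
    # The flawed pattern: the signature is carried along but never checked,
    # so a forged assertion is accepted just like a genuine one.
    return payload

def accept_verified(payload: bytes, signature: bytes, key: bytes) -> bytes:
    # The correct pattern: reject anything whose signature does not match.
    if not hmac.compare_digest(sign(payload, key), signature):
        raise PermissionError("bad signature")
    return payload
```

The same principle applies whether the signed artifact is a SAML assertion or a JWT: validation has to happen before any trust is extended.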

The root cause was inadequate zero-trust implementation. These systems placed excessive reliance on external IdPs, using them in ways they were not designed for or making them the sole gatekeeper of trust. In many cases, authentication was added after connectivity was established, contradicting the zero-trust principle of “authenticate before connect.” This approach leaves gaps in emerging use cases such as multi-cloud, edge, IoT, and OT, where continuous, pre-connection trust enforcement is critical.

Built-In Zero Trust vs. Bolt-On Identity

Many ZTNA solutions treat zero trust as a feature added onto an existing network, leaning heavily on external identity providers for access decisions. This “bolt-on” approach often:

  • Makes trust decisions after a connection is established, not before.
  • Relies on shared static keys or tokens between tenants.
  • Exposes public service endpoints that can be scanned and attacked.

In contrast, a zero trust overlay built around strong, intrinsic identity enforces security from the first packet, and goes beyond user or device authentication to secure every service and every hop in the connection. Platforms such as NetFoundry embed zero trust principles directly into the network fabric:

  • Per-service X.509 certificates: each service has its own cryptographic identity, ensuring that compromise of one service does not affect others
  • Different keys for every mTLS hop: traffic is re-encrypted at each overlay connection, eliminating replay attacks and limiting exposure even if one hop is compromised
  • End-to-end encryption at the service layer (E2EE): data remains encrypted from source to destination, with no point in the overlay able to decrypt it unless explicitly authorized
  • No shared static keys: every identity is unique, preventing tenant-to-tenant pivoting
  • No public service endpoints: services are invisible to the internet, removing entire categories of attack surface
  • Integrated policies and segmentation: enforced inside the overlay without relying on external redirects or loosely coupled IdP logic

NetFoundry also supports integrating standards-based identity providers through OIDC (OpenID Connect) and SCIM (System for Cross-domain Identity Management) for automated user and group provisioning. These standards can be used as a replacement primary authentication method or as additional secondary authentication, much like BYOPKI.

This flexibility lets organisations leverage existing SSO workflows and automate identity lifecycle management without weakening the overlay’s core security model. Even when OIDC and SCIM are in play, NetFoundry continues to enforce per-service X.509 certificates, unique mTLS keys per hop, and end-to-end service encryption. The overlay remains “closed-by-default,” with identity-before-connect enforced independently of the IdP’s availability or trust chain.

Beyond Remote Access: Consistent Zero Trust Everywhere

Because NetFoundry’s overlay enforces identity using X.509/PKI at the fabric level, it can be applied to any connectivity use case, and not just remote user access. Whether securing multi-cloud workloads, edge applications, IoT deployments, or operational technology (OT) environments, the same user-, device-, and service-aware policies are applied to all traffic.

This contrasts sharply with tunnel-level ZTNA, which typically limits identity enforcement to remote access scenarios or applies it inconsistently outside the client-initiated path. The difference becomes especially critical in non-human-initiated (NHI) cases, such as machine-to-machine communications in OT or cloud-native multi-cloud. This is where traditional ZTNA often fails to authenticate and authorise every connection consistently.

With NetFoundry, every connection, in every direction, is authenticated and authorised before it exists, whether initiated by a person, a workload, or a machine.

Why It Matters to Security Leaders

  • For CISOs and CIOs: Built-in zero trust with per-service cryptographic identity, hop-by-hop mTLS, and end-to-end service encryption reduces breach risk from stolen tokens, static keys, or compromised IdPs.
  • For Network Architects and Security Engineers: Identity-based segmentation is enforced by the overlay, independent of your IdP, while still integrating cleanly with OIDC and SCIM for authentication and provisioning.
  • For Compliance and Governance Teams: Support for open standards (OIDC, SCIM, PKI) and closed-by-default design makes it easier to meet NIST Zero Trust Architecture and CISA Zero Trust Maturity Model requirements, while maintaining operational agility.
  • For OT and IoT Security Teams: Consistent identity enforcement across remote access, multi-cloud, edge, and machine-to-machine traffic, including non-human-initiated connections in OT, ensures the same zero trust policies apply everywhere, not just in client-initiated scenarios.

Key Takeaways

  • Bolt-on zero trust can be bypassed: built-in identity, per-service certificates, and enforced policy cannot.
  • Static, shared keys create multi-tenant blast radii: unique keys for each service and every mTLS hop eliminate this risk.
  • Public endpoints invite attacks: closed-by-default overlays and hidden services remove the target entirely.
  • External IdPs can fail or be compromised: optional OIDC and SCIM integration adds convenience without creating dependency.
  • Zero trust is an architecture, not a checkbox: it must be enforced before connection, with no exceptions, and secured end-to-end at the service layer.

The bottom line: The DEF CON 33 disclosures highlight the risks of retrofitting zero trust into architectures that were not designed for it. Established vendors often extend existing products to address emerging requirements, which can lead to a bolt-on effect that preserves legacy design choices. In contrast, newer and more focused providers have the advantage of building from the ground up, embedding per-service cryptographic identity, hop-by-hop mTLS, and end-to-end service encryption directly into the network fabric. With NetFoundry, IdP integration is optional rather than mandatory, and OIDC and SCIM support can be added without weakening the closed-by-default, authenticate-before-connect architecture. Because identity is enforced at the fabric level, zero trust policies are applied consistently across all use cases, including remote access, multi-cloud, edge, IoT, and machine-to-machine traffic in OT environments. As demands evolve, incumbents may need to re-engineer their platforms, while solutions built on a zero trust foundation from the start are already aligned with those future needs.

Ready to see built-in zero trust in action?

Experience how NetFoundry enforces identity-before-connect, across every connection and every use case, without the weaknesses of bolt-on ZTNA. Start your free trial or book a live demo with our team today.

About NetFoundry

Thousands of businesses, including 2 of the largest 5 in the world, use NetFoundry to securely connect any workflow, via NetFoundry NaaS, on-premises and partner models, replacing anything from VPNs to SD-WANs. NetFoundry’s overlays are the first to be driven by built-in, cryptographically authenticated identities for humans and non-humans (NHI for devices, AIs, OT). Providers use NetFoundry to embed zero trust in their products in an OEM model. NetFoundry is the inventor and maintainer of the world’s most used open source zero trust platform, OpenZiti. Start a free trial, book a live demo or learn more

The post Lessons from DEF CON 33: Why Zero Trust Overlays Must Be Built In, Not Bolted On appeared first on NetFoundry.

]]>
Comparing NetFoundry and OpenZiti https://netfoundry.io/ziti-openziti/comparing-netfoundry-and-openziti/ Thu, 14 Aug 2025 13:58:31 +0000 https://netfoundry.io/?p=44200 Because both NetFoundry products and OpenZiti software have skyrocketed in popularity (NetFoundry now delivers billions of sessions per month), we are often asked for a short summary comparison. So, a blog post.  Although NetFoundry has options ranging from air-gapped to multicloud native NaaS versions, this post focuses on the NaaS, since it is the most […]

The post Comparing NetFoundry and OpenZiti appeared first on NetFoundry.

]]>
Because both NetFoundry products and OpenZiti software have skyrocketed in popularity (NetFoundry now delivers billions of sessions per month), we are often asked for a short summary comparison. So, a blog post. 

Although NetFoundry has options ranging from air-gapped to multicloud native NaaS versions, this post focuses on the NaaS, since it is the most asked about. Likewise, NetFoundry products include white-label, resell and OEM options, but we’ll also keep those separate.

TL;DR — NetFoundry invented, developed, open sourced and continues to lead and maintain OpenZiti software. NetFoundry sells the world’s leading zero trust native networking products, which include:

  • OpenZiti software
  • NetFoundry patented software
  • NetFoundry-managed networks (supporting billions of sessions per month)
  • Enterprise support (24×7) and SLAs (up to 99.95%)
  • Air-gapped and on-prem solutions (including FIPS-compliant) 
  • Enterprise and OEM functionality (including white-label options)
  • Certifications and compliance
  • Third-party integrations

OpenZiti is software – it is not a product, but it is great for home use, non-production use cases and learning about some of NetFoundry’s capabilities.  NetFoundry provides products.

NetFoundry NaaS products

NetFoundry zero trust native overlays can be deployed as fully managed NaaS, hybrid, on-premises or air-gapped.  NetFoundry products are used by two of the largest five companies in the world, governments and critical infrastructure on three continents.  

NetFoundry also provides OEM and white label products for providers to embed zero trust into their products.  However, this post is about NetFoundry NaaS and focuses mainly on unique NetFoundry product capabilities (but all the other OpenZiti software capabilities are included in NetFoundry products).

Why NetFoundry NaaS (in plain terms)

Outcomes & assurances

  • Proven at billions of sessions/month with private, customer-dedicated overlays (not shared between customers).
  • 24×7 support and SLA up to 99.95%.
  • Global reach with operations across 100+ PoPs for consistent performance.
  • Deploy anywhere: extend overlays via agentless and endpoint options; host endpoints and site gateways with one-click deploy/suspend on AWS, Azure, GCP, OCI.

Privacy, sovereignty & compliance

  • End-to-end encryption with key sovereignty at your endpoints—NetFoundry cannot access customer data.
  • Dedicated instances by default; customers may make them multi-tenant, but overlays are never shared across customers.
  • SOC 2 Type II. Guidance and options for NIST 800-171, FFIEC/COBIT, FIPS, CJIS, HIPAA, PCI DSS, FedRAMP/GovCloud, NIS2, IEC 62443, NERC CIP, EU CRA, DORA.
  • Crypto choices: high-performance libsodium (default) with the ability to toggle FIPS-compliant and post-quantum encryption modes.

Operate from day one

  • Powerful console & abstractions: service-centric policies and automation reduce toil—no DIY dashboards or glue code. Built-in RBAC and full APIs so you keep control.
  • Instant org & tenant bootstrap: we auto-create your Organization, Network Group, and first Network at signup—no YAML, scripts, or CLI loops—so teams can onboard the same day.
  • Multi-tenant (your way): every customer gets a private, dedicated instance; within it you can structure your own tenants for lines of business, customers, or environments.
  • White-label & vanity domains: theme the console, BYO DNS/TLS, present a branded experience.
  • Billing & usage metering: built-in dashboards for chargeback/showback—see consumption by app, team, or tenant.
  • Rich telemetry & audit logs: including which people/devices consumed which services and applications (with usage insights).

For Dev & DevOps: What “Managed” Actually Gets You 

Reliability, performance & scale (how it’s delivered)

  • HA by design across control and data planes, with ingress/egress load balancing and auto-scaling.
  • NetFoundry-operated control/data planes across 100+ PoPs; endpoints/routers dynamically optimize paths to use the best connections.
  • Hardened, tested, certified OpenZiti endpoints for any OS, supplemented with eBPF and endpoint wrappers.

Management, identity, authn/authz

  • All-batteries-included: identity, authentication, authorization; turnkey IdP (Auth0, OIDC) and CA integrations with centralized control.
  • Automated provisioning via SCIM; universal identities for humans and NHIs; integrations with Keycloak, CyberArk, and support for SPIFFE.
  • API-first + RBAC: fine-grained token scopes, service accounts, user/role admin, audit trails.
  • Multi-tenant, your way: private instance for your org; model tenants inside it (by LOB/customer/env) with hierarchical RBAC and policy boundaries.

Platform services you don’t have to build

  • Dedicated, managed PKI: automated per-network hierarchies with optional external CA chaining.
  • Lifecycle ops as buttons: start/stop, suspend/resume, rolling upgrades, backups with safe defaults and rollbacks; updates & upgrades are automated with customer-controlled windows.
  • Instant org & tenant bootstrap: on signup we create your Organization → Network Group → first Network—no YAML/scripts/CLI loops.

Reach & access workflows

  • Choose agentless (zero trust proxy / reverse proxy) or endpoint-based connectivity.
  • Host endpoints and site gateways (edge routers) with one-click lifecycle in AWS/Azure/GCP/OCI.
  • Access models: JIT, one-time, time-bound, persistent, integrated with leading ticket/workflow systems.

Observability, compliance modes & APIs

  • Dashboards, traces, alerts, audit logs, and usage out of the box.
  • Crypto modes on demand: toggle FIPS-compliant and post-quantum options as needed.
  • API-first + RBAC for CI/CD and ops automation.

Start in Minutes

Whether you’re connecting SaaS to private data, securing AI/agent traffic, or segmenting OT networks, NetFoundry gets you there faster and more securely.

Start a free trial or book a live demo and ship your next secure service without the network complexity.

Notes

  • This post focuses on global, NetFoundry-hosted and managed NaaS products, but NetFoundry also provides hybrid, on-premises and air-gapped products. 
  • This includes providing a management solution which is based on NetFoundry’s global NaaS management suite (which manages billions of sessions per month) and customized for on-premises use cases.  
  • Likewise, NetFoundry enables partners to integrate NetFoundry software into products in partner, OEM and white-label models, creating secure by design products (built-in zero trust connectivity and networking).

About NetFoundry

Thousands of businesses, including 2 of the largest 5 in the world, use NetFoundry to securely connect any workflow, via NetFoundry NaaS, on-premises and partner models, replacing anything from VPNs to SD-WANs. NetFoundry’s overlays are the first to be driven by built-in, cryptographically authenticated identities for humans and non-humans (NHI for devices, AIs, OT). Providers use NetFoundry to embed zero trust in their products in an OEM model. NetFoundry is the inventor and maintainer of the world’s most used open source zero trust platform, OpenZiti. Start a free trial, book a live demo or learn more

The post Comparing NetFoundry and OpenZiti appeared first on NetFoundry.

]]>
AI in Manufacturing https://netfoundry.io/ai/ai-in-manufacturing/ Mon, 28 Jul 2025 17:42:09 +0000 https://netfoundry.io/?p=43919 At-Scale AI with MCP for Manufacturing Model Context Protocol (MCP) is rapidly becoming a foundational fabric for industrial AI and agent-based architectures. Manufacturing environments require ultra-secure, high-performance, and reliable communications between MCP clients and MCP servers—whether they reside in production lines, engineering systems, or cloud-based analytics platforms. This post illustrates how MCP security can be […]

The post AI in Manufacturing appeared first on NetFoundry.

]]>

At-Scale AI with MCP for Manufacturing

Model Context Protocol (MCP) is rapidly becoming a foundational fabric for industrial AI and agent-based architectures. Manufacturing environments require ultra-secure, high-performance, and reliable communications between MCP clients and MCP servers—whether they reside in production lines, engineering systems, or cloud-based analytics platforms.

This post illustrates how MCP security can be enhanced for manufacturing environments using NetFoundry’s zero trust overlay networks. We describe both SDK-based (greenfield) and agent/gateway-based (brownfield) architectures for common manufacturing use cases.

The MCP Dilemma in Manufacturing

MCP enables AI agents and tools to coordinate effectively. However, MCP servers typically must accept inbound connections, which presents a major attack surface, particularly in brownfield OT and hybrid IT/OT environments.

While OAuth helps authenticate and authorize these connections at layer 7, it doesn’t prevent inbound network access – the MCP server receives the request, and then determines if it is authorized, meaning the MCP server is a reachable attack surface. This is a problem in high-stakes industrial environments where uptime, IP protection, and safety are paramount.

NetFoundry addresses this problem by removing the MCP server from the network entirely. MCP servers connect outbound to a NetFoundry overlay, and only authenticated, policy-authorized sessions can reach them. This eliminates the key attack surface.
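To make the inversion concrete: the server registers outbound with the overlay and never listens on the underlay, so unauthorized identities are rejected before any traffic reaches it. The toy model below (plain Python; the `Overlay` class and its `bind`/`dial` methods are hypothetical stand-ins for illustration, not NetFoundry APIs) captures the admission-control flow:

```python
# Toy model of the pattern above: the MCP server registers *outbound*
# with the overlay, and only policy-authorized identities can reach it.
# Everything here is illustrative, not a NetFoundry API.
class Overlay:
    def __init__(self, policy):
        self.policy = policy        # identity -> set of allowed services
        self.services = {}          # service name -> handler

    def bind(self, service, handler):
        # Called by the MCP server: an outbound registration,
        # not a listening socket on any underlay network.
        self.services[service] = handler

    def dial(self, identity, service, request):
        # Admission control happens before traffic reaches the server.
        if service not in self.policy.get(identity, set()):
            raise PermissionError(f"{identity} not authorized for {service}")
        return self.services[service](request)

overlay = Overlay(policy={"diagnostics-agent": {"mcp-tools"}})
overlay.bind("mcp-tools", lambda req: {"echo": req})

print(overlay.dial("diagnostics-agent", "mcp-tools", "list_tools"))  # {'echo': 'list_tools'}
```

An identity outside the policy never reaches the handler at all: `overlay.dial("attacker", "mcp-tools", ...)` raises before the service code runs, which is the network-layer analogue of "the MCP server is unreachable."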

MCP Security: OAuth + NetFoundry

By combining OAuth at the application layer and NetFoundry at the network layer, manufacturers achieve defense-in-depth. An attacker must compromise two independently managed and cryptographically strong systems to access any sensitive interface.

NetFoundry gateways use strong identity, mTLS, and centralized policy to authorize each session, without relying on IP whitelisting, firewall rules, or VPNs.
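As a sketch, defense-in-depth is simply an AND of two independently administered checks: the overlay admits only enrolled identities, and the application honors only valid OAuth scopes. The names below (`ENROLLED_IDENTITIES`, `TOKEN_SCOPES`, `admit`) are illustrative plain Python, not NetFoundry or OAuth library APIs:

```python
# Toy defense-in-depth check: a request must independently pass the
# network-layer gate (overlay identity) and the application-layer gate
# (OAuth token scope). Compromising one layer alone is not enough.
ENROLLED_IDENTITIES = {"plant-gateway-7"}       # overlay layer (network)
TOKEN_SCOPES = {"tok-abc": {"mcp:invoke"}}      # OAuth layer (application)

def admit(overlay_identity, oauth_token, scope):
    network_ok = overlay_identity in ENROLLED_IDENTITIES
    app_ok = scope in TOKEN_SCOPES.get(oauth_token, set())
    return network_ok and app_ok

print(admit("plant-gateway-7", "tok-abc", "mcp:invoke"))      # True
print(admit("plant-gateway-7", "stolen-token", "mcp:invoke")) # False: OAuth breach alone fails
print(admit("rogue-host", "tok-abc", "mcp:invoke"))           # False: valid token, no overlay identity
```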

Deployment Options: NetFoundry SDK or Agent/Gateway

Greenfield (SDK-based): Embed the NetFoundry SDK directly into MCP clients and servers. Ideal for modern applications like LangGraph, Litmus IO, and Slim.AI. There are no inbound listeners; all communications are outbound and identity-authenticated.

Brownfield (Agent/Gateway-based): Use NetFoundry tunnelers or agents on OT/IT systems or edge gateways. No application changes required. Ideal for environments where MCP runs inside legacy devices, control systems, or third-party software.

Manufacturing Use Cases: SDK-based Zero Trust

Predictive Maintenance AI for Industrial Robots

Scenario: An LLM-powered diagnostic agent collects telemetry from robotic arms and predicts failure probabilities.

Solution: Each robot edge node runs a Go-based agent that embeds the NetFoundry SDK. These agents dial a NetFoundry service (e.g. robot-diagnostics-ingest) hosted in the enterprise analytics platform.

Result: All robot data flows are outbound-only, encrypted, and authorized. No ports are open in the robots’ local subnets.
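The edge agent's forwarding loop can be sketched in miniature. The real agent is Go-based and embeds the NetFoundry SDK; in this plain-Python toy, `dial_overlay_service` is a hypothetical stub in place of the SDK dial (which would return an authenticated, outbound-only overlay connection to the `robot-diagnostics-ingest` service named above):

```python
import json

# Toy sketch of the robot edge agent's forwarding loop.
# `dial_overlay_service` is an illustrative stub, not an SDK call.
SENT = []

def dial_overlay_service(service, payload):
    SENT.append((service, payload))   # stand-in for sending over the overlay

def forward_telemetry(robot_id, readings):
    payload = json.dumps({"robot": robot_id, "telemetry": readings})
    dial_overlay_service("robot-diagnostics-ingest", payload)
    return payload

out = forward_telemetry("arm-07", {"joint_temp_c": 61.5, "vibration_mm_s": 2.3})
```

Note what is absent: the agent never opens a listening socket, so there is nothing for a scanner on the robot's subnet to find.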

Zero-Trust MES to ERP Coordination

Scenario: A manufacturing execution system (MES) needs to send production updates to an ERP system in the corporate cloud.

Solution: Both the MES and ERP coordination agent are built in Go and embed the NetFoundry SDK. The MES agent dials a private NetFoundry service defined for the ERP bridge.

Result: Secure, outbound-only, identity-enforced updates with centralized logging. No firewall holes or VPN tunnels required.

AI-Based Quality Inspection at the Edge

Scenario: An AI model classifies defects from high-speed camera images and sends metadata to a central quality system.

Solution: The inspection agent embeds the NetFoundry SDK (e.g., Go-based, Python- based, C-based, Java-based, or .NET-based) and sends metadata to a private NetFoundry service, eliminating the need to expose the edge camera server to the network.

Result: Image processing stays local, metadata flows securely. No surface area for unauthorized network access.

Secure Tooling Feedback Loop for CNC Systems

Scenario: CNC machines report wear data to an LLM agent that recommends tool change intervals.

Solution: A CNC-side agent with the NetFoundry SDK sends data to a backend tool-optimization service, which also uses the SDK.

Result: Zero-trust communication between tooling systems and optimization AI.

Energy Optimization Agent for Factory Microgrids

Scenario: An LLM-based energy advisor queries local meters and suggests dynamic load-shifting plans.

Solution: All queries and responses flow through NetFoundry SDK-secured services between energy sensors and the optimization backend.

Result: Energy data remains protected from lateral movement attacks or unauthorized access.

Manufacturing Use Cases: Agent/Gateway-Based Zero Trust

AI-Driven Maintenance Scheduling Across Facilities

Scenario: Maintenance planners use AI to optimize schedules across factories.

Solution: MCP agents at each plant connect outbound via NetFoundry agents to a centralized AI engine. No inbound connectivity required.

Legacy SCADA to AI Bridge

Scenario: An older SCADA system streams OT data to an AI system for anomaly detection.

Solution: A NetFoundry edge gateway at the SCADA site routes traffic securely to an AI backend. SCADA system remains untouched.

Supplier Quality Collaboration

Scenario: Suppliers send part traceability data to an OEM AI model.

Solution: Each supplier runs a NetFoundry agent that dials a private endpoint exposed only to verified supplier identities.

Secure Digital Twin Synchronization

Scenario: Digital twin systems in plants synchronize with cloud-based simulation models.

Solution: NetFoundry gateway proxies manage secure synchronization traffic without exposing either endpoint.

Connected Worker AI Assistants

Scenario: Technicians use wearable devices to access LLM-powered assistance.

Solution: Wearables route requests to backend AI services over NetFoundry, using agent-based identity enforcement.

Conclusion

Manufacturing environments require robust, non-intrusive, and standards-compliant solutions to secure AI/LLM-based workflows. NetFoundry’s SDK and agent/gateway options enable MCP-based systems to operate without exposing MCP servers to the network, ensuring operational resilience, IP protection, and zero-trust compliance.

The post AI in Manufacturing appeared first on NetFoundry.

]]>
AI in Financial Services https://netfoundry.io/ai/ai-in-financial-services/ Mon, 28 Jul 2025 17:41:54 +0000 https://netfoundry.io/?p=43910 At-Scale AI with MCP in Financial Services: Securing Banking and Fintech Workflowswith NetFoundry Model Context Protocol (MCP) is rapidly becoming essential to orchestrating secure, context-rich, AI-driven workflows in regulated sectors such as financial services. Banks and fintechs are deploying AI agents for everything from real-time risk scoring to dynamic fraud prevention, but these innovations introduce […]

The post AI in Financial Services appeared first on NetFoundry.

]]>

At-Scale AI with MCP in Financial Services: Securing Banking and Fintech Workflows with NetFoundry

Model Context Protocol (MCP) is rapidly becoming essential to orchestrating secure, context-rich, AI-driven workflows in regulated sectors such as financial services. Banks and fintechs are deploying AI agents for everything from real-time risk scoring to dynamic fraud prevention, but these innovations introduce new attack surfaces. MCP agents and servers must communicate across zones with high assurance, and often across partner, cloud, and on-prem boundaries.

In this document, we demonstrate how NetFoundry’s zero trust overlay architecture secures these interactions at the network layer, complementing OAuth 2.1 at the application layer. We provide both SDK-based (greenfield) and agent/gateway-based (brownfield) implementations for real-world financial use cases, while covering key NetFoundry capabilities in identity, policy enforcement, posture checking, observability, and performance.

The Risk: AI + MCP Without Network-Level Security

OAuth 2.1 can secure MCP payloads, but leaves systems exposed to:

  • Inbound port scanning
  • Authorization logic flaws
  • Session replay or credential theft
  • Inadvertent data exposure due to network misconfiguration

MCP servers, even if OAuth-protected, are often exposed to underlay networks for initial handshakes. That surface is unacceptable in zero trust financial environments.

NetFoundry: Defense-in-Depth for MCP Workflows

NetFoundry adds a security layer below MCP and OAuth:

Layer       Responsibility
Layer 7     OAuth 2.1/OIDC for user delegation, session tokens
Layer 4/3   NetFoundry overlays with identity, authZ, posture, routing, telemetry

With NetFoundry:

  • MCP servers do not have any exposed ports
  • All connections are outbound-only
  • Every flow is protected by mTLS, including SPIFFE SVIDs, or JWT-based identities from external providers
  • Access is policy-gated with real-time posture checks, MFA, and JIT session authorization
  • Observability includes per-identity audit logs, performance metrics, and connection traces
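As a sketch, JIT, posture-gated session authorization is a predicate over identity, device posture, and a time-boxed grant. Everything below is illustrative plain Python, not a NetFoundry API:

```python
from datetime import datetime, timedelta, timezone

# Toy model of just-in-time, posture-gated authorization: the session is
# admitted only for the named identity, with a passing posture check,
# inside the grant's time window. Names are illustrative only.
def authorize(identity, posture_ok, grant):
    now = datetime.now(timezone.utc)
    in_window = grant["start"] <= now < grant["start"] + grant["ttl"]
    return identity == grant["identity"] and posture_ok and in_window

grant = {"identity": "fintech-client-42",
         "start": datetime.now(timezone.utc),
         "ttl": timedelta(minutes=15)}   # just-in-time window

print(authorize("fintech-client-42", True, grant))   # True: right identity, healthy device, in window
print(authorize("fintech-client-42", False, grant))  # False: failed posture check
```

When the TTL expires, the same identity and device are refused again, which is what distinguishes JIT grants from standing firewall rules.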

Agent/Gateway-Based Use Cases (Brownfield MCP Systems)

1. Secure Batch Processing from Mainframe to AI Analytics

Regulations: FFIEC, COBIT, DORA

OAuth 2.1 authenticates batch workloads. NetFoundry gateway tunnels shield mainframe-originating sessions and allow only verified identities to access analytics MCP endpoints.

2. Bank-to-Fintech API Gateways (PSD2, OpenBanking)

Regulations: PSD2, EBA RTS, DORA

OAuth 2.1 governs access tokens for payment initiation. NetFoundry ensures fintech clients reach only the authorized APIs via overlay sessions.

3. Model Ops (MLOps) Pipelines in Hybrid Clouds

Regulations: ISO 27017, DORA

OAuth 2.1 secures GitOps CI/CD flows. NetFoundry provides overlay-based build agent connectivity, with session scoping tied to pipeline runs and traceability.

4. Third-Party AML Tool Ingestion

Regulations: BSA/AML, DORA

OAuth 2.1 handles auth between the third-party MCP client and compliance services. NetFoundry enforces zone isolation and policy-per-client controls.

5. AI-Driven Customer Support for Credit Disputes

Regulations: GDPR, CCPA, SOC 2

OAuth 2.1 manages customer token lifecycle. NetFoundry prevents external help desk services from reaching core systems outside of authenticated, scoped, JIT sessions.

Conclusion: NetFoundry in Financial MCP Architectures

NetFoundry offers:

  • Strong identity via PKI, including SPIFFE SVIDs, and/or JWT-based integration with external providers, per service or per endpoint
  • Authorization policies including role, device posture, and compliance tier
  • mTLS with E2EE for all flows
  • Just-in-time (JIT) access grants
  • Multi-factor authentication for sensitive operations
  • Full observability with per-session logs and latency metrics
  • Flexible deployment (NaaS, on-prem, hybrid) for regulatory and operational fit

Used in conjunction with OAuth 2.1, NetFoundry enforces defense-in-depth at both the application and network levels—ensuring MCP-based systems meet the compliance and resilience needs of modern financial environments.

The post AI in Financial Services appeared first on NetFoundry.

]]>
Zero trust AI with NetFoundry – common use cases https://netfoundry.io/ai/ai-examples/ Sun, 27 Jul 2025 23:26:51 +0000 https://netfoundry.io/?p=43906 Free Trial Example 2: Securing a Self-Hosted LLM Web UI Example 3: Collaborative AI/ML Development Environment Example 4: Protect Public-Facing AI Application with Zero Trust Access Example 5: Hiding a Self-Hosted Inference API with NetFoundry Frontdoor Example 6: Developing a Chatbot with Local LLM and Cloud Webhooks Example 7: Live Demo of a Gradio/Streamlit AI […]

The post Zero trust AI with NetFoundry – common use cases appeared first on NetFoundry.

]]>

Using NetFoundry for simple, secure AI

While NetFoundry securely delivers the most sophisticated AI use cases, such as the use of AI and MCP in healthcare, it can also be used to simply and securely deliver more common AI use cases.

In minutes, use NetFoundry to get a private AI connection, without the hassle of VPNs, or dependencies on IP addresses and NAT. Here are some of the most common examples of using NetFoundry to securely deliver important AI use cases:

Example 1: Accessing a Local LLM from Anywhere

      • Context: A developer is running a powerful LLM (e.g., Llama 3 via Ollama) on their desktop computer at home. They want to experiment with it and access its API from their laptop while traveling.

      • Implementation: They install NetFoundry agents on both their home desktop and their laptop. Both devices are now part of a private NetFoundry overlay network with secure identities. From the laptop, they can access the Ollama API simply by using the desktop’s identity.

      • Result: Secure, private access to a home-lab LLM without any complex firewall, NAT, port forwarding or dynamic DNS configuration.

    Example 2: Securing a Self-Hosted LLM Web UI

        • Context: A team sets up a web interface like ollama-webui or Chatbot-UI to interact with their internal LLM. They don’t want this UI to be accessible from the public internet, even with a password.

        • Implementation: Install a NetFoundry agent on the server hosting the web UI and on the laptops and mobiles of team members. UI access is now seamless and their other sessions are not impacted.

        • Result: The UI is completely inaccessible to and unreachable from the Internet or any underlay network. This prevents credential stuffing attacks and unauthorized access. Optionally, NetFoundry can further simplify this by handling the certificates and encryption.

      Example 3: Collaborative AI/ML Development Environment

          • Context: A small research team is working on a project. The training dataset resides on a NAS in their office, the Jupyter notebooks are run on their individual laptops, and the model training is done on a powerful GPU instance in a cloud provider like Vast.ai or Lambda Labs.

          • Implementation: They install NetFoundry on the NAS, their laptops, and the cloud GPU instance. Now all resources can communicate over a secure, private network using secure NetFoundry identities, which are independent of IP addresses and networks. The Jupyter notebook can directly mount the dataset from the NAS, and code can be seamlessly pushed to the GPU instance for training runs.

          • Result: The team gets a secure, unified development environment across hybrid infrastructure without the overhead of setting up and managing VPNs or being dependent on IP addresses for identity or routing.

        Example 4: Protect Public-Facing AI Application with Zero Trust Access

            • Context: A company has built a custom “Ask our Docs” AI chatbot that is deployed on a server. They want to make it available to all employees, but not the general public, and they don’t want the server to be reachable from the Internet or underlay networks.

            • Implementation: They point a public subdomain to NetFoundry Frontdoor. NetFoundry policy will require any user visiting the URL to first authenticate with their corporate identity provider.

            • Result: Only authenticated employees can access the AI tool. The NetFoundry cloud protects the app from unauthorized use and bots.

          Example 5: Hiding a Self-Hosted Inference API with NetFoundry Frontdoor

              • Context: A research team has a fine-tuned model running on a server in their office. They don’t have a static IP and their firewall blocks all inbound traffic. They need to provide API access to a partner.

              • Implementation: They install the NetFoundry Frontdoor daemon on the server. The daemon creates a secure, persistent, outbound-only tunnel from their server to the NetFoundry overlay, specifically for this API.

              • Result: The partner can send API requests to the public hostname, and NetFoundry securely tunnels the traffic to the office server. The office firewall remains completely locked down, with no inbound access.

            Example 6: Developing a Chatbot with Local LLM and Cloud Webhooks

                • Context: A developer is building a Microsoft Teams bot that gets its intelligence from a locally running instance of Ollama. To receive messages from Teams, their local application needs a publicly accessible HTTPS endpoint to receive webhook calls.

                • Implementation: The developer runs their bot application on localhost. NetFoundry Frontdoor provides a public URL that securely tunnels traffic to their local application, but requires NetFoundry-enabled IdP authentication (or another method which the developer team can choose). They paste this URL into the Teams developer portal.

                • Result: The developer can easily test the full end-to-end flow of their AI bot in real-time without deploying their code to a cloud server.

              Example 7: Live Demo of a Gradio/Streamlit AI App, run locally

                  • Context: A data scientist has created an interactive AI application using Gradio to showcase a new image generation model. They want to show it to a colleague in a different office or location for immediate feedback.

                  • Implementation: They run the Gradio app locally, which starts a web server. NetFoundry restricts web server access to localhost and provides a public, authentication-protected URL.

                  • Result: Colleagues interact with the AI application live in their browser, while it runs entirely on the data scientist’s laptop. This is faster and easier than containerizing the app and deploying it to the cloud for a quick demo.

                Example 8: Securing a Temporary Shared Endpoint with OAuth

                    • Context: A dev wants to share their local LLM-powered data analysis tool with a few trusted colleagues for a day or another short period of time.

                    • Implementation: They run the NetFoundry daemon locally at the shared endpoint site, and map a URL.

                    • Result: When colleagues visit the URL, they are first forced to authenticate with an OAuth provider. NetFoundry only forwards the request upon successful authentication.

                  To use NetFoundry to securely deliver your AI use cases, start here with a free trial.

                   

                  The post Zero trust AI with NetFoundry – common use cases appeared first on NetFoundry.

                  ]]>
                  AI in Healthcare https://netfoundry.io/ai/ai-in-healthcare/ Fri, 25 Jul 2025 21:05:02 +0000 https://netfoundry.io/?p=43840 At-scale AI with MCP Model Context Protocol (MCP) is one of the most promising frameworks for at-scale AI. However, like most emerging technologies, there are operational, reliability and security issues to iron out. The good news is we can get the operational efficiency, enterprise-grade reliability and military-level security we need, even though it is still […]

                  The post AI in Healthcare appeared first on NetFoundry.

                  ]]>

                  At-scale AI with MCP

                  Model Context Protocol (MCP) is one of the most promising frameworks for at-scale AI.

                  However, like most emerging technologies, there are operational, reliability and security issues to iron out. The good news is we can get the operational efficiency, enterprise-grade reliability and military-level security we need, even though it is still the early days.

                  This post describes the solution. It then illustrates MCP security with healthcare use cases, but the themes apply for any MCP implementation. Posts on the use of MCP-based AI solutions for other areas such as manufacturing and finance will follow.

                  The MCP dilemma

                  Adoption of emerging OAuth standards will help secure the interactions between MCP agents and MCP servers.

                  However, even with the emerging OAuth implementations, the MCP server is still reachable from the networks. On a sunny day, the MCP server gets a request from an MCP agent, determines it is not authorized, and returns a 403. Not every day is sunny. On a rainy day, the MCP server is attacked from the network.

                  We could try to shield the MCP server from the network – for example, only permit certain IP addresses, or mandate that all the MCP agents run split tunnel VPN clients. However, even in a non-AI world, VPNs and IP whitelisting have proven to be operational nightmares with inconsistent reliability and performance. It is safe to say the speed, scale and lack of determinism of AI and MCP will shatter VPN and firewall ACL based methods.

                  So, how do we innovate with MCP while getting the operational efficiency, reliability, performance and security we need? How do we do AI and MCP at scale, and how do we even start without making our MCP servers into targets, or spending more time with networking and security teams than it takes to code the MCPs?

                  NetFoundry for MCP

                  One solution is to take the MCP servers off of the networks. Make them unreachable from the underlay networks – the ultimate simplification and security.

                  NetFoundry does this by identifying, authenticating and authorizing each session before it is allowed on an MCP-specific overlay network, and opening authorized sessions outbound from the MCP server.

                  In the NetFoundry architecture, there is no such thing as an MCP server listening to the underlay network – it is not reachable in that context. Think of it as a private, zero trust MCP enclave, defined by you, and enforced with modern cryptography. Done at the same speed as spinning up containers or VMs.

                  MCP security: OAuth (layer 7) + NetFoundry (layer 3)

                  NetFoundry provides an independent layer of identity, authentication and authorization, applying it at layer 3, instead of layer 7 (where OAuth functions). This is a simple but powerful combination – it means an attacker needs to breach two independent security implementations, each of which is based on strong identities and modern cryptography. It is defense in depth without the complexity.

                  Because NetFoundry authorizes each MCP session – and makes the MCP server unreachable from the underlay networks – it shields OAuth from its greatest weaknesses – such as identity theft, compromised credentials and authorization bugs. This is because the attacker can’t reach the MCP server to begin with, even if the attacker is controlling an OAuth identity or found a bug. Meanwhile, if NetFoundry is breached, then the attacker still needs to compromise OAuth.

                  Although NetFoundry operates at layer 3, it doesn’t rely on layer 3 constructs like IP address based schemes. This is why NetFoundry is so simple to implement. NetFoundry gates layer 3 access via layer 7 identity, authentication and authorization in a software-only architecture which does not depend on underlying infrastructure. This means you spin up a zero trust MCP enclave in minutes – either self-hosted (including on-prem options) or using NetFoundry NaaS (with E2EE – keys sovereign to your endpoints on a private, dedicated MCP enclave).

                  MCP security: deployment options

                  There are two main ways to deploy:

                  • Greenfield: Use NetFoundry SDKs to embed overlay network endpoints in the MCP client and/or the MCP server. They then talk across a zero trust NetFoundry overlay network (provided by NetFoundry as NaaS, or self-hosted, including in air-gapped or on-premises environments). This is an agentless approach to MCP security – the zero trust overlay goes wherever the MCP clients and servers go. This is ideal for greenfield MCP deployments, especially ones using popular MCP implementations such as LangGraph, Litmus IO, and Slim.AI.
                  • Brownfield: Use NetFoundry zero trust agents. These can go on clients and servers – as agents, containers or VMs. They are available for OT, IoT and IT devices and servers, including every major OS. They can also be deployed as gateways – for example, at a site, in a DMZ or on a cloud edge (they are in every major cloud marketplace). NetFoundry agents are also prebuilt in many browsers, proxies, firewalls and reverse proxies. Think of the NetFoundry agents as VPN clients or SD-WAN CPE…without their baggage…and with full zero trust capabilities and an integrated global zero trust overlay network (NetFoundry NaaS or self-hosted, including in air-gapped sites).

                  The above is a simplification – there are many hybrid options – as well as ways to start with zero trust on one ‘side’ (e.g. make the MCP server unreachable), while using different methods on the other side (e.g. TLS, mTLS), or at least starting there. Basically, spin up a zero trust enclave for your MCP servers and/or agents, using whatever method best suits your needs.

                  AI MCP security: 4 NetFoundry healthcare examples – agentless zero trust option

                  NetFoundry SDK-based implementation options can be extremely powerful. Use these if you have access to the application or its developers and want a private AI enclave without agents or gateways. Otherwise, skip to the 5 agent-based examples in the next section which do not require application changes.

                  Applications written in Golang are particularly simple and powerful, and LangGraph, Litmus IO and Slim.AI are three very popular MCP solutions.

                  Therefore, we use those as examples in the context of solving real-world healthcare challenges with a strong focus on simplicity, security, HIPAA compliance, reliability and performance, ensuring Protected Health Information (PHI) stays protected, and MCP servers are unreachable from the Internet or any underlay network.


                  1. Secure Remote Patient Monitoring (RPM) for Chronic Disease Management

                  This example focuses on leveraging AI and MCP to securely collect data from patients at home.

                  Scenario: A patient with congestive heart failure is sent home with a “smart” scale, a blood pressure cuff, and a pulse oximeter. These devices connect to a small gateway device in their home. A hospital needs to collect this data daily to monitor the patient’s condition and prevent readmission.

                  Solution using Slim.AI and NetFoundry:

                  • Slim.AI: The software on the patient’s home gateway device is critical. It’s running in a container that is first minified and hardened using Slim.AI. This drastically reduces its attack surface, making it much safer to have an Internet-connected device in a patient’s home, especially when NetFoundry works with it to make the device unreachable from the Internet. It also shrinks the software size, making remote updates easier and more reliable.
                  • NetFoundry Go SDK: The Go application on the gateway collects readings from the medical devices. It uses the Go SDK to dial a NetFoundry overlay service (e.g. rpm-data-ingest) to transmit the encrypted PHI. The hospital’s Electronic Health Record (EHR) integration server binds to this overlay service, accepting data only from authenticated gateways. The NetFoundry overlay, operated as NaaS or by the hospital, provides security, reliability, performance, controls and telemetry. For example, the access can be done as Just-in-Time (JIT). The overlay is a private enclave for this data.

                  Result: Patient data is transmitted securely from the home to the hospital, fully supporting HIPAA compliance. The gateway device itself is hardened, reducing the risk of it being compromised and used as an entry point to attack the hospital’s network.

                  2. Real-Time IoMT Data Aggregation for an ICU Dashboard

                  This example uses an IoMT platform to create a unified view of patient data within the most critical part of a hospital.

                  Scenario: In a hospital’s Intensive Care Unit (ICU), each bed is surrounded by multiple devices from different manufacturers: ventilators, infusion pumps, and vital sign monitors. Nurses and doctors need a single, real-time dashboard to see a holistic view of every patient, rather than checking individual device screens.

                  Solution using Litmus IO and NetFoundry:

                  • Litmus IO: Litmus Edge is installed on a gateway server within the ICU’s network segment. It connects to all the medical devices (via HL7, serial, or other protocols), collecting, parsing, and normalizing the raw data into a standard FHIR (Fast Healthcare Interoperability Resources) format.
                  • NetFoundry Go SDK: A Go-based “FHIR Forwarder” application reads the standardized data streams from Litmus Edge. It uses the Go SDK to establish a secure, outbound-only connection to the hospital’s central clinical dashboard and data lake, which are listening on a private NetFoundry overlay service. In this case, the NetFoundry overlay network is deployed on-premises, and air-gapped.

                  Result: The ICU gets a real-time, comprehensive patient dashboard. The data stream is incredibly sensitive, and NetFoundry ensures it is completely isolated from the main hospital network and the Internet, preventing snooping or data tampering. Litmus simplifies the massive challenge of device interoperability at the edge.

                  3. AI-Powered Diagnostic Assistant for Radiologists

                  This example uses an LLM agent to help physicians interpret complex medical images and patient histories more efficiently.

                  Scenario: A radiologist is reviewing a new CT scan for a patient with a long and complex medical history. To make an accurate diagnosis, they need to quickly understand the patient’s prior conditions, lab results, and genetic markers, and compare the current scan to previous ones.

                  Solution using LangGraph and NetFoundry

                  • LangGraph: The hospital deploys a secure, internal “Radiology Copilot” service built with LangGraph. When the radiologist opens a new scan, this agent automatically queries the EHR for the patient’s full history; accesses the Picture Archiving and Communication System (PACS) to pull prior relevant images; queries a genomic database for relevant markers mentioned in the patient’s file; and feeds this context into a specialized medical LLM to generate a concise summary of “key things to look for.”
                  • NetFoundry SDK: The radiologist’s workstation and the LangGraph server communicate exclusively over a NetFoundry overlay network. The physician’s request and the resulting AI-generated summary (containing PHI) are never exposed to the internet. All backend queries made by the LangGraph agent to the EHR and PACS also travel over discrete, zero-trust NetFoundry services. The NetFoundry overlay can be NaaS or on-prem in this example, depending on whether the radiologists are on-prem.

                  Result: The radiologist can make faster, more informed decisions. LangGraph automates the laborious data gathering process, while NetFoundry provides the critical, HIPAA-compliant security fabric that allows these sensitive AI-driven interactions to happen safely.

                  4. Secure and Auditable Prescription Fulfillment Workflow

                  This example focuses on securing the communication between a physician, a pharmacy, and an insurance provider.

                  Scenario: A doctor prescribes a specialized, high-cost medication. This action needs to be securely sent to the patient’s chosen pharmacy, and a pre-authorization request must be sent to the insurance provider. This workflow is highly sensitive and a target for fraud.

                  Solution using Slim.AI and NetFoundry:

                  • Slim.AI: The microservices that handle e-prescribing and insurance claims processing are each packaged in containers hardened by Slim.AI. This reduces vulnerabilities in the applications that process some of the most sensitive patient and financial data.
                  • NetFoundry Go SDK: The entire workflow is built on NetFoundry services. The doctor’s EHR client uses the Go SDK to send the prescription to an e-prescribe hub service. The hub forwards it to the specific pharmacy’s endpoint, also a private NetFoundry service. Simultaneously, the hub sends an authorization request to the insurance provider’s pre-auth-gateway service. The NetFoundry overlay is provided as NaaS in this example.

                  Result: The prescription workflow is verifiably secure and private end-to-end. There are no public-facing APIs to attack. This zero-trust architecture prevents prescription fraud, protects patient data, and creates a clear, auditable trail of communication between authenticated parties only.

                  MCP security: 5 NetFoundry healthcare examples – agent-based zero trust

                  The above 4 examples are unbelievably powerful because they embed the zero trust network in the actual application. However, sometimes we don’t have control of the application code, or it is simpler to front-end the application with NetFoundry agents or gateways, on the client and/or server side.

                  Therefore, here are some agent-based examples of solving real-world healthcare challenges with a strong focus on simplicity, security, HIPAA compliance, reliability and performance, ensuring Protected Health Information (PHI) is protected and MCP servers are unreachable from the Internet or any underlay network.

                  1. AI-Assisted Radiology in a Multi-Hospital Network

                  Scenario: A regional hospital network uses an AI-powered diagnostic engine (LLM + computer vision) hosted in a private Azure VNet. Radiology devices in each hospital (MCP clients) need to send imaging data securely to the MCP server running the inference engine.

                  Solution:

                  • Each imaging system connects outbound-only via NetFoundry agents in Azure and the hospital.
                  • No inbound ports exposed on the central AI inference server.
                  • Policies restrict access to authorized device identities (e.g., radiology-client-east) and NetFoundry provides visibility of what identity accessed each service.
                  • Meets HIPAA/ISO 27001 segmentation and audit standards.

                  2. Secure Telepathology AI Between Rural Clinics and Urban Labs

                  Scenario: Pathology samples are digitized at rural clinics. A remote AI pathologist (LLM + deep learning model) hosted in a central university lab reviews and annotates them.

                  Solution:

                  • The digitization station is the MCP client.
                  • The lab’s review agent is the MCP server.
                  • NetFoundry agent or gateway ensures all traffic is identity-authenticated, encrypted, and never exposed to the public Internet.

                  Value:

                  • No VPNs or firewall holes needed in rural clinic networks.
                  • Enables scaling to dozens of clinics with centralized management. All access can be Just-in-Time (JIT), one-time or persistent.
                  • Granular, identity based telemetry.

                  3. AI Copilot for Medical Coding (CDI/RCM) in Hybrid Workforce

                  Scenario: A health system deploys a GenAI assistant to help clinical documentation improvement (CDI) specialists and revenue cycle (RCM) analysts review medical notes for better coding accuracy. Analysts work from home and must connect to a model inference engine inside a secure EHR-integrated data center.

                  Solution:

                  • MCP clients run on analysts’ local desktops along with NetFoundry agents. By policy, NetFoundry agents only touch the MCP sessions, so the analysts’ other work is unaffected.
                  • All MCP requests go outbound through NetFoundry, reaching the centralized inference server over mTLS with E2EE.
                  • Fine-grained access ensures only active sessions with proper credentials are allowed, reducing PHI exposure risks.

                  4. AI Decision Support in Operating Rooms

                  Scenario: OR equipment vendors integrate AI-based decision support tools (e.g., fluid monitoring, anesthesia dosing) that interact with centralized hospital systems for real-time recommendations.

                  Solution:

                  • The OR tool is the MCP client.
                  • The AI backend server is hosted centrally (e.g., in a vendor cloud such as Medtronic’s, or on a health system-owned HPC cluster).
                  • NetFoundry tunnelers protect both ends, removing the need for VPNs or exposed firewall ports in the OR subnet.

                  5. Secure AI-Powered Patient Engagement from EHR Portals

                  Scenario: A GenAI chatbot answers questions about medications, lab results, and discharge summaries for patients via their portal.

                  Solution:

                  • The EHR portal backend is the MCP client (it calls the LLM engine).
                  • The LLM engine is hosted in a non-EHR-connected cloud (e.g., Azure OpenAI in a private subnet).
                  • NetFoundry agent or gateway ensures that no one except the authenticated portal server can talk to the LLM.
                  • No inbound ports open in cloud LLM environment.
                  • All access tied to patient session tokens and hospital policies.

                  The post AI in Healthcare appeared first on NetFoundry.
