apicontext.com https://apicontext.com You can't fix what you can't see. Fri, 20 Mar 2026 23:31:01 +0000 en-US hourly 1 https://wordpress.org/?v=6.9.4 https://apicontext.com/wp-content/uploads/2022/08/APIContext-favicon-1-150x150.png apicontext.com https://apicontext.com 32 32 Securing the Signal: APIContext Now Supports mTLS for OpenTelemetry and Webhook Alerts https://apicontext.com/securing-the-signal-apicontext-now-supports-mtls-for-opentelemetry-and-webhook-alerts/ Fri, 20 Mar 2026 23:24:29 +0000 https://apicontext.com/?p=43296 APIContext expands alert security with mTLS certificates.]]>

When an alert fires, it needs to reach the right system fast, reliably, and without question as to its authenticity. Operational teams aren’t just dealing with noisy dashboards; they’re managing alerts flowing in from a growing ecosystem of synthetic monitors, APM tools, log aggregators, and custom pipelines. In that environment, the provenance of a signal matters as much as its content.

Today, APIContext is adding mutual TLS (mTLS) certificate support to both its OpenTelemetry alert delivery and its Webhook alert delivery. For SRE and DevOps teams, this closes a meaningful security gap in the observability pipeline.

What mTLS Actually Means for Alert Delivery

Standard TLS is the baseline for modern HTTPS traffic. It encrypts the connection and lets the client verify the server’s identity via its certificate. But with standard TLS, the server can’t verify who is calling it. For most web traffic, that’s fine. For operational signals arriving at your alerting infrastructure, it’s a meaningful blind spot.

Mutual TLS (mTLS) closes the gap by requiring both parties to authenticate. The server presents a certificate to the client, and the client presents a certificate back. The connection is only established if both certificates are valid and signed by a trusted Certificate Authority. In practice, this means that your alert receiver can cryptographically verify that an incoming alert payload originated from APIContext, and not from a misconfigured pipeline, a network-level replay attack, or a spoofed source.

The authentication mechanism is enforced at the transport layer, before any application-level logic runs. This operates independently of application-layer authentication like OAuth 2.0 tokens, which are scoped to the session rather than cryptographically bound to the system sending the request.
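To make the transport-layer enforcement concrete, here is a minimal sketch using Python's standard `ssl` module of what an mTLS-enforcing alert receiver looks like on the server side. The file paths and the common-name pin are illustrative assumptions, not APIContext specifics.

```python
import ssl


def mtls_receiver_context(ca_file: str, cert_file: str, key_file: str) -> ssl.SSLContext:
    """Server-side TLS context for an alert receiver that demands a client cert."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.load_cert_chain(cert_file, key_file)   # the receiver's own identity
    ctx.load_verify_locations(cafile=ca_file)  # CA expected to have signed the sender's cert
    ctx.verify_mode = ssl.CERT_REQUIRED        # handshake fails without a valid client cert
    return ctx


def sender_matches(peer_cert: dict, expected_cn: str) -> bool:
    """Optional application-level pin on the already-validated client certificate.

    `peer_cert` has the shape returned by ssl.SSLSocket.getpeercert():
    {'subject': ((('commonName', '...'),), ...), ...}
    """
    if not peer_cert:
        return False  # no client certificate was presented
    subject = {k: v for rdn in peer_cert.get("subject", ()) for k, v in rdn}
    return subject.get("commonName") == expected_cn
```

With `verify_mode = ssl.CERT_REQUIRED`, the handshake itself rejects unauthenticated senders before any request body is read; the `sender_matches` check is a belt-and-braces identity pin on top.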

For teams operating under zero-trust principles, mTLS is the right framework. It’s the same mechanism that secures service-to-service communication in Kubernetes clusters, and it’s increasingly expected across the observability toolchain as well.

mTLS Adoption Across the Observability Stack

mTLS has been a first-class citizen in service mesh architectures for years. Istio, Linkerd, and Consul Connect all use it by default for east-west traffic between microservices. Its extension into the observability layer is a natural progression, particularly as OTel has matured.

The OpenTelemetry Collector’s configtls module has supported mTLS configuration on both receivers and exporters for some time. In receiver mode, the Collector can require incoming telemetry sources to present a client certificate — validating that data is arriving from a known and trusted instrumented service. In exporter mode, the Collector presents its own certificate when forwarding telemetry to a backend, ensuring it is recognized as an authorized data source rather than an arbitrary forwarder. The OTel community has progressively formalized these patterns, with the CNCF documenting production deployments that expose OTel Collectors via the Kubernetes Gateway API with mTLS enforced end-to-end.
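For reference, a sketch of what configtls-based mTLS looks like in a Collector configuration, requiring client certificates on the receiving side and presenting one on the exporting side. File paths and endpoints here are illustrative, not defaults.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
        tls:
          cert_file: /certs/collector.crt
          key_file: /certs/collector.key
          client_ca_file: /certs/ca.crt   # require and verify client certificates (mTLS)

exporters:
  otlphttp:
    endpoint: https://backend.example.com:4318
    tls:
      cert_file: /certs/collector-client.crt  # the Collector's own client identity
      key_file: /certs/collector-client.key
      ca_file: /certs/backend-ca.crt
```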

The trajectory is clear: mTLS is becoming expected infrastructure, not a security-team opt-in.

Where adoption has lagged is on the alerting output side — specifically, when monitoring tools push signals outbound to consuming systems. That’s exactly the gap APIContext is addressing.

APIContext now supports mTLS on both sides — the monitoring and the alerting — making the entire signal chain auditable and cryptographically verifiable.

What This Means in Practice

Configuring mTLS for your APIContext alerts requires a client certificate and private key, signed by a CA that your consuming system trusts. Once configured, APIContext presents these credentials on every alert delivery, whether that delivery is going to an OTel collector endpoint or a custom webhook receiver.

For SRE teams, the operational impact is concrete:

Zero-trust alert consumers are now viable. If your internal alerting infrastructure enforces mTLS on all inbound connections, you no longer need to carve out an exception for APIContext traffic or rely on IP allowlisting as a compensating control. APIContext presents a certificate, your system verifies it, the connection succeeds.

Alert provenance is cryptographically verifiable. mTLS verifies identity at the transport layer using PKI — the same infrastructure you likely already use for service-to-service authentication in your cluster.

Compliance posture improves. Regulated industries increasingly require mutual authentication for data-in-transit. Having a synthetic monitoring vendor that can satisfy this requirement without a security exception simplifies audit conversations.

Certificate lifecycle becomes the management surface. This does introduce an operational responsibility: certificates expire, and you’ll want to integrate APIContext certificate rotation into your existing cert management workflow (cert-manager, Vault PKI, or your CA of choice). Treat the APIContext client certificate like any other service identity credential in your environment.
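A simple way to keep rotation ahead of expiry is to watch the `notAfter` timestamp on the client certificate. A minimal sketch using the stdlib helper `ssl.cert_time_to_seconds`, with the 30-day threshold as an illustrative default:

```python
import ssl
import time


def days_until_expiry(not_after, now=None):
    """Days until a certificate's notAfter timestamp expires.

    `not_after` uses the format found in ssl.getpeercert()['notAfter'],
    e.g. 'Jan  5 09:34:43 2018 GMT'.
    """
    expires = ssl.cert_time_to_seconds(not_after)  # stdlib parser for this exact format
    now = time.time() if now is None else now
    return (expires - now) / 86400.0


def needs_rotation(not_after, threshold_days=30.0):
    """Flag a client certificate for rotation well before it expires."""
    return days_until_expiry(not_after) < threshold_days
```

In practice you would feed this from your cert inventory (cert-manager and Vault PKI expose the same expiry data natively) and alert on the threshold rather than the hard expiry date.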

Closing the Loop on Observability Security

Observability tools occupy a privileged position in your infrastructure. They have broad read access to service health, they receive detailed error and latency signals, and they push operational decisions to the teams responsible for keeping your systems running. That privilege makes them an attractive target for spoofing, and it makes the integrity of their signals operationally critical.

Adding mTLS to OTel and webhook alert delivery is part of a broader commitment to treating the alerting pipeline as first-class infrastructure — not just a convenience layer for getting messages to Slack, but a security boundary with the same rigor you’d apply to any service-to-service communication in a production environment.

If you’re already running mTLS-secured alert consumers, you can configure your APIContext client certificates today in the Alerts settings. If you’re planning a zero-trust rollout of your observability stack, this gives you one less exception to manage.

]]>
Operational Resilience in Practice: Interpreting External Signals During Digital Disruption https://apicontext.com/operational-resilience-in-practice-interpreting-external-signals-during-digital-disruption/ Thu, 05 Feb 2026 19:27:54 +0000 https://apicontext.com/?p=43272 What operational resilience looks like in practice when failures are upstream, intermittent, and difficult to classify.]]>

In 2025, digital disruption stopped being an abstract risk and became an operational reality.

Across finance, healthcare, transport, government, and energy, organisations experienced incidents that did not originate inside their own environments, but still had real, immediate impact on services, customers, and regulatory obligations. These events exposed a structural challenge in how resilience is measured and managed today.

At the UK Public Sector Cyber Security Conference, we explored what operational resilience looks like in practice when failures are upstream, intermittent, and difficult to classify, and why external verification is becoming a critical capability.

A Shared Digital Backbone, Shared Risk

Most critical industries now rely on the same digital delivery chain. Cloud platforms, DNS providers, identity systems, CDNs, and third-party SaaS services form a shared backbone that underpins national infrastructure and enterprise systems alike.

This has created a situation where failures cascade across sectors. A single misconfiguration or control plane issue can impact retail, transport, financial services, and government systems simultaneously.

The internet was originally designed to be decentralised and resilient. Today, it is increasingly centralised by convenience. While redundancy still exists at the infrastructure layer, control planes have consolidated, creating new single points of failure that are hard to observe from inside an organisation.

The Dependency Circle and the Monitoring Gap

Most organisations monitor what they own. Far fewer monitor what they depend on.

Internal dashboards often remain green during external failures. Status pages may rely on the same infrastructure that is degraded. Control plane issues frequently bypass internal instrumentation altogether.

In 2025, more than half of major digital incidents originated upstream. During these events, teams were repeatedly forced to answer the same high-stakes question: is this an attack, an internal incident, a systemic issue, or a third-party failure?

Getting that classification wrong has consequences. It affects escalation paths, incident response, communications, and regulatory reporting. Misclassification increases both operational and compliance risk.

What 2025 Taught Us About Failure Modes

Looking across multiple large-scale incidents in 2025, clear patterns emerged.

Several events were triggered by latent configuration defects that had been introduced weeks earlier and only surfaced when an unrelated change propagated through the network. Others were regional edge failures that appeared selective internally but were clearly systemic when viewed externally.

In multiple cases, we observed:

  • Elevated DNS and TLS handshake latency before availability dropped
  • Immediate TCP resets rather than clean timeouts
  • High variance across regions and edge locations
  • Partial traffic serving, where some users succeeded while others consistently failed

These were not clean outages. They were ambiguous, uneven, and difficult to reason about without an independent external view.

One of the most challenging scenarios involved partial availability. When some nodes continue to serve traffic and others do not, internal metrics can mask the real customer experience. From a regulatory perspective, this creates blind spots around service availability and impact assessment.
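One way to see why the outside-in view matters: aggregating per-region synthetic probe results distinguishes partial availability from a clean outage, even when internal dashboards stay green. The heuristic below is purely illustrative; the region names and thresholds are assumptions, not a product algorithm.

```python
def classify_outage(results):
    """Classify per-region synthetic probe outcomes.

    `results` maps region name -> list of booleans (True = request succeeded).
    Thresholds are illustrative: near-100% everywhere is healthy, near-0%
    everywhere is a full outage, and anything mixed is partial availability.
    """
    rates = {region: sum(checks) / len(checks) for region, checks in results.items()}
    if all(rate >= 0.99 for rate in rates.values()):
        return "healthy"
    if all(rate <= 0.01 for rate in rates.values()):
        return "full outage"
    return "partial availability"
```

The "partial availability" branch is the one internal metrics most often miss: aggregate success rates can look acceptable while an entire region or edge population is consistently failing.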

Why External Verification Changes the Equation

Resilience is the goal. Verification enables it.

An external, outside-in signal provides a different lens. It shows how services behave in the real world, from multiple locations, independent of internal tooling and assumptions. It complements the SOC, NOC, and existing observability stacks rather than replacing them.

External verification helps teams:

  • Distinguish between internal and upstream failures
  • Detect correlated degradation across regions and providers
  • Reduce time to innocence during incidents
  • Make faster, more confident operational decisions

Most importantly, it removes guesswork during widespread disruption, when assumptions are most likely to fail.

From Observability to Operational Response

Another lesson from 2025 is that telemetry alone is not enough.

As data volumes grow, the bottleneck shifts from collection to action. Teams need help interpreting signals and responding quickly, especially when responsibility spans multiple suppliers.

This is why operational models matter. Managed response, triage, and routing turn raw telemetry into outcomes. When external signals are combined with operational expertise, organisations can respond faster without scaling internal headcount or complexity.

Preparing for Mandated Resilience

Looking ahead, regulatory expectations are increasing.

The upcoming resilience legislation introduces mandatory incident transparency, significant penalties for non-compliance, and an expanded scope that includes managed service providers and data centres. Resilience is no longer optional, and it is no longer confined to what sits inside the enterprise perimeter.

Verification must span internal systems, external dependencies, and the wider supply chain. Organisations will be expected to demonstrate not just that they monitor systems, but that they can validate service behaviour independently during disruption.

The Role of APIContext

APIContext exists to provide that verification layer.

By measuring how services behave externally, across clouds, networks, and partners, APIContext helps organisations understand what is actually happening when complexity surfaces. It provides clarity during disruption and confidence during calm.

As the internet evolves toward more autonomous, machine-driven workflows, this need will only grow. Systems will execute faster, dependencies will deepen, and tolerance for ambiguity will shrink.

Operational resilience in this environment depends on one thing above all else: knowing, with confidence, whether your services work when it matters most.

]]>
APIContext in 2025: Building for an Autonomous Internet https://apicontext.com/apicontext-in-2025-building-for-an-autonomous-internet/ Tue, 23 Dec 2025 21:06:23 +0000 https://apicontext.com/?p=43258 How APIContext built in 2025 for an internet increasingly driven by machine-initiated traffic.]]>

2025 was a pivotal year for APIContext because the internet started working in a fundamentally different way.

Usage of digital services is no longer primarily from human interactions. Increasingly, applications are executed by software acting on behalf of users, businesses, and systems. Workflows now span APIs, web interfaces, and distributed compute nodes, operating continuously and at machine speed. We call this the autonomous internet.

The autonomous internet does not replace existing use cases; it compounds them. Every digital interaction must now support explosive growth in machine-initiated traffic while continuing to deliver the reliability users already expect.

Our focus in the past year was to build for that new reality before it fully arrives.

From Endpoints to Distributed Execution

Traditional monitoring still assumes that services are experienced at the edge of an application, through a single interface or endpoint. But modern digital journeys are executed across distributed systems, often far from the user and outside the direct control of the enterprise.

What matters is not theoretical network behaviour or abstract metrics. Application owners care about whether a distributed workload is executed quickly and correctly, wherever it runs.

APIContext is built around this reality. We model how services are actually delivered today, across clouds, networks, and third-party infrastructure, and validate outcomes rather than isolated components.

Expanding Visibility Where Workloads Run

In 2025, our roadmap prioritised expanding network visibility as a first-order capability. As enterprises depend on a growing set of compute partners, DNS providers, and CDN platforms, understanding where execution happens and how it behaves has become essential.

This led us to deepen geographic and network context, integrate additional signals from CDN and infrastructure partners, and help customers distinguish between issues they own and issues that originate elsewhere in the delivery chain.

The goal was not more data, but clearer answers.

Turning Telemetry into Action

As telemetry volumes increase, the limiting factor is no longer collection, but response. Many organisations struggle with what to do once an issue is detected, especially when responsibility spans multiple suppliers.

Our expanded relationship with Akamai addressed this directly. By embedding APIContext as a signal layer inside a managed service, telemetry becomes operational by default. Issues are not just observed, they are triaged and acted on without adding burden to internal teams.

This partnership also drove a major shift in how we architect our own platform. By decoupling the data pipeline from the core product, we created far greater flexibility in how telemetry can be accessed, analysed, and applied across use cases.

Nodes as the Unit of Reliability

As compute becomes more distributed, the application is only working well if every node that delivers the workload is working well. Whether that node runs in a public cloud, at the edge, or inside a private enterprise environment, it must behave consistently.

In response, we made it significantly easier for customers to deploy private monitoring nodes, reflecting the reality that so much autonomous computing happens behind the firewall. These environments are where organisations are experimenting, learning, and preparing for broader adoption.

Reliability must follow the workload, not the perimeter.

Showing How Autonomous Systems Experience the Internet

Machines experience the internet differently from humans. They do not adapt to degraded performance or partial failure. A workflow either completes or it does not.

That understanding shaped our expansion into machine-driven browsing and MCP interaction. By validating how autonomous systems traverse websites, APIs, and compute nodes, APIContext can assess whether complex, multi-modal workflows behave as intended.

As websites focus on adding AEO to their SEO strategies, and agentic AI operations move from experiments to production-grade, we are already identifying the weak links in those delivery chains.

What’s Next

As digital services become more autonomous, understanding where work is executed becomes just as important as understanding what failed.

Today, APIContext operates across all major cloud environments, including Google Cloud, Azure, AWS, Akamai, Linode, and emerging providers. This breadth is intentional. It reflects how modern services are actually delivered, across multiple compute providers, regions, and execution models, often within the same workflow.

What comes next is expanding that perspective even further.

We are rolling out support for additional cloud environments, with Alibaba Cloud up next, to help customers operate confidently across global and regional infrastructure. As enterprises expand into new markets and adopt more distributed architectures, they need a consistent way to answer a simple but critical question: when something degrades, is the issue internal, third-party, or infrastructure-related?

By continuing to broaden where APIContext runs, we make that distinction clearer. The goal is not just coverage, but clarity, giving teams the ability to reason about distributed compute the way it actually behaves, regardless of provider.

This expansion reinforces our core belief: reliability in an autonomous internet depends on understanding execution across nodes, not just monitoring individual components.

Thank you

Thank you for trusting us to run over a billion tests a year on your behalf. By doing so, you’ve helped us stay relentlessly outcome-focused, reducing time to innocence, accelerating time to resolution, and ensuring you see real ROI from your testing investment.

2025 was a year of laying foundations and making deliberate bets on where software development and distributed compute are heading. We’re ready for 2026.

Happy holidays from the APIContext team. We’ll keep validating that your services are working, and hope you get the chance to log off for a bit.

]]>
Resilience Monitoring for the Agentic AI Era: MCP Monitoring https://apicontext.com/resilience-monitoring-for-the-agentic-ai-era-mcp-monitoring/ Thu, 20 Nov 2025 00:41:23 +0000 https://apicontext.com/?p=43089 Autonomous agents are already probing your endpoints. Are your systems built to handle them?]]>

If you’re responsible for keeping production up, agentic AI probably terrifies you a little.

Over the past year, we’ve watched “let’s hack a quick agentic demo” quietly turn into “this thing is now in the critical path for customers.” And now there’s a new piece of infrastructure sitting right in the middle of it all: MCP servers.

Today, we’re announcing MCP Monitoring in the APIContext platform. This new capability gives SRE and platform teams real visibility into how AI agents are actually talking to tools over MCP, and whether those workflows can meet real-world performance budgets.

From “cool demo” to “critical path” in under a year

Model Context Protocol (MCP) launched in November 2024, and it’s already becoming the default way to wire AI agents into APIs, databases, and SaaS apps through a unified interface.

The pattern we’re seeing with customers is pretty consistent:

  • The more they offload to MCP, the better the agent experience gets. LLMs are fantastic at reasoning; they are terrible at “remembering” how to call an internal billing API or a finicky third-party system. If you don’t put that logic in MCP, the model will happily hallucinate an action path and fail in ways that are almost impossible to debug later.

  • MCP is quickly turning into critical infrastructure. It becomes the broker between “AI that wants to act” and “systems that must not break.” That’s a dangerous place to be flying blind.

  • Teams are being forced to run their own MCP servers. If you care about API governance, rate limits, security controls, and data residency, you can’t just let a random public MCP endpoint talk to your APIs and hope for the best. You end up hosting your own MCP servers so you can mediate access – and that’s new overhead SRE and platform teams did not have on their roadmaps 90 days ago.

And yet, the operational tooling around all of this is still in the “we’ll figure it out later” stage. That’s not acceptable in 2026.

The new blind spot: the compute chain you don’t control

AI workflows now depend on a distributed compute chain that crosses multiple vendors, clouds, and protocols – most of which you don’t control.

Today, that chain looks something like:

User → Frontend → LLM provider → MCP server → Auth → Downstream APIs / SaaS apps / services → their underlying databases, and all the way back again.

Traditional monitoring gives you pieces of this:

  • APM shows you what your services are doing.

  • LLM observability tools show you prompts, tokens, and maybe some function-call logs.

  • API monitoring shows you individual endpoints, sometimes in isolation.

But the MCP server sits right at the center of that chain – coordinating tools, enforcing access, and orchestrating calls – and in most stacks, it’s effectively a black box.

That’s where the nastiest failures hide:

  • Silent timeouts that get swallowed by the agent and turned into “sorry, something went wrong” responses.

  • Latency blowups when a single MCP tool call trips over an auth bottleneck or a slow third-party API.

  • Drift in workflows where changes to tools, schemas, or auth flows break paths the LLM still believes are valid.

These aren’t neat “500 error” failures. They’re the kind that burn weeks of engineering time and destroy trust in the AI system – both for customers and for the teams asked to support it.

What we’re launching: MCP Monitoring

APIContext’s new MCP Server Monitoring puts hard numbers around that entire agent–MCP–tool interaction, so teams can treat MCP as critical infrastructure.

At a high level, it does three things:

1. Performance budgets for agentic workflows

You can’t run a voice agent, chat assistant, or multi-step workflow on vibes. You need to know:

  • How long each MCP interaction takes end-to-end.

  • How that latency breaks down across the MCP server, auth, and downstream APIs.

  • Whether you’re staying inside the performance budgets required to meet your SLAs.

With MCP monitoring, we track latency at the MCP layer and correlate it with downstream dependencies, so you can set and enforce realistic SLOs for agentic workflows.
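The budget arithmetic itself is simple; what matters is attributing one end-to-end sample to its segments. A hypothetical sketch (segment names like `mcp`, `auth`, and `downstream` are illustrative, not the platform's actual schema):

```python
def check_latency_budget(segments_ms, budget_ms):
    """Test one agentic request against an SLO budget.

    `segments_ms` maps a segment name (e.g. 'mcp', 'auth', 'downstream')
    to its measured latency in milliseconds. Returns the total, whether
    it fits the budget, and which segment contributed the most.
    """
    total = sum(segments_ms.values())
    worst = max(segments_ms, key=segments_ms.get)
    return {
        "total_ms": total,
        "within_budget": total <= budget_ms,
        "worst_segment": worst,
    }
```

A 500 ms voice-agent budget blown by a 380 ms downstream call immediately points the incident at the third-party API rather than the MCP server or the model.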

If your voice AI support system is leaving callers in dead air because it’s waiting on MCP, you’ll see it before customers rage-escalate to a human and your AI ROI evaporates.

2. Root cause, not vibes-based blame

When something is slow or broken, the first question in every incident channel is: “Whose fault is this?”

MCP Monitoring lets you answer:

  • Is the agent itself stalling?

  • Is the MCP server overloaded or misconfigured?

  • Are we blocked on auth, rate limits, or policy checks?

  • Is a downstream API or SaaS tool the real culprit?

We surface where latency and errors originate – agent, MCP, auth, or downstream service – so SREs don’t waste cycles chasing ghosts in the wrong layer.

3. Reliability in production, not just in demos

Early agentic projects often “work fine in staging,” right up until:

  • traffic spikes

  • a vendor changes a schema

  • a new tool is added without proper testing

  • or a subtle auth rule changes behavior for a subset of users

MCP monitoring continuously tracks drift and errors in live workflows so you can catch regressions before they show up as churn or support tickets. It’s a live resilience signal: how your machines are actually experiencing your digital services, not just what the dashboards say should be happening.
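Drift detection of this kind can be sketched as a diff between the tool inventory an agent was validated against and what the server exposes now. The tool and parameter names below are hypothetical examples:

```python
def detect_tool_drift(baseline, current):
    """Diff two MCP tool inventories.

    Both arguments map tool name -> set of parameter names. Tools or
    parameters that disappear between baseline and current can silently
    break workflows the LLM still believes are valid.
    """
    drift = {
        "missing_tools": sorted(set(baseline) - set(current)),
        "changed_params": {},
    }
    for tool in set(baseline) & set(current):
        removed = baseline[tool] - current[tool]
        if removed:
            drift["changed_params"][tool] = sorted(removed)
    return drift
```

The point of running this continuously, rather than at deploy time, is that the drift usually originates with a vendor or another team, not with your own release pipeline.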

Why this matters now

DevOps teams are now on the hook for uptime, latency, and reliability of systems where a large part of the behavior is being decided by an LLM and mediated by MCP.

They didn’t sign up for that responsibility, but it’s happening anyway.

Every time a business moves a workflow from “human in the loop” to “agent in the loop,” the cost of failure goes up and the tolerance for “AI being weird” goes down. When that failure path involves MCP, traditional APM and logs are not enough. You need an explicit observability layer for MCP itself.

If you believe that agentic AI is going to power customer support, operations, and revenue-generating workflows, then the surface area that needs resilience monitoring just expanded – and MCP is right at the center of that expansion.

What to do next

If you’re already running MCP in production (or if you will be soon!), now is the time to get ahead of the operational risk.

  • See how MCP monitoring works in detail: Visit our MCP feature page to explore the capabilities.

  • Talk to us about your AI stack: If you’re piloting or scaling agentic workloads and you’re worried about supporting them in production, we’d love to compare notes and show you what we’re seeing across customers.

Agentic AI is moving out of the lab. The question isn’t whether you’ll monitor MCP. It’s whether you’ll do it before or after your first major AI incident.

]]>
A New Chapter of Acceleration: Welcoming Lelah Manz as APIContext Board Chair https://apicontext.com/a-new-chapter-of-acceleration-welcoming-lelah-manz-as-apicontext-board-chair/ Tue, 14 Oct 2025 11:00:59 +0000 https://apicontext.com/?p=43001 We are excited to welcome Lelah Manz to the APIContext team as our new Board Chair.]]>

There are moments in a company’s journey that feel less like a decision and more like a natural evolution built on years of shared vision, trust, and hard-earned momentum.

For all of us at APIContext, bringing Lelah Manz on as Chair of our Board is exactly one of those moments.

I’ve known Lelah for a long time, and she has seen it all. Over nearly two decades at Akamai, she helped steer one of the most significant growth transformations in enterprise infrastructure — taking a CDN company and helping evolve it into a multi-billion-dollar leader in security, data, and compute services.

That type of trajectory doesn’t happen by accident. It takes someone who understands how markets grow, can see around corners, and actively designs the systems, the teams, and the partnerships that make that growth possible.

Over the past year, APIContext has deepened its collaboration with Akamai, including the launch of a new managed API performance service powered by our platform. It’s been a powerful signal of where the market is headed: machine-driven traffic has overtaken human traffic. Resilience and conformance are no longer “support functions” — they’re strategic differentiators.

In this new world, APIContext isn’t just a monitoring platform. We’re building a new signal layer for digital infrastructure built for automation, compliance, and machine-scale reliability.

Lelah saw that before most. Her leadership at Akamai shaped how the internet is delivered. Now she’ll help us shape how it’s verified. Our mission has always been clear: to make resilience measurable, enforceable, and actionable at every connection point. With Lelah, we’re positioned to accelerate that vision faster and more confidently than ever.

Bringing in Lelah now adds operating depth and accelerates the entire organization. I couldn’t be more excited to have her alongside us as we scale APIContext.

]]>
Your APIs, Optimized: APIContext Powers New Akamai Managed Service for API Performance https://apicontext.com/your-apis-optimized-apicontext-powers-new-akamai-managed-service-for-api-performance/ Tue, 07 Oct 2025 20:22:55 +0000 https://apicontext.com/?p=42977 Akamai launches a Managed Service for API Performance, powered by APIContext’s synthetic monitoring platform.]]>

Your teams are supposed to be shipping code, building features, and driving innovation. But too often they get bogged down in investigations and optimizations because the business and your users demand flawless performance.

Now you can achieve expert-level API performance and resilience without adding any new headcount.

I’m excited that Akamai has launched its Managed Service for API Performance, powered by APIContext’s advanced synthetic monitoring platform.


For Teams with No Time to Spare

This new managed service is designed specifically for enterprise application teams who need to guarantee API performance without diverting focus from core development. It allows teams of any size to see real-world performance gains without spending engineering time tracking down and implementing performance enhancements.

Akamai’s service essentially acts as an extension of your team, leveraging our technology to deliver:

  • Reduced Operational Load: Offload the infrastructure and day-to-day monitoring tasks, freeing your team to focus on innovation and speed.

  • Proactive Detection: Akamai experts use our platform’s advanced synthetic monitoring to detect, notify, and mitigate latency and resilience issues before they impact your users.

  • Faster Incident Response: You get the benefit of 24/7 expert oversight. Akamai’s experts triage every alert, providing actionable intelligence that reduces the mean time to identify and remediate issues.


The Technology That Makes It Possible

We are incredibly proud that Akamai chose APIContext as the engine for this critical managed service. It’s a powerful validation of our best-in-class technology.

At its core, the service uses our advanced synthetic monitoring capabilities to provide continuous, end-to-end testing of APIs with global coverage. This allows Akamai’s team to uncover subtle trends—like rising response times, region-specific slowdowns, or schema mismatches—and provide tailored action plans to drive continuous optimization.

It can even support operations in regulated industries, helping teams demonstrate compliance with stringent SLA requirements and simplifying audit readiness for regulatory mandates.

A Partnership Built on Performance

This collaboration represents a natural synergy: APIContext’s industry-leading API monitoring technology combined with Akamai’s global scale and world-class managed services. The result is a seamless solution that allows any enterprise to ensure their APIs are available, fast, and compliant, without the operational overhead.


Stop reacting to API problems and start proactively assuring performance.

To learn more about how your team can benefit, visit the Akamai Managed Service for API Performance overview page.

]]>
Enterprises Aren’t Rebuilding for AI — They’re Fronting APIs https://apicontext.com/enterprises-arent-rebuilding-for-ai-they-are-fronting-apis/ Thu, 24 Jul 2025 00:36:29 +0000 https://apicontext.com/?p=42942 Why the Model Context Protocol (MCP) is a pragmatic bridge, not a moonshot rewrite.]]>

Why the Model Context Protocol (MCP) is a pragmatic bridge, not a moonshot rewrite

We’re at that point in the AI hype cycle where two very different product strategies are getting lumped together: one aims to empower users to do more with AI, the other is about making the product smarter. Both are valid, but they’re not the same – and they lead to very different decisions about infrastructure.

For most enterprises, the smartest short-term strategy isn’t to rebuild. It’s to extend. That means fronting existing API investments – not rewriting them. It means deploying MCP servers that call existing APIs as-is, adding just enough context and tooling to make them usable by LLMs, agentic runtimes, and next-gen orchestration frameworks.

Don’t start with a new backend. Start with a proxy on existing services.

That’s what MCP unlocks. Not a new protocol to replace your APIs – but a contextual layer that helps external agents understand what your APIs do, how they should be used, and where governance boundaries sit. It’s schema-aware, policy-aware, and sits outside your core estate.

From what we’re seeing, this is how most enterprises will roll out “AI infrastructure”: with lightweight, controlled MCP servers that expose just enough surface area to test real use cases – all without rewriting anything. Think of it like a safe staging ground for machine-first interaction.
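To make the "lightweight MCP server" idea concrete, here is a minimal sketch of an MCP-style facade. The endpoint, tool name, and schemas are entirely hypothetical, and a production server would use a real MCP SDK; the point is that the facade is just a tool manifest plus a forwarder that calls the existing API unchanged:

```python
import json
import urllib.request

# Hypothetical existing API the MCP server fronts -- nothing upstream is rewritten.
UPSTREAM_BASE = "https://api.example.com"

# Tool manifest: just enough context for an agent to understand the API.
TOOLS = {
    "get_account_balance": {
        "description": "Return the current balance for an account.",
        "input_schema": {
            "type": "object",
            "properties": {"account_id": {"type": "string"}},
            "required": ["account_id"],
        },
        # Maps the tool onto the unchanged upstream endpoint.
        "upstream": {"method": "GET", "path": "/v1/accounts/{account_id}/balance"},
    }
}

def list_tools():
    """What an agent sees: names, descriptions, and input schemas only."""
    return [
        {"name": name, "description": t["description"], "input_schema": t["input_schema"]}
        for name, t in TOOLS.items()
    ]

def call_tool(name, arguments):
    """Forward a tool call to the existing API as-is."""
    tool = TOOLS[name]
    path = tool["upstream"]["path"].format(**arguments)
    req = urllib.request.Request(UPSTREAM_BASE + path, method=tool["upstream"]["method"])
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Note that `call_tool` contains no business logic: the context and governance boundaries live in the manifest, while the API behind it stays exactly as it was.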

That has real implications for the AI infrastructure stack.

  1. Monitoring shifts outward — we’re now being asked to monitor not just the APIs, but the MCP server and its tooling interface.
  2. Budgets are forming — MCP tools will be scoped, defined, and funded as standalone line items.
  3. API sprawl gets messier — a proxy on a proxy on a legacy endpoint isn’t neat, but it’s what works today.

And most importantly: observability needs to shift left of the LLM.

We’re already testing this with customers. Monitoring MCP servers, validating tool interfaces, and mapping performance across the full flow: from the client, to the MCP, to the target API. If that sounds messy, that’s because it is. But it’s also where we see the next 12–18 months of enterprise AI adoption going.

There’s no AI without APIs. And now, there’s no enterprise AI without context.


Get the White Paper

Enterprise API Readiness in the Era of Agentic AI offers a comprehensive playbook to:

  • Recognize and mitigate the risks agentic clients introduce
  • Build future-facing infrastructure that supports scalable AI use
  • Ensure your APIs remain secure, governable, and easy to consume by autonomous systems

This is a transformation in how digital systems interact—and your APIs are at the center. 

So how do you prepare your APIs for this new framework? Download our white paper on Enterprise API Readiness in the Era of Agentic AI now!


]]>
Your APIs Aren’t Ready for Agentic AI — And That’s a Problem https://apicontext.com/your-apis-arent-ready-for-agentic-ai-and-thats-a-problem/ Tue, 10 Jun 2025 07:08:39 +0000 https://apicontext.com/?p=42927 Autonomous agents are already probing your endpoints. Are your systems built to handle them?]]>

Autonomous agents are already probing your endpoints. Are your systems built to handle them?

As generative AI evolves from simple chat interfaces into autonomous systems capable of decision-making and orchestration, enterprise APIs are entering a new phase of exposure. The way these agents consume APIs is fundamentally different—and many of the assumptions behind current API design, governance, and monitoring no longer hold.

To help enterprise teams prepare, we are sharing a new white paper: 
Enterprise API Readiness in the Era of Agentic AI.
It’s a practical guide for platform owners who need to ensure their APIs remain secure, scalable, and intelligible in the age of AI-driven automation.

Why Traditional API Strategies Break Down

Agentic AI systems have novel usage patterns that challenge every layer of your API infrastructure. They chain calls in rapid succession, operate continuously without user oversight, and interact with your systems based solely on what your documentation and schemas describe, often with no fallback logic when things go wrong.

That means subtle issues like spec drift, inconsistent error handling, or overly tight rate limits don’t just degrade performance, they introduce systemic failures. Unlike human developers, these agents won’t ask for help or adapt gracefully. If your APIs aren’t designed to handle this kind of consumption, the result isn’t slower adoption – it’s being bypassed entirely.

A Playbook for API Readiness

In the report, we explore the architectural and operational shifts needed to support this new wave of automation. We examine why most enterprise APIs struggle when faced with AI-driven consumption. This includes a look at the compounding risks of outdated specifications, rigid authentication flows, and brittle concurrency limits.

From there, we outline a set of practices for “agent-aware” API design. These include rethinking rate limiting policies to account for parallel requests and continuous polling, and building more expressive, up-to-date OpenAPI specification definitions that help agents parse expected behaviors accurately. You’ll also learn how to adapt your observability tooling to detect and respond to AI-specific usage patterns, which are often far less predictable than human behavior.
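The burst-tolerant rate limiting described above can be sketched with a classic token bucket: a short chain of parallel agent calls draws the bucket down, while the refill rate caps sustained load. This is an illustrative sketch, not a policy recommendation, and the capacity and refill values are placeholders:

```python
import time

class TokenBucket:
    """Burst-tolerant rate limiter for agent-style traffic.

    A burst of parallel calls can spend accumulated tokens at once,
    while the steady refill rate bounds sustained request volume.
    Capacity and refill values here are illustrative only.
    """

    def __init__(self, capacity=20, refill_per_sec=5, clock=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.clock = clock
        self.last = clock()

    def allow(self, cost=1):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

The injectable `clock` parameter keeps the policy testable; in production the same object would sit in front of the endpoint, keyed per agent identity rather than globally.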

We also walk through how the emerging Model Context Protocol (MCP) standard can help API owners maintain control and enforce policy without compromising scale or security. This is especially relevant for organizations anticipating a high volume of AI-driven integrations, whether through internal automation or third-party applications.

And we provide a readiness checklist to help teams evaluate their current state. It covers specification hygiene, identity and access control, runtime safety mechanisms, and guidance for future-proofing APIs for continuous, autonomous consumption.

The Agents are Coming!

The rise of AI agents isn’t theoretical. Early-stage systems are already live in production environments, and the pressure they place on backend APIs is real. Whether you’re dealing with internal orchestrators or external LLM-based integrations, these clients introduce new expectations for clarity, resilience, and rate tolerance.
If your APIs can’t meet these expectations, they may become bottlenecks—or worse, points of failure—in automated workflows. That’s not just a technical problem. It’s a strategic one.

Get the White Paper

Enterprise API Readiness in the Era of Agentic AI offers a comprehensive playbook to:

  • Recognize and mitigate the risks agentic clients introduce
  • Build future-facing infrastructure that supports scalable AI use
  • Ensure your APIs remain secure, governable, and easy to consume by autonomous systems

This is a transformation in how digital systems interact—and your APIs are at the center.
Download the full white paper now

]]>
Navigating the Future of APIs: A Conversation on Drift, Documentation, and Open Standards https://apicontext.com/navigating-the-future-of-apis-a-conversation-on-drift-documentation-and-open-standards/ Tue, 14 Jan 2025 20:43:09 +0000 https://apicontext.com/?p=42847 I recently joined the Tapas & Pretzels Podcast from adorsys to discuss the critical challenges and opportunities shaping API ecosystems today. It was a thought-provoking conversation with industry experts, and I’d love for you to check it out here. In this post, I’ll dive into some of the topics we unpacked—the challenges, the breakthroughs, and […]]]>

I recently joined the Tapas & Pretzels Podcast from adorsys to discuss the critical challenges and opportunities shaping API ecosystems today. It was a thought-provoking conversation with industry experts, and I’d love for you to check it out here.

In this post, I’ll dive into some of the topics we unpacked—the challenges, the breakthroughs, and the opportunities that make APIs a fascinating subject. Whether you’re grappling with API drift or rethinking your governance strategy, hopefully these insights will help you navigate the complex API landscape in 2025 and beyond.

Here’s what we discussed:

1. Tackling API Drift

API drift occurs when implementations diverge from their original specifications over time, creating issues like broken integrations, technical debt, and reduced trust. One of the podcast participants asked a compelling question: “How can organizations detect API drift early enough to prevent downstream impacts?” This sparked a discussion on proactive strategies like leveraging automated testing and monitoring tools. As APIs grow more complex, managing drift becomes essential to maintaining reliability and fostering long-term developer satisfaction. It’s worth noting that new banking regulations add a compliance dimension to this topic as well.
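The automated drift checks discussed here reduce to a simple idea: compare what a live endpoint actually returns against what the published spec says it should return. Here is a minimal sketch (field names and the flat type-map representation are hypothetical; real tooling would walk a full OpenAPI schema):

```python
def detect_drift(spec_fields, response):
    """Compare a live API response against its documented fields.

    spec_fields: {field_name: expected_python_type} derived from the spec.
    response:    parsed JSON body from a synthetic monitoring call.
    Returns a list of human-readable findings; an empty list means no drift.
    """
    findings = []
    for field, expected_type in spec_fields.items():
        if field not in response:
            findings.append(f"missing documented field: {field}")
        elif not isinstance(response[field], expected_type):
            findings.append(
                f"type drift on {field}: expected {expected_type.__name__}, "
                f"got {type(response[field]).__name__}"
            )
    for field in response:
        if field not in spec_fields:
            findings.append(f"undocumented field returned: {field}")
    return findings
```

Run against scheduled synthetic traffic, a non-empty findings list becomes an early-warning alert, catching drift before a downstream integration breaks.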

2. The Documentation Dilemma

Keeping API documentation accurate and up-to-date remains one of the biggest challenges for organizations. During the podcast, I shared a personal anecdote about a developer I met at a conference who had spent an entire week debugging an issue caused by outdated API documentation. That frustration could have been avoided with better practices, like integrating documentation updates into CI/CD pipelines or using tools for automated updates. In 2025, with developers demanding seamless experiences, documentation quality is no longer optional—it’s a competitive advantage, and, as noted in point 1, it can also be a regulatory requirement.

3. The Case for Strong API Governance

Striking the right balance between governance and innovation was another key topic. Governance frameworks ensure consistency, security, and compliance, but overly rigid systems can stifle creativity and generate friction between engineering and product teams. We shared examples of lightweight governance models that provide teams with tools and templates to self-govern while adhering to organizational policies. In today’s fast-paced environment, flexible governance is the key to scaling API ecosystems effectively.

4. Embracing Open Standards

Open standards like OpenAPI, OAuth, and OpenTelemetry are reshaping how APIs are built, consumed, and monitored. During the podcast, I reflected on my experience leading teams that struggled with proprietary systems in the past, where a lack of standards created bottlenecks and stifled innovation. These standards drive interoperability and enable developers to collaborate more effectively while avoiding vendor lock-in—a lesson I’ve seen play out time and again in my career. In 2025, open standards are more than a best practice; they’re a necessity for organizations aiming to stay competitive in an interconnected world.

5. The Role of AI in API Management

AI’s potential to revolutionize API management was a recurring theme. From automating documentation updates to detecting anomalies in API performance, AI tools are becoming invaluable. As one participant aptly noted during the podcast, “AI can handle the repetitive tasks, but the real magic happens when humans step in to interpret and act on the data.” We also highlighted the importance of pairing AI with human oversight to address nuanced issues. As APIs underpin more AI-driven platforms, their management must become smarter and more adaptive.

Why This Matters

The API landscape continues to evolve at an unprecedented pace. As organizations integrate APIs into mission-critical systems, challenges like drift, poor documentation, and inconsistent governance can no longer be ignored. Moreover, open standards and AI tools are unlocking new possibilities, making APIs more scalable, secure, and developer-friendly.

At APIContext, we’re committed to helping organizations tackle these challenges head-on. Our tooling is designed to support robust API ecosystems, enabling innovation while ensuring reliability and trust.

Please dive deeper into these topics by listening to the full podcast episode. It’s packed with practical insights and perspectives that I think will help you navigate the complexities of API management in 2025 and beyond. Listen now.

]]>
The State of Open Banking APIs: UK’s Progress and Challenges in 2023-2024 https://apicontext.com/the-state-of-open-banking-apis-uks-progress-and-challenges-in-2023-2024/ Mon, 13 Jan 2025 01:20:07 +0000 https://apicontext.com/?p=42823 As Open Banking continues to redefine financial services worldwide, the UK remains at the forefront of this revolution. Our 2023-2024 UK Open Banking API Performance Report, conducted in partnership with Finextra, provides critical insights into how well Open Banking APIs are performing, where improvements are being made, and what challenges remain. This report, now in […]]]>

As Open Banking continues to redefine financial services worldwide, the UK remains at the forefront of this revolution. Our 2023-2024 UK Open Banking API Performance Report, conducted in partnership with Finextra, provides critical insights into how well Open Banking APIs are performing, where improvements are being made, and what challenges remain. This report, now in its fourth annual iteration, draws on over 8 million API test calls made to 29 UK banks, including the largest CMA9 banks, neobanks, and traditional financial institutions.

The findings not only highlight the UK’s continued leadership in Open Banking but also offer valuable lessons for markets globally as Open Finance initiatives take root.

Why It’s Important to Track Open Banking API Performance

Open Banking APIs are the backbone of a modern, interconnected financial ecosystem. They enable secure data sharing, foster innovation, and power new customer-centric financial services. From seamless payment processing to personalized financial management tools, these APIs underpin digital services that users increasingly rely on.

But with great potential comes great responsibility. Poor-performing APIs can cost millions in additional engineering resources, create frustrating customer experiences, and jeopardize the UK’s competitive edge. By rigorously monitoring these APIs, the report helps identify areas for improvement while benchmarking progress year over year.

Highlights from the 2023-2024 Report

This year’s report reveals both encouraging progress and areas of concern:

  • CMA9 Banks Are Closing the Gap: The UK’s largest banks have made noticeable strides, with improvements in reliability and speed that bring them closer to the performance levels of neobanks. This is a significant change from previous years, where neobanks dominated.
  • Neobanks Show Signs of Strain: While neobanks like Tide and Monzo continue to lead in availability and speed, their performance has plateaued slightly, raising questions about how these digital-first players will maintain their edge amid increasing competition.
  • Azure Cloud’s Decline Continues: For the third consecutive year, APIs hosted on Microsoft Azure lagged behind competitors like AWS and IBM. With DNS lookup times increasing by 80% compared to two years ago, Azure remains a bottleneck for financial applications.
  • Traditional Banks Improve but Lag: Although their performance has moved from the “Yellow Zone” to the “Green Zone” in our CASC scoring system, traditional banks remain slower and less reliable than their digital and CMA9 counterparts.

These findings underscore the critical role infrastructure plays in Open Banking success and the need for continued investment in modernization.
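The Green/Yellow zone language above comes from a traffic-light rollup of availability and latency. As a rough illustration only (these thresholds are placeholders, not APIContext's actual CASC formula), such a rollup might look like:

```python
def zone_score(results, max_green_latency_ms=500, min_green_availability=0.995):
    """Roll up raw API check results into a traffic-light zone.

    results: list of (success: bool, latency_ms: float) tuples.
    Thresholds are illustrative placeholders, not the real CASC weights.
    """
    total = len(results)
    success_latencies = [latency for ok, latency in results if ok]
    availability = len(success_latencies) / total if total else 0.0
    avg_latency = (sum(success_latencies) / len(success_latencies)
                   if success_latencies else float("inf"))
    if availability >= min_green_availability and avg_latency <= max_green_latency_ms:
        return "green"
    if availability >= 0.99:
        return "yellow"
    return "red"
```

A scheme like this is what lets a bank's movement from "Yellow Zone" to "Green Zone" be stated as a single comparable fact year over year, even though it summarizes millions of individual test calls.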

Looking Back: Key Trends Over the Years

Over the past three years, our reports have captured the evolution of Open Banking in the UK:

  • Neobanks’ Dominance: Starting as the clear leaders, neobanks established themselves as performance benchmarks for speed and reliability. However, this year suggests they may face challenges sustaining their edge.
  • CMA9’s Resilience: The largest banks have steadily improved, reflecting their investments in digital transformation and infrastructure.
  • Cloud Wars: Year after year, cloud provider performance has been a major determinant of API latency. Azure’s decline, in particular, has been a persistent issue, while AWS and IBM consistently outperform.

Why This Matters Globally

The UK’s Open Banking ecosystem serves as a template for markets in Europe, Asia, and the Americas. As countries adopt Open Finance frameworks, the UK’s lessons—both successes and challenges—offer a roadmap for building reliable, scalable, and user-centric financial ecosystems. With the UK’s leadership now under pressure, maintaining its position will require a continued focus on performance, innovation, and regulation.

Download the Full Report

The full 2023-2024 UK Open Banking API Performance Report is now available on the Finextra website. Whether you’re a financial institution, fintech, or developer, this report provides actionable insights to help you navigate the evolving Open Banking landscape.

Download the Full Report Here

]]>
There’s No AI Without APIs- Why Third-Party Monitoring is Critical https://apicontext.com/theres-no-ai-without-apis-why-third-party-monitoring-is-critical/ Thu, 19 Dec 2024 22:37:16 +0000 https://apicontext.com/?p=42799 APIs are the invisible threads weaving the fabric of today’s digital world. They power interactions between platforms, enable real-time communication, and make it possible for enterprises to innovate at speed. But as we increasingly rely on AI-driven solutions, the role of APIs becomes even more indispensable—and often underestimated. AI may be the face of innovation, […]]]>

APIs are the invisible threads weaving the fabric of today’s digital world. They power interactions between platforms, enable real-time communication, and make it possible for enterprises to innovate at speed. But as we increasingly rely on AI-driven solutions, the role of APIs becomes even more indispensable—and often underestimated.

AI may be the face of innovation, but APIs are its backbone. Whether it’s accessing a language model, integrating payment systems, or connecting with CRMs, the success of AI depends on a complex web of APIs. Yet, organizations often overlook a critical element: monitoring third-party APIs.

The Hidden Risks in AI’s Supply Chain

AI systems thrive on data, and APIs are the conduits delivering that data. From identity providers (IDPs) to third-party integrations with platforms like ServiceNow or Workday, APIs form the connective tissue of every modern application. But this reliance creates vulnerabilities.

Third-party APIs sit outside the control of the organizations that depend on them. This makes them harder to monitor and even harder to secure. If a third-party API underperforms, goes down, or exposes sensitive data, it can compromise the entire system.

For AI-driven solutions, these risks are magnified. An unmonitored API feeding inaccurate or delayed data into an AI model can skew results, disrupt workflows, or worse—introduce compliance issues that ripple through an organization.

Bringing APIs Into Focus with APIContext

At APIContext, we understand that you can’t fix what you can’t see. That’s why we specialize in proactive API monitoring, focusing on known-knowns—APIs where tests are configured in advance to simulate real-world interactions.

Our platform doesn’t just monitor performance; it offers deep visibility into API behavior. We capture every request, response, and header, creating a rich telemetry layer that empowers enterprises to identify issues before they impact users.
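As a simplified sketch of that telemetry layer (the record shape is hypothetical, and the real platform captures far more), a single monitored synthetic call boils down to wrapping the request so that headers, status, body, latency, and even failures all land in one structured record:

```python
import time
import urllib.request

def monitored_call(url, headers=None):
    """Make a synthetic API call and capture request/response telemetry.

    Returns a single structured record; failures are captured as
    telemetry rather than raised, so every check produces data.
    """
    headers = headers or {}
    record = {"url": url, "request_headers": dict(headers)}
    start = time.monotonic()
    try:
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            record["status"] = resp.status
            record["response_headers"] = dict(resp.headers)
            record["body"] = resp.read()
    except Exception as exc:  # capture failures as telemetry too
        record["error"] = repr(exc)
    record["latency_ms"] = (time.monotonic() - start) * 1000
    return record
```

Because the test is configured in advance (a known-known), every field in the record can be compared against an expected baseline, which is what turns raw capture into actionable insight.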

Through integrations with tools like Akamai API Security, we’ve extended this capability to third-party APIs, turning synthetic traffic into actionable insights. This allows businesses to treat external dependencies with the same rigor as internal APIs—ensuring a secure, resilient, and compliant API ecosystem.

There’s No AI Without APIs

In a world where AI solutions are becoming central to every industry, APIs are the unsung heroes enabling their success. But with this growing dependence comes a responsibility: to monitor every API, regardless of ownership.

The truth is simple: AI cannot function without APIs, and those APIs must be reliable, secure, and compliant. Monitoring third-party APIs isn’t just a technical necessity; it’s a strategic imperative.

At APIContext, we’re committed to providing the visibility you need to secure your API supply chain, from the core to the edge. It’s time to embrace the future of API monitoring and ensure that the APIs powering your AI are as intelligent as the systems they support.

]]>
Money20/20 Preview: The Business of APIs – Compliance Comes to Technology https://apicontext.com/money20-20-preview-the-business-of-apis-compliance-comes-to-technology/ Fri, 25 Oct 2024 13:01:00 +0000 https://apicontext.com/?p=42517 APIdays New York was a great event and gave me the opportunity to talk at length about an emerging challenge and topic – API Governance. API evangelist Kin Lane has been focused on this for a while, and rightly so, but what does it really mean and why is it now so important? The numbers […]]]>

Pulling everything together, we have a drive for Open Banking from consumers; we have proposed and expected legislation for Open Banking in the USA from the Consumer Financial Protection Bureau (CFPB); and we have an API standard from FDX, which we expect will serve as a standards-setting body.

But what does it mean when the regulatory rubber meets the engineering road?

Why Having a Compliance Tool Really Matters

Let’s start with the CFPB’s 1033 rule. This regulation sets the standards for how APIs should operate, from their overall quality to the security of the services they provide. More importantly, it defines what a standards-setting body (SSB) requires, and in the US, we’re probably looking at the FDX (Financial Data Exchange) standard. So, if you’re dealing with financial APIs, this is something you can’t afford to ignore.

Now, if the US regulatory approach follows its usual ‘lighter touch,’ we’re not talking about a full-on Brazilian-style top-down approach. Instead, certification might happen once a year or with each major release. Sounds manageable, right? Well, not exactly. Certification is one thing, but that doesn’t mean you’re in the clear for the rest of the year. Being compliant at one point in time doesn’t mean you’re compliant forever. Continuous monitoring and validation are key because compliance isn’t static—it evolves.

The Hidden Costs of Compliance: More Than You Think

Here’s where things get tricky—and expensive. Compliance isn’t just about getting certified; it’s about staying compliant. Monthly reporting is a huge burden, especially if you look at the experience of the UK sector. Reporting isn’t just difficult; it’s costly and time-consuming. If you think you can handle all this manually, brace yourself for a shock. The 1033 rule estimates that keeping up with the requirements could cost businesses tens of thousands of dollars per year—and that’s probably a lowball estimate.

And let’s not forget that you’re juggling three major things: keeping things running, staying compliant, and meeting performance requirements. That’s a lot to manage without a solid strategy or the right tools in place.

Finally, the CFPB expects monthly reports on the availability and performance of these services, another item that will impose costs banks don’t currently have to think about.
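To give a feel for what that monthly reporting burden looks like in practice, here is a minimal sketch of the kind of MI rollup involved. The record shape and field names are hypothetical; a real report would also need per-endpoint SLA comparisons and evidence retention:

```python
from collections import defaultdict

def monthly_report(checks):
    """Aggregate raw API check results into per-endpoint monthly MI figures.

    checks: list of dicts like {"endpoint": str, "ok": bool, "latency_ms": float}.
    Returns {endpoint: {"calls", "availability_pct", "avg_latency_ms"}}.
    """
    grouped = defaultdict(list)
    for check in checks:
        grouped[check["endpoint"]].append(check)

    report = {}
    for endpoint, rows in grouped.items():
        ok_rows = [r for r in rows if r["ok"]]
        report[endpoint] = {
            "calls": len(rows),
            "availability_pct": round(100.0 * len(ok_rows) / len(rows), 2),
            # Latency is averaged over successful calls only.
            "avg_latency_ms": round(
                sum(r["latency_ms"] for r in ok_rows) / len(ok_rows), 1
            ) if ok_rows else None,
        }
    return report
```

Even this toy version hints at the hidden work: the numbers are only as good as the underlying checks, which is why continuous synthetic monitoring, rather than manual sampling, is the realistic way to feed a monthly filing.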

How APIContext Helps Simplify the Chaos

At APIContext, we’ve taken what we’ve learned from working in UK and European Open Banking and built a solution that takes the headache out of API compliance. We understand the balancing act between staying compliant and keeping things running smoothly, and that’s why we’ve designed an automated solution that takes care of the heavy lifting for you.

Our platform provides continuous, automated validation against the latest standards and specifications, so you’re not just compliant on the day of certification—you’re compliant every day. And, because we know reporting can be a huge time sink, we automate the delivery of Management Information (MI) reports. That means less manual work for your team, fewer opportunities for error, and a whole lot less stress.

In an ever-changing regulatory landscape, having a compliance tool isn’t just a ‘nice-to-have’—it’s an absolute must. As the 1033 rule and similar regulations continue to evolve, your business needs to keep up, and APIContext is here to help. We’re not just about checking boxes; we’re about making sure you can innovate while staying on the right side of compliance every step of the way.

 

 

David O’Neill is the COO of APIContext and has been working in the measurement of performance and compliance of banking APIs since the start of Open Banking. He is speaking on the panel “Are you ready for Open Banking?” at Money20/20 at 9:45 a.m. on Sunday, October 27, 2024, in Casanova 501-503.

Connect on LinkedIn or email to set up time to meet.

]]>