Inspiration

At the YC AI Agents Hackathon, a conversation with the Browser Use CTO (from the 70k+ star project) drove home how hard authentication still is for agents. I initially duct-taped a solution by pairing Browser Use with Agent Mail to sneak past 2FA, but it only worked when captchas were absent and wasn’t a sustainable bridge for real agent access. That led to Auth-Agent: instead of hacking around human-first auth, we’re giving AI agents their own OIDC/OAuth 2.1 infrastructure that client platforms can plug into directly.

What It Does

Think of it as “Sign in with Google” reimagined for AI agents. Our flow follows OAuth 2.1, but adapts the consent and token lifecycle to agent-to-platform interactions. We also screen for genuine AI agents by verifying an agent ID and agent secret (the agent’s equivalent of a username and password) and enforcing a prove-you’re-an-agent challenge so automated scripts can’t impersonate real agents.
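Because the flow follows OAuth 2.1, PKCE is a mandatory piece of the exchange. Here is a minimal sketch of generating a PKCE verifier/challenge pair per RFC 7636, assuming the standard S256 method; this is illustrative, not Auth-Agent's actual SDK code, and the agent-credential fields the real token request carries are omitted.

```typescript
import { createHash, randomBytes } from "node:crypto";

// RFC 7636 base64url: standard base64 with URL-safe characters, no padding.
function base64url(buf: Buffer): string {
  return buf
    .toString("base64")
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

// The agent (acting as the OAuth client) generates a random verifier,
// sends its SHA-256 challenge with the authorization request, and later
// presents the raw verifier at the token endpoint to prove it started
// the flow.
function makePkcePair(): { verifier: string; challenge: string } {
  const verifier = base64url(randomBytes(32)); // 43 URL-safe characters
  const challenge = base64url(createHash("sha256").update(verifier).digest());
  return { verifier, challenge };
}
```

In an agent context the same exchange would also carry the agent ID and agent secret, but since those field names aren't specified in this write-up, the sketch sticks to the standard PKCE half.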

How We Built It

We engineered the OAuth 2.1 stack from the ground up, shifting human consent into the onboarding moment at auth-agent.com so live flows stay zero-interrupt. The service runs on TypeScript (Hono) deployed to Cloudflare Workers with Supabase as the data layer. To ensure only actual agents get through, we use Cloudflare Workers’ native AI model for dynamic questions over a back channel—plus a model self-identification check that must match the dev-submitted details. Agent IDs and secrets are issued after developers register on auth-agent.com (currently https://auth-agent-front-web.vercel.app/), which also serves our Next.js + shadcn + Tailwind-powered frontend. With the core protocol solidified, we shipped SDKs for developers and for client sites, now published on both npm and PyPI.
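To illustrate the dynamic CORS handling the Workers deployment has to juggle, here is a sketch of a per-request origin check that grants CORS headers only to registered client origins. The allowlist contents and the specific headers are assumptions for illustration, not the service's real configuration.

```typescript
// Registered client origins, looked up per request rather than using a
// wildcard. In the real service this would come from the client registry;
// the second entry is a hypothetical partner site.
const registeredOrigins = new Set([
  "https://auth-agent-front-web.vercel.app",
  "https://example-client.app",
]);

function corsHeaders(requestOrigin: string | null): Record<string, string> {
  if (requestOrigin && registeredOrigins.has(requestOrigin)) {
    return {
      "Access-Control-Allow-Origin": requestOrigin,
      "Access-Control-Allow-Credentials": "true",
      // Responses differ by origin, so caches must key on it.
      "Vary": "Origin",
    };
  }
  return {}; // unknown origins get no CORS grant at all
}
```

Echoing the request's own origin (instead of `*`) is what allows credentialed requests while still restricting access to registered clients.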


Challenges we ran into

Rebuilt the entire OAuth 2.1 journey from scratch (front-channel, back-channel, PKCE, the implicit human expectations) to adapt it for agents, which meant absorbing every role (resource owner, client, authorization server) and weaving it into a brand-new flow.

Tightened the server to run smoothly inside Cloudflare Workers while juggling dynamic CORS and scheduled cleanup (auth-server/src/index.ts:1).

Coaxed Cloudflare’s AI into reliable two-line challenges with hardened parsing and graceful fallbacks when formatting slips (auth-server/src/lib/cloudflare-ai.ts:17).

Balanced the agent-proofing layer so real agents pass instantly but scripted impersonators stall out (auth-server/src/lib/cloudflare-ai.ts:159).
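The "hardened parsing with graceful fallbacks" can be sketched as follows: the model is prompted for exactly two lines (a question and its expected answer), but LLM output drifts, so the parser normalizes labels and falls back to a deterministic challenge when the format slips. The prompt format and the fallback question are assumptions for illustration, not the actual contents of cloudflare-ai.ts.

```typescript
interface Challenge {
  question: string;
  answer: string;
}

// Deterministic fallback used when the model's output can't be parsed.
const FALLBACK: Challenge = {
  question: "Reverse the string 'agent'",
  answer: "tnega",
};

function parseChallenge(raw: string): Challenge {
  const lines = raw
    .split("\n")
    // Strip optional "Question:"/"Answer:" labels the model may add.
    .map((l) => l.replace(/^(question|answer)\s*[:\-]\s*/i, "").trim())
    .filter((l) => l.length > 0);
  if (lines.length < 2) return FALLBACK; // formatting slipped: fall back
  return { question: lines[0], answer: lines[1].toLowerCase() };
}
```

Lower-casing the expected answer is one small example of the output shaping mentioned below under lessons learned: without normalization, exact-match validation fails on cosmetic drift.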

Accomplishments that we're proud of

Delivered a full OAuth 2.1/OIDC provider for agents, from authorize to userinfo, running globally in Hono on Workers with adaptive CORS and automated session cleanup (auth-server/src/index.ts:1).

Built a dynamic verification loop that marries Cloudflare AI prompts with structured answer validation and deterministic fallbacks (auth-server/src/lib/cloudflare-ai.ts:17).

Shipped ready-to-use SDKs and an end-to-end Python test harness so developers can exercise the entire flow without touching raw HTTP (examples/test-agent-auth.py:1).

Packaged a branded “Sign in with Auth-Agent” experience plus a registration funnel, letting partner sites integrate agent login with almost no friction.

What we learned

Agent consent has to move upstream into onboarding; otherwise OAuth’s “prompt for user approval” assumptions break the automated flow.

Edge runtimes force disciplined modularity—each shared helper doubles as a security boundary and a latency win.

Even strong LLMs need strict output shaping for security prompts; without sanitizers the answers drift into narrative.

Trust hinges on parity: SDKs, docs, and verification tooling all have to mirror the exact flow the Workers code enforces.

What's next for Auth-Agent

Focus on hardening the stack for production—more observability, resilience, and guardrails so it’s battle-ready, not just a functional demo.

Expand the challenge catalog and add adaptive difficulty tied to risk scoring so high-value flows get extra scrutiny.

Launch a self-serve dashboard for credential rotation, client management, and richer audit trails.

Lock in flagship integrations where “Sign in with Auth-Agent” becomes the default agent login, turning today’s milestone into real-world adoption.

Built With

TypeScript, Hono, Cloudflare Workers (including Workers AI), Supabase, Next.js, shadcn, Tailwind CSS
