`codex-auth-helper` turns an existing local Codex auth session into a
`pydantic-ai` model.

It reads `~/.codex/auth.json`, refreshes access tokens when needed, builds a
custom `AsyncOpenAI` client for the Codex Responses endpoint, and returns a
ready-to-use `CodexResponsesModel`.
- Reads tokens from `~/.codex/auth.json`
- Derives `ChatGPT-Account-Id` from the auth file or token claims
- Refreshes expired access tokens with `https://auth.openai.com/oauth/token`
- Writes refreshed tokens back to the auth file
- Builds an OpenAI-compatible client pointed at `https://chatgpt.com/backend-api/codex`
- Returns a `pydantic-ai` responses model that already applies the Codex backend requirements
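Deriving an account id from token claims amounts to base64url-decoding the payload segment of the JWT. A stdlib-only sketch of that idea; the claim name `chatgpt_account_id` is an illustrative assumption here, not necessarily the exact field the helper reads:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload = token.split(".")[1]
    # JWTs use unpadded base64url; restore padding before decoding.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake token to demonstrate; a real access token would come
# from ~/.codex/auth.json. The claim name is hypothetical.
claims = {"chatgpt_account_id": "acct_123"}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_token = f"eyJhbGciOiJub25lIn0.{body}.sig"

print(decode_jwt_claims(fake_token)["chatgpt_account_id"])  # acct_123
```

Note that this decodes without verifying the signature; that is fine for reading a locally stored token's claims, but not for trusting tokens from elsewhere.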
The helper enforces two backend-specific behaviors for you:

- `openai_store=False`
- streamed responses, even when `pydantic-ai` calls the non-streamed `request()` path
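The second behavior means a plain, non-streamed call is served internally by a forced stream that gets drained and reassembled. A toy sketch of that aggregation pattern, using simplified stand-in events rather than the real Responses stream types:

```python
import asyncio
from dataclasses import dataclass

@dataclass
class TextDelta:
    # Simplified stand-in for a streamed output-text delta event.
    text: str

async def fake_stream():
    # Stands in for a Responses call forced to stream=True.
    for piece in ["Hello", ", ", "world"]:
        yield TextDelta(piece)

async def request_via_stream() -> str:
    # Drain the stream and reassemble one complete response, which is
    # what a wrapper must do when the backend only accepts streaming
    # but the caller used the non-streamed request() path.
    parts = []
    async for event in fake_stream():
        parts.append(event.text)
    return "".join(parts)

print(asyncio.run(request_via_stream()))  # Hello, world
```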
What it does not do:

- It does not log you into Codex
- It does not create `~/.codex/auth.json`
- It does not currently support `pydantic_ai.models.openai.OpenAIChatModel` or Chat Completions flows; `OpenAIChatModel` support is planned for a later release
- It does not replace `pydantic-ai`; it only provides a model/client factory
Install with:

```bash
uv pip install codex-auth-helper
```

You also need an existing Codex auth session on the same machine:
`~/.codex/auth.json`. If you have not logged in yet:

```bash
codex login
```

Quick start:

```python
from codex_auth_helper import create_codex_responses_model
from pydantic_ai import Agent

model = create_codex_responses_model("gpt-5")
agent = Agent(model, instructions="You are a helpful coding assistant.")

result = agent.run_sync("What's up?")
print(result.output)
```

If you want to read a different auth file, pass a custom config:
```python
from pathlib import Path

from codex_auth_helper import CodexAuthConfig, create_codex_responses_model

config = CodexAuthConfig(auth_path=Path("/tmp/codex-auth.json"))
model = create_codex_responses_model("gpt-5", config=config)
```

Additional `OpenAIResponsesModelSettings` can still be passed through. The helper
keeps `openai_store=False` unless you explicitly override the model after
construction.
```python
from codex_auth_helper import create_codex_responses_model

model = create_codex_responses_model(
    "gpt-5",
    settings={
        "openai_reasoning_summary": "concise",
    },
)
```

If you only want the authenticated OpenAI client, use `create_codex_async_openai(...)`:

```python
from codex_auth_helper import create_codex_async_openai

client = create_codex_async_openai()
```

This returns `CodexAsyncOpenAI`, a subclass of `openai.AsyncOpenAI`.
The full public API:

```python
from codex_auth_helper import (
    CodexAsyncOpenAI,
    CodexAuthConfig,
    CodexAuthState,
    CodexAuthStore,
    CodexResponsesModel,
    CodexTokenManager,
    create_codex_async_openai,
    create_codex_responses_model,
)
```

Typical failure modes:

| Error | Meaning |
| --- | --- |
| `Codex auth file was not found ...` | The machine is not logged into Codex yet. |
| `Codex auth file ... does not contain valid JSON` | The auth file is corrupt or partially written. |
| `ModelHTTPError ... Store must be set to false` | You are not using the helper-backed model instance. |
| `ModelHTTPError ... Stream must be set to true` | You are not using `CodexResponsesModel`. |
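The first two failure modes can be checked up front before constructing a model. A small preflight sketch that mirrors those checks; the function and messages here are hypothetical, not part of the package:

```python
import json
from pathlib import Path

def preflight(auth_path: Path = Path.home() / ".codex" / "auth.json") -> str:
    # Mirrors the first two failure modes: missing file, then invalid JSON.
    if not auth_path.exists():
        return "not logged in: run `codex login`"
    try:
        json.loads(auth_path.read_text())
    except json.JSONDecodeError:
        return "auth file is corrupt or partially written"
    return "ok"

print(preflight(Path("/nonexistent/auth.json")))  # not logged in: run `codex login`
```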
This package is intentionally small and focused:

- auth file parsing
- token refresh
- Codex-specific OpenAI client wiring
- `pydantic-ai` responses model factory