I build infrastructure for AI coding agents.
Currently working on AgentProxy, a transparent token-compression proxy that sits between your agent and the Anthropic/OpenAI API. It compresses tool results before they hit the context window, so your agent solves more problems and costs less to run.
- +40% more problems solved on SWE-bench-style tasks
- −25% token cost on real coding workloads
- Zero code changes: just prefix your command with `agentproxy run`
```shell
pip install agentproxy
agentproxy run claude
```

Pinned repositories:

- AgentProxy: Token compression proxy for LLM agents. +40% task success, −25% cost.
- cc-automode: Standalone re-implementation of Claude Code auto mode with a Docker benchmark suite.
- intro-to-agentic-security: An introduction to agentic AI security.
- CloudEval-YAML: Benchmarking LLMs for cloud config generation.
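The core idea behind the proxy can be sketched in a few lines. This is a minimal illustration, not AgentProxy's actual implementation or API: it elides the middle of a long tool result so only a bounded number of lines ever reach the model's context. All names here are hypothetical.

```python
# Hypothetical sketch of the token-compression idea: shrink verbose tool
# results before they are appended to the model's context window.
# Not AgentProxy's real API; names are illustrative only.

def compress_tool_result(text: str, max_lines: int = 20) -> str:
    """Keep the head and tail of a long tool result, eliding the middle."""
    lines = text.splitlines()
    if len(lines) <= max_lines:
        return text  # short results pass through untouched
    head = lines[: max_lines // 2]
    tail = lines[-(max_lines // 2):]
    elided = len(lines) - len(head) - len(tail)
    return "\n".join(head + [f"... [{elided} lines elided] ..."] + tail)

# Example: a 100-line build log shrinks to 21 lines (10 head + marker + 10 tail).
log = "\n".join(f"line {i}" for i in range(100))
compressed = compress_tool_result(log)
print(len(compressed.splitlines()))  # 21
```

In practice a proxy like this would apply smarter, content-aware compression (and only to tool-result messages), but head-and-tail truncation shows why token cost drops without touching the agent's code.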
