About the Project

The Problem

The rapid rise of fully autonomous AI assistants has demonstrated how powerful agent-based systems can be. However, many of these tools operate with broad, often unrestricted access to a user’s local machine. While this enables flexibility and automation, it also introduces real risks: runaway processes, unintended file modifications, excessive API spending, and limited visibility into what the agent is actually doing.

Additionally, many of these systems require complex setup and deep technical knowledge, making them inaccessible to non-technical users. We identified a gap between powerful autonomy and practical, controlled usability.

Our Solution

Agent Locker is a sandbox for AI agents.

It runs each agent inside an isolated Docker container, limiting what the agent can access and modify on your local machine. Instead of giving an agent direct access to your host environment, we create a contained execution space where its capabilities are explicitly defined before it runs.

Each agent operates inside a controlled runtime with:

A defined filesystem

A constrained tool surface (fs.list, fs.read, fs.write, shell.exec)

Explicit lifecycle management (create, pause, resume, delete)
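A constrained tool surface like this is typically enforced with an explicit allowlist checked before any call reaches the container. A minimal sketch (the function and type names here are assumptions, not Agent Locker's actual implementation):

```typescript
type ToolCall = { tool: string; args: Record<string, unknown> };

// The four tools exposed to the agent; anything else is rejected.
const ALLOWED_TOOLS: ReadonlySet<string> = new Set([
  "fs.list",
  "fs.read",
  "fs.write",
  "shell.exec",
]);

// Reject unknown tools with a normal error result, so the rejection can be
// reported back to the agent as a tool response instead of crashing the pipeline.
function validateToolCall(call: ToolCall): { ok: true } | { ok: false; error: string } {
  if (!ALLOWED_TOOLS.has(call.tool)) {
    return { ok: false, error: `tool "${call.tool}" is not allowed` };
  }
  return { ok: true };
}
```

Keeping the check on the server side (rather than inside the container) means the boundary holds even if the agent's in-container code misbehaves.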

This architecture adds a practical oversight and containment layer: all tool invocations are routed through the application server, actions can be logged and inspected, and resource boundaries are enforced at the container level.

While this does not make the system completely secure, it meaningfully reduces risk exposure and provides operational visibility into what an agent is doing.

How We Built It

Agent Locker is built as an Electron desktop application backed by a Node.js/Express server that programmatically orchestrates Docker containers.

Docker provides the isolation layer for each agent instance.

Supabase handles authentication and session management.

Local models (Ollama) or external providers (OpenAI, Anthropic) can be selected dynamically.

A containerized orchestration agent runs inside each sandbox and executes tools within defined boundaries.

Speech integration enables conversational, hands-free interaction.

When a user creates a “locker,” the backend builds (if needed) and spins up a container with preconfigured tools. The chat interface communicates with that container through a controlled execution pipeline.
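A hypothetical sketch of how a locker request might map onto Docker's create-container options (the label key, naming scheme, and defaults are assumptions for illustration):

```typescript
// Translate a "create locker" request into container-creation options.
function buildCreateOptions(lockerName: string, image: string) {
  return {
    name: `locker-${lockerName}`,
    Image: image,
    // A label lets the server later list only the containers it manages.
    Labels: { "agent-locker.managed": "true" },
    // The container survives a stop, so the locker can be paused and resumed.
    HostConfig: { AutoRemove: false },
  };
}
```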

Challenges We Faced

One of the biggest challenges was making Docker orchestration seamless across operating systems. Ensuring compatibility across Windows, macOS, and Linux required handling platform-specific Docker socket configurations and container networking differences.
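The platform-specific part of that problem usually comes down to where the Docker daemon listens: a named pipe on Windows versus a Unix domain socket on macOS and Linux. A sketch of the check (the paths are Docker's conventional defaults):

```typescript
// Pick the default Docker endpoint for the current platform
// (pass in process.platform in a real Node app).
function dockerSocketPath(platform: string): string {
  return platform === "win32"
    ? "\\\\.\\pipe\\docker_engine" // Windows named pipe
    : "/var/run/docker.sock"; // Unix domain socket on macOS/Linux
}
```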

Packaging the Electron app for distribution while maintaining reliable local Docker integration was also complex.

Another technical challenge was designing persistent lockers — allowing containers to maintain state over time without sacrificing isolation, stability, or resource efficiency.
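One common way to get that persistence without giving up isolation is to mount a named Docker volume per locker at a fixed workspace path, so files there survive pause/resume while the rest of the filesystem stays ephemeral. A sketch under that assumption (the volume naming and mount path are illustrative, not Agent Locker's actual layout):

```typescript
// Build the volume bind for a locker's persistent workspace,
// using Docker's "volume:containerPath" bind form.
function buildPersistenceBinds(lockerId: string): string[] {
  return [`locker-data-${lockerId}:/workspace`];
}
```

Deleting the locker then means removing both the container and its named volume; pausing removes neither.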

What We Learned

Through building Agent Locker, we learned how to:

Automate Docker container lifecycle management

Structure secure tool invocation pipelines

Run and orchestrate local LLMs efficiently

Integrate authentication and session persistence

Bridge desktop applications with containerized execution environments

Most importantly, we developed a deeper understanding of how to balance AI autonomy with structured guardrails.

What’s Next

We plan to expand the tool ecosystem, improve monitoring and transparency features, and refine the overall user experience.

Our goal is to make Agent Locker the most accessible sandbox for AI agents — giving users powerful autonomy within a clearly defined, observable, and controlled execution environment.
