A zero-trust firewall for autonomous AI intent.
IntentGuard is a production-grade system that continuously monitors AI agents for intent drift, detecting when benign workflows evolve into fraud, social engineering, or policy-violating behavior—before damage occurs.
New here?
- First time? → GETTING_STARTED.md - Choose your learning path
- Just want to run it? → QUICKSTART.md - 5-minute setup guide
IntentGuard wraps your AI agents and:
- Intercepts tool calls before execution
- Infers intent using Google's Gemini API
- Detects drift by comparing current intent against baseline
- Enforces policies with allow/warn/block decisions
- Escalates to human review when needed
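The intercept-and-enforce loop can be sketched in a few lines of JavaScript. This is a minimal illustration only: `makeGuardedTool`, `evaluateToolCall`, and the `{ action, reason }` decision shape are hypothetical stand-ins, not IntentGuard's real API (the actual wrapper lives in `src/wrapper/`).

```javascript
// Sketch of the intercept → evaluate → enforce loop.
// All names here are illustrative, not IntentGuard's real API.
function makeGuardedTool(toolName, toolFn, evaluateToolCall) {
  return async function guarded(args) {
    // 1. Intercept: evaluate the call before executing it
    const decision = await evaluateToolCall(toolName, args);

    // 2. Enforce: block or warn based on the policy decision
    if (decision.action === 'block') {
      throw new Error(`Blocked by policy: ${decision.reason}`);
    }
    if (decision.action === 'warn') {
      console.warn(`Warning for ${toolName}: ${decision.reason}`);
    }

    // 3. Execute the underlying tool only if not blocked
    return toolFn(args);
  };
}
```

A blocked call never reaches the underlying tool, which is the core zero-trust property: execution is denied by default unless the evaluation allows it.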
Follow these steps to get IntentGuard running locally.
New to this?
- Start here: QUICKSTART.md - Step-by-step guide for getting started
```bash
npm install              # Install dependencies
cp .env.example .env     # Create config file
# Edit .env and add your GEMINI_API_KEY
npm start                # Start server (keep running)

# In another terminal:
npm run example          # Run example
# Or: python3 -m http.server 8000  # Then open http://localhost:8000
```

Make sure you have:

- Node.js 18+ installed (check with `node --version`)
- npm installed (comes with Node.js; check with `npm --version`)
- A Google Gemini API key (Get one here)
Open your terminal in the IntentGuard directory and run:
```bash
npm install
```

This installs all required packages. You should see a `node_modules/` folder created.

Expected output: a stream of package installation messages. Wait until it finishes (this may take 1-2 minutes).

Verify installation: after installing, run:

```bash
npm run verify
```

This checks that everything is set up correctly.
1. Copy the example environment file:

   ```bash
   cp .env.example .env
   ```

2. Open `.env` in a text editor (VS Code, nano, vim, or any other) and find this line:

   ```
   GEMINI_API_KEY=your_gemini_api_key_here
   ```

   Replace `your_gemini_api_key_here` with your actual API key from Google. Example:

   ```
   GEMINI_API_KEY=AIzaSyDvSycUiDL52RnWWR89HLdPTWeakh7yYi0
   ```

3. Save the file.

Important:

- Never commit your `.env` file to git (it's already in `.gitignore`)
- Keep your API key secret
- The `.env.example` file is just a template; your real key goes in `.env`
In your terminal, run:
```bash
npm start
```

You should see output like:

```
[2026-01-24 ...] [server] info: Storage initialized
[2026-01-24 ...] [server] info: IntentGuard server running on port 3000
```

Keep this terminal window open; the server needs to keep running.

Tip: if you want auto-reload during development, use `npm run dev` instead.
Open a new terminal window (keep the server running in the first one) and test:
```bash
curl http://localhost:3000/health
```

You should see:

```
{"status":"healthy","service":"IntentGuard","version":"1.0.0",...}
```

If you see this, the server is working!
Option A: Node.js Example
In a new terminal (server still running in the first one):
```bash
npm run example
```

You should see the example agent running and IntentGuard detecting intent drift.
Option B: Python Example
1. Install Python dependencies:

   ```bash
   cd examples
   pip install -r requirements.txt
   ```

2. Run the example:

   ```bash
   python3 python_example.py
   ```
1. Open a new terminal (server still running)

2. Start a web server to serve the frontend:

   ```bash
   # Option 1: Python
   python3 -m http.server 8000

   # Option 2: Node.js (if you have npx)
   npx serve .
   ```

3. Open your browser and go to http://localhost:8000

4. Scroll down to the "Demo" section and click "Run Simulation"

5. Watch IntentGuard detect intent drift in real time!
Solution: make sure you're in the IntentGuard directory and run `npm install` again.
Solution:

1. Test your `.env` file:

   ```bash
   npm run test-env
   ```

   This will tell you exactly what's wrong.

2. Check the `.env` file format:
   - Make sure the line is `GEMINI_API_KEY=your_key_here` (no spaces, no quotes)
   - The file should be in the project root (same folder as `package.json`)

3. Restart the server after making changes:
   - Stop the server (Ctrl+C)
   - Run `npm start` again

4. See TROUBLESHOOTING.md for detailed help
Solution:
- Change the port in `.env`: `PORT=3001`
- Or stop whatever is using port 3000
- Restart the server
Solution:
- Make sure the backend server is running (`npm start`)
- Check http://localhost:3000/health in your browser
- The demo will fall back to mock mode if the backend is unavailable
Solution:
- Check that the Gemini API key in `.env` is correct
- Make sure you have an internet connection (the Gemini API requires one)
- Check the server logs for detailed error messages
Solution:
```bash
# Create the data directory
mkdir -p data
```

Then restart the server.
Start server:

```bash
npm start
```

Verify setup:

```bash
npm run verify
```

Test environment variables:

```bash
npm run test-env
```

Run example:

```bash
npm run example
```

View web demo:

```bash
# Terminal 1: Start backend
npm start

# Terminal 2: Start web server
python3 -m http.server 8000

# Browser: Open http://localhost:8000
```

Check if the server is running:

- Visit http://localhost:3000/health
- You should see: `{"status":"healthy",...}`
Once you have it running:
- Read the code: check out the `src/` directory to see how it works
- Try the examples: modify `examples/` to test different scenarios
- Read the architecture: see ARCHITECTURE.md for a deep dive
- Integrate your agent: use the wrapper in your own projects
```javascript
import { IntentGuardWrapper } from './src/wrapper/intentguard-wrapper.js';

// Initialize wrapper
const wrapper = new IntentGuardWrapper({
  apiUrl: 'http://localhost:3000',
  agentName: 'my-agent',
  baselineIntent: 'assist users with customer support'
});
await wrapper.initialize();

// Wrap your tool functions
const safeSendEmail = wrapper.wrapTool('send_email', sendEmailFunction, {
  sensitivity: 'medium'
});

// Use normally - IntentGuard intercepts automatically
await safeSendEmail({ to: '[email protected]', subject: 'Hello' });
```

```python
from examples.python_example import IntentGuardWrapper

wrapper = IntentGuardWrapper(api_url="http://localhost:3000")
wrapper.initialize(
    agent_name="my-agent",
    baseline_intent="assist users with customer support"
)

# Wrap tools
safe_query = wrapper.wrap_tool("query_db", query_database, sensitivity="high")

# Use normally
result = safe_query(fields=["email", "name"], justification="support ticket")
```

See ARCHITECTURE.md for detailed architecture documentation.
High-level flow:
```
Agent → IntentGuard Wrapper → Backend API
                                   ↓
                    Gemini API (Intent Inference)
                                   ↓
                 Policy Engine (Allow/Warn/Block)
                                   ↓
                         Decision → Agent
```
Environment variables (see `.env.example`):

- `PORT` - Server port (default: 3000)
- `GEMINI_API_KEY` - Your Gemini API key (required)
- `DRIFT_THRESHOLD_ALLOW` - Drift score threshold for allow (default: 0.3)
- `DRIFT_THRESHOLD_WARN` - Drift score threshold for warn (default: 0.6)
- `DB_PATH` - SQLite database path (default: `./data/intentguard.db`)
- `LOG_LEVEL` - Logging level (default: `info`)
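The two drift thresholds partition the drift score into the three policy actions. A sketch of that mapping using the default values above (the function name is illustrative, and whether the boundaries are inclusive is an assumption; the real logic lives in `src/policy/`):

```javascript
// Map a drift score (0..1) to a policy action using the default
// thresholds from .env.example. Boundary handling is an assumption.
const DRIFT_THRESHOLD_ALLOW = 0.3;
const DRIFT_THRESHOLD_WARN = 0.6;

function decideAction(driftScore) {
  if (driftScore < DRIFT_THRESHOLD_ALLOW) return 'allow'; // low drift
  if (driftScore < DRIFT_THRESHOLD_WARN) return 'warn';   // moderate drift
  return 'block';                                         // high drift
}
```

Raising `DRIFT_THRESHOLD_ALLOW` makes the policy more permissive; lowering `DRIFT_THRESHOLD_WARN` makes blocking more aggressive.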
- `POST /api/sessions` - Create a new session
- `GET /api/sessions/:sessionId` - Get session details
- `GET /api/sessions` - List sessions
- `POST /api/tool-calls` - Intercept and evaluate a tool call
- `GET /api/tool-calls/:sessionId` - Get tool call history
- `POST /api/intent/infer` - Manually trigger intent inference
- `GET /api/intent/:sessionId/history` - Get intent inference history
- `GET /api/policy/config` - Get policy configuration
- `GET /api/policy/:sessionId/decisions` - Get policy decision history
- `GET /health` - Health check endpoint
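As a sketch, here is how a client might call the `GET /health` endpoint (the only one whose response shape is shown in this README) from Node 18+, which has `fetch` built in. The helper names are illustrative:

```javascript
// Check the body returned by GET /health; its shape
// ({"status":"healthy",...}) is shown in the quickstart above.
function isHealthy(healthBody) {
  return healthBody !== null
    && typeof healthBody === 'object'
    && healthBody.status === 'healthy';
}

// Fetch the endpoint and check it. Assumes Node 18+ (built-in fetch)
// and an IntentGuard server running on localhost:3000.
async function checkHealth(baseUrl = 'http://localhost:3000') {
  const res = await fetch(`${baseUrl}/health`);
  return res.ok && isHealthy(await res.json());
}
```

The other endpoints' request and response schemas are not documented here; consult the server code in `src/server/` before relying on a particular payload shape.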
```bash
# Build image
docker build -t intentguard .

# Run container
docker run -p 3000:3000 \
  -e GEMINI_API_KEY=your_key_here \
  intentguard

# Or use docker-compose
docker-compose up
```

```bash
# Run tests (when implemented)
npm test

# Lint code
npm run lint
```

```
IntentGuard/
├── src/
│   ├── server/    # Express backend
│   ├── intent/    # Gemini integration
│   ├── policy/    # Policy engine
│   ├── wrapper/   # Agent wrapper
│   ├── storage/   # Database layer
│   └── utils/     # Utilities
├── examples/      # Example agents
├── tests/         # Tests
├── docs/          # Documentation
└── data/          # SQLite database (gitignored)
```
- Never commit API keys - use the `.env` file (gitignored)
- Input validation - all inputs are validated
- SQL injection protection - parameterized queries throughout
- CORS - configured for security
- Error handling - no sensitive data in error messages
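The input-validation point above can be illustrated with a small guard for an incoming tool-call payload. The field names here are hypothetical, not the server's real schema; the actual validators live in the server code:

```javascript
// Validate a tool-call payload before processing it.
// sessionId/toolName/args are illustrative field names only.
function validateToolCall(payload) {
  const errors = [];
  if (typeof payload.sessionId !== 'string' || payload.sessionId.length === 0) {
    errors.push('sessionId must be a non-empty string');
  }
  if (typeof payload.toolName !== 'string' || payload.toolName.length === 0) {
    errors.push('toolName must be a non-empty string');
  }
  if (payload.args !== undefined && typeof payload.args !== 'object') {
    errors.push('args, if present, must be an object');
  }
  return { valid: errors.length === 0, errors };
}
```

Rejecting malformed payloads at the boundary keeps bad data out of the policy engine and the database, complementing the parameterized queries mentioned above.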
```bash
# Install dependencies
npm install

# Run in development mode (auto-reload)
npm run dev

# Check code style
npm run lint
```

MIT License - see LICENSE file.
This is a hackathon project. Contributions welcome!
- Built with Google Gemini
- Inspired by zero-trust security principles
- Designed for the future of autonomous AI agents
Built for AI safety in the next 5–10 years.