All runtime settings are loaded from environment variables via `src/config.ts` using dotenv. Copy `.env.example` to `.env` and set values before starting the agent.
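For example, a minimal `.env` for local development might look like the fragment below. The values shown are illustrative, not recommendations; only `MISTRAL_API_KEY` is required.

```shell
# .env — illustrative values
MISTRAL_API_KEY=your-key-here
LLM_PROVIDER=mistral
LLM_TEMPERATURE=0.7
TRACING_ENABLED=true
LOG_FILE=./agent.log
```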
## LLM / Provider

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| `MISTRAL_API_KEY` | (required) | string | Mistral API key. Only required when `LLM_PROVIDER=mistral`. |
| `LLM_PROVIDER` | `mistral` | string | LLM provider. Currently only `mistral` is supported. |
| `LLM_MODEL` | (empty) | string | Model name to pass to the provider SDK. Leave empty to use the provider's default. |
| `LLM_TEMPERATURE` | `0.7` | float | Sampling temperature (0 = deterministic, 1 = creative). |
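A rough sketch of how `src/config.ts` might read these variables with dotenv-style fallbacks. The helper names `envString` and `envFloat` are illustrative, not the actual exported API:

```typescript
// Illustrative helpers for env parsing with defaults; not the real config.ts API.

function envString(name: string, fallback: string): string {
  const value = process.env[name];
  // Treat unset and empty the same, matching the "(empty)" defaults in the table.
  return value !== undefined && value !== "" ? value : fallback;
}

function envFloat(name: string, fallback: number): number {
  const raw = process.env[name];
  if (raw === undefined || raw === "") return fallback;
  const parsed = Number.parseFloat(raw);
  return Number.isNaN(parsed) ? fallback : parsed;
}

const llmProvider = envString("LLM_PROVIDER", "mistral");
const llmTemperature = envFloat("LLM_TEMPERATURE", 0.7);
```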
## Agent Loop

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| `MAX_ITERATIONS` | `20` | int | Maximum LLM iterations per invocation before the loop is aborted with a warning. |
| `MAX_TOKENS_BUDGET` | `0` | int | Token budget reserved for future context-window management. `0` = disabled. |
| `MAX_CONTEXT_TOKENS` | `28000` | int | Maximum tokens in the context window (system prompt + history). Messages are trimmed to stay within this limit. |
| `SYSTEM_PROMPT_PATH` | (empty) | string | Optional path to a `.txt` or `.md` file that overrides the generated system prompt. |
| | | | Print LLM response tokens as they arrive in the CLI REPL. Uses the streaming agent loop when enabled. |
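The trimming behavior behind `MAX_CONTEXT_TOKENS` can be sketched as below. This is an assumption about the strategy (keep the system prompt, drop oldest history first), and the 4-characters-per-token estimate is a rough heuristic, not the agent's real tokenizer:

```typescript
// Hypothetical sketch of context-window trimming; the real agent loop may differ.

interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

// Crude token estimate (~4 chars per token); an assumption, not the real count.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function trimToBudget(messages: Message[], maxTokens: number): Message[] {
  // Assume the system prompt is the first message and is always kept.
  const [system, ...history] = messages;
  let total =
    estimateTokens(system.content) +
    history.reduce((sum, m) => sum + estimateTokens(m.content), 0);
  while (history.length > 0 && total > maxTokens) {
    const dropped = history.shift()!; // drop the oldest history message first
    total -= estimateTokens(dropped.content);
  }
  return [system, ...history];
}
```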
## Prompts and Instructions

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| `INSTRUCTIONS_ROOT` | same as `WORKSPACE_ROOT` | string | Root directory for discovering instruction files (`.instructions.md`, `AGENTS.md`, etc.). |
| `PROMPT_TEMPLATES_DIR` | (empty) | string | Directory containing `.md`/`.txt` prompt template files loaded into PromptRegistry at startup. |
| `PROMPT_HISTORY_FILE` | (empty) | string | Path to a JSON file where prompt template version history is persisted. Leave empty to disable persistence. |
| `PROMPT_CONTEXT_REFRESH_MS` | `5000` | int | TTL in milliseconds before runtime context (workspace, tools, instructions) is rebuilt. `0` = rebuild on every LLM call. |
## Skills and Agent Profiles

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| `SKILLS_DIR` | (empty) | string | Directory to auto-load `*.skill.md` skill files from at startup. |
| `AGENT_PROFILES_DIR` | (empty) | string | Directory to auto-load `*.agent.json` / `*.agent.yaml` agent profile files from at startup. |
## Observability and Tracing

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| `TRACING_ENABLED` | `false` | bool | Write a JSON trace file per agent invocation to `TRACE_OUTPUT_DIR`. |
| `TRACE_OUTPUT_DIR` | `./traces` | string | Directory where invocation trace JSON files are written. Created if missing. |
| `TRACING_COST_PER_INPUT_TOKEN_USD` | `0` | float | USD cost per input (prompt) token for cost estimation. `0` = disabled. |
| `TRACING_COST_PER_OUTPUT_TOKEN_USD` | `0` | float | USD cost per output (completion) token for cost estimation. `0` = disabled. |
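The cost estimate these two variables feed is a simple linear formula. The function name is illustrative; the arithmetic follows directly from the per-token rates described above:

```typescript
// Illustrative cost estimate from per-token USD rates; not the real tracing code.
function estimateCostUsd(
  inputTokens: number,
  outputTokens: number,
  costPerInput: number,  // TRACING_COST_PER_INPUT_TOKEN_USD
  costPerOutput: number, // TRACING_COST_PER_OUTPUT_TOKEN_USD
): number {
  return inputTokens * costPerInput + outputTokens * costPerOutput;
}
```

With both rates at their default of `0`, the estimate is always `0`, matching the "disabled" behavior.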
## Runtime Context

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| `RUNTIME_CONTEXT_ENABLED` | `true` | bool | When true, the system prompt includes the current date/time, OS platform, and Node.js version. Set to false for fully deterministic or snapshot-tested runs. |
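The runtime-context block this setting enables might look roughly like the sketch below; the exact wording and formatting in the real system prompt are assumptions:

```typescript
import * as os from "node:os";

// Sketch of the runtime-context lines added to the system prompt when
// RUNTIME_CONTEXT_ENABLED=true. The labels are illustrative.
function runtimeContext(): string {
  return [
    `Current date/time: ${new Date().toISOString()}`,
    `OS platform: ${os.platform()}`,
    `Node.js version: ${process.version}`,
  ].join("\n");
}
```

Because every field here varies between runs and machines, disabling it is what makes snapshot-tested prompts reproducible.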
## Logging

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| | | | Log destination when `LOG_FILE` is not set: `stdout` or `stderr`. |
| `LOG_FILE` | (empty) | string | When set, all log output is written to this file instead of stdout/stderr. Recommended for production and interactive CLI use to keep the terminal clean. |
| `LOG_NAME` | `agentloop` | string | Logger name included in every log record. |
| `LOG_TIMESTAMP` | `true` | bool | Include an ISO timestamp in every log record. |
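The destination selection implied by `LOG_FILE` and the stdout/stderr fallback can be sketched as follows; the function name and append-mode choice are assumptions:

```typescript
import * as fs from "node:fs";

// Hypothetical sketch of log-destination selection: a file when LOG_FILE is
// set, otherwise the configured standard stream.
function pickLogStream(
  logFile: string | undefined,
  fallback: "stdout" | "stderr",
): NodeJS.WritableStream {
  if (logFile !== undefined && logFile !== "") {
    return fs.createWriteStream(logFile, { flags: "a" }); // append, don't truncate
  }
  return fallback === "stderr" ? process.stderr : process.stdout;
}
```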
## LLM Response Recording (Testing)

| Variable | Default | Type | Description |
| --- | --- | --- | --- |
| `RECORD_LLM_RESPONSES` | `false` | bool | Record real LLM API responses as JSON fixture files for later replay in tests. |
| `LLM_FIXTURE_DIR` | `tests/fixtures/llm-responses` | string | Directory where recorded fixture files are stored and `MockChatModel.fromFixture()` reads from. |