Vivid Agent is an extensible, open‑source framework for building autonomous AI agents in TypeScript. It combines persistent memory, tool execution, and a flexible AI engine to create agents that can run tasks with minimal supervision—or respond to user requests on demand.
🚧 Actively developed – we're looking for contributors to help shape the future of autonomous agents!
- Memory system – Long‑term (Markdown file) and short‑term (in‑memory context) memories.
- Tool framework – Easily add tools (file manipulation, command execution, etc.) with sandboxing.
- AI engine abstraction – Use NVIDIA's models, a mock engine for testing, or plug in your own.
- Workspace isolation – All files and logs are contained in a dedicated directory.
- Logging – Coloured console output + file logs with multiple levels.
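To make the tool framework concrete, here is a hypothetical sketch of what a custom tool could look like in TypeScript. The `Tool` interface and `echoTool` below are illustrative assumptions, not the actual Vivid Agent API, which may differ:

```typescript
// Hypothetical sketch of a tool definition — the real Vivid Agent
// tool API may use different names and shapes.
interface Tool {
  name: string;
  description: string;
  run(args: Record<string, string>): Promise<string>;
}

// A minimal example tool that returns its input unchanged.
const echoTool: Tool = {
  name: "echo",
  description: "Returns its input unchanged.",
  run: async (args) => args.text ?? "",
};

echoTool.run({ text: "hello" }).then(console.log); // prints "hello"
```

The point of a small, uniform interface like this is that sandboxing and logging can wrap every tool the same way, regardless of what it does.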
We aim to build a reliable, self‑hosted agent that can:
- Run autonomously in the background, performing scheduled tasks.
- Learn from interactions via long‑term memory.
- Be extended with custom tools and skills by the community.
```bash
git clone https://github.com/vividorg/agent.git
cd agent
npm install
npm run build
chmod +x dist/index.js
npm link
```

Start the service and open an interactive session:

```bash
vivid service   # start the agent service
vivid tui       # open interactive prompt (/exit to quit)
```

Or send a one-shot prompt:

```bash
vivid tui -m "Hello, what can you do?"
```

Without `npm link`, use `npm run service` and `npm run tui` instead of `vivid service`/`vivid tui`.
Default service URL is `http://127.0.0.1:3100`, configurable via `--url` or `VIVID_SERVICE_URL`.
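Because the service accepts prompts over HTTP (`POST /prompt`), you can also drive it from your own code. A minimal TypeScript sketch — the endpoint and default URL come from this README, but the JSON request shape (`{ prompt }`) and plain-text response are assumptions about the API:

```typescript
// Sketch: send a prompt to a running Vivid service over HTTP.
// The request/response shapes below are assumptions, not the real schema.
const baseUrl = process.env.VIVID_SERVICE_URL ?? "http://127.0.0.1:3100";

async function sendPrompt(prompt: string): Promise<string> {
  const res = await fetch(`${baseUrl}/prompt`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`service responded with ${res.status}`);
  return res.text();
}

// sendPrompt("Hello, what can you do?").then(console.log);
```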
| Command | Description |
|---|---|
| `vivid service` | Start HTTP service for incoming prompts (`POST /prompt`) |
| `vivid tui` | Open interactive CLI prompt window |
| `vivid tui -m "prompt"` | Send one prompt and exit |
| `vivid service --mock` | Start with mock AI engine (no API key needed) |
| `vivid service --engine nvidia\|llama\|mock` | Choose AI provider |
- Start `llama.cpp` with an OpenAI-compatible endpoint:

```bash
./llama-server -m /path/to/model.gguf --host 0.0.0.0 --port 8080
```

- Copy `.env.example` to `.env` and configure:

```bash
AI_ENGINE=llama
LLAMA_BASE_URL=http://127.0.0.1:8080
LLAMA_MODEL=local
LLAMA_MAX_TOKENS=4096
NVIDIA_API_KEY=nvapi-key  # not required for the llama engine
```

- Run:

```bash
vivid service
vivid tui -m "Hi there, what can you do?"
```

Or run everything in Docker:

```bash
docker compose up -d --build
vivid tui -m "Hi there, what can you do?"
```

Agent data is persisted in `./data/` (`VIVID_HOME=/data` inside the container).
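Whether native or in Docker, the llama engine talks to `llama.cpp`'s OpenAI-compatible server, so you can query that server directly to confirm it is up. A TypeScript sketch — `/v1/chat/completions` is the standard llama.cpp server route, but the response handling below is a simplified assumption:

```typescript
// Sketch: query the llama.cpp server directly via its
// OpenAI-compatible chat endpoint (simplified, no error handling).
const base = process.env.LLAMA_BASE_URL ?? "http://127.0.0.1:8080";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${base}/v1/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: process.env.LLAMA_MODEL ?? "local",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// chat("Say hi").then(console.log);
```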
```bash
vivid service &   # simple background start
```

Or with PM2 for process management:

```bash
npm run pm2:start
pm2 status
npm run pm2:stop
```

Storage path is configurable via `VIVID_HOME` (default: `./.vivid` in the working directory).
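The storage-path fallback can be pictured as a one-liner. This is a sketch of the documented behaviour, not the project's actual code:

```typescript
import * as path from "path";

// Sketch of the documented VIVID_HOME resolution: the env variable
// wins, otherwise fall back to ./.vivid in the working directory.
const vividHome =
  process.env.VIVID_HOME ?? path.resolve(process.cwd(), ".vivid");

console.log(`Agent data lives in: ${vividHome}`);
```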
Development happens in the open, and we'd love your help!
- Discord – real‑time discussions, support, and ideas.
- GitHub Issues – report bugs or suggest features.
- Contributing guide – look for good first issues to get started.