- Self-improving memory module for Omni-Avatar: automatic extraction and real-time matching of the user's full-modality persona.
- An optimizer for Omni-Avatar that automatically builds an internal knowledge base for avatars.
- Enables agents to plan over a longer time frame, ensuring their actions are sequential and reliable.
- Controls AlphaAvatar's behavior logic and process flow.
- The real-time generated virtual character that visually represents the Avatar during interactions.
- Allows AlphaAvatar to access the network and perform single-step/multi-step inference through a separate Agent service to search for more accurate content.
- Allows AlphaAvatar to access Documents/Skills (user-uploaded, generated by the Reflection module, or fetched via URL) to obtain document-related information.
- Allows AlphaAvatar to access real-world external tools, such as databases, email, and social media.
- Provides AlphaAvatar with a sandbox environment for interacting with the external world or with other agents, enabling multi-agent interaction and exploration.
- [2026/03] Released AlphaAvatar version 0.5.0: added the MCP plugin, enabling retrieval and concurrent invocation of MCP tools.
- Released AlphaAvatar version 0.5.1: added WhatsApp channel support via the Baileys driver, enabling connection to the AlphaAvatar Agent for WhatsApp integration.
- [2026/02] Released AlphaAvatar version 0.4.0: added RAG support via the RAG-Anything library and optimized the Memory and DeepResearch modules.
- Released AlphaAvatar version 0.4.1: fixed Persona plugin bugs and added the new MCP plugin.
- [2026/01] Released AlphaAvatar version 0.3.0: added DeepResearch support via the Tavily API.
- Released AlphaAvatar version 0.3.1: added tool calls during user–assistant interactions to the Memory module.
- [2025/12] Released AlphaAvatar version 0.2.0: added AIRI Live2D-based virtual character display.
- [2025/11] Released AlphaAvatar version 0.1.0: added automatic memory extraction and automatic user persona extraction and matching.
Install the stable AlphaAvatar version from PyPI:

```bash
uv venv .my-env --python 3.11
source .my-env/bin/activate
pip install alpha-avatar-agents
```

Install the latest AlphaAvatar version from GitHub:

```bash
git clone --recurse-submodules https://github.com/AlphaAvatar/AlphaAvatar.git
cd AlphaAvatar
uv venv .venv --python 3.11
source .venv/bin/activate
uv sync --all-packages
```

Start your agent in dev mode to connect it to LiveKit and make it available from anywhere on the internet.
🧩 Step 1. Configure Environment Variables

```bash
cd AlphaAvatar
# Copy template
cp .env.template .env.dev
```

Edit `.env.dev` and set the required environment variables.
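What belongs in `.env.dev` depends on your `.env.template`, but a LiveKit-backed OpenAI pipeline typically needs credentials along these lines. The variable names below follow common LiveKit/OpenAI conventions and are assumptions, with placeholder values:

```bash
# Illustrative .env.dev sketch — treat .env.template as the authoritative list.
LIVEKIT_URL=wss://your-project.livekit.cloud
LIVEKIT_API_KEY=your-api-key
LIVEKIT_API_SECRET=your-api-secret
OPENAI_API_KEY=your-openai-key
```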
📦 Step 2. Download Required Files

```bash
alphaavatar download-files
```

🚀 Step 3. Run the Agent

```bash
ENV_FILE=.env.dev alphaavatar dev examples/agent_configs/pipline_openai_airi.yaml
# or
ENV_FILE=.env.dev alphaavatar dev examples/agent_configs/pipline_openai_tools.yaml
```

To see more supported modes, please refer to the LiveKit docs. To see more examples, please refer to the Examples README.
AlphaAvatar supports multiple Access Channels, allowing different types of users, from end users to developers, to interact with the system.
```
                  AlphaAvatar Runtime
                  ───────────────────
         ┌──────────────────────────────┐
         │         AgentSession         │
         │         AvatarEngine         │
         │  (LLM / Memory / RAG / MCP)  │
         └──────────────┬───────────────┘
                        │
                 InputDispatcher
                        │
                  InputEnvelope
                        │
         ┌──────────────┴───────────────┐
         │                              │
  Channel Adapters              Native Inputs
  (Ingress Layer)                (Web / App)
         │                              │
         ▼                              ▼
WhatsApp / WeChat / Slack      audio / text / video
         │                              │
         └──────────────┬───────────────┘
                        ▼
                 OutputDispatcher
                        │
         ┌──────────────┴───────────────┐
         │                              │
   Channel Egress               Native Output
  (Messaging APIs)              (WebRTC / UI)
```
💡 AlphaAvatar uses a Channel Adapter architecture to decouple runtime logic from communication channels.
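To illustrate the decoupling, each adapter in the ingress layer normalizes channel-specific payloads into a common envelope before dispatch. The sketch below is illustrative only, not the actual AlphaAvatar API: the `InputEnvelope` fields, the `ChannelAdapter` interface, and the WhatsApp payload shape are all assumptions.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Any


@dataclass
class InputEnvelope:
    """Channel-agnostic message handed to the runtime's InputDispatcher."""
    channel: str                 # e.g. "whatsapp", "web"
    user_id: str
    modality: str                # "text" | "audio" | "video"
    payload: Any
    meta: dict = field(default_factory=dict)


class ChannelAdapter(ABC):
    """Ingress layer: converts raw channel events into InputEnvelopes."""

    @abstractmethod
    def to_envelope(self, raw: dict) -> InputEnvelope: ...


class WhatsAppAdapter(ChannelAdapter):
    def to_envelope(self, raw: dict) -> InputEnvelope:
        # Field names below mimic a typical messaging webhook; a real
        # adapter would follow the driver's actual event schema.
        return InputEnvelope(
            channel="whatsapp",
            user_id=raw["from"],
            modality="audio" if raw.get("voice") else "text",
            payload=raw["body"],
            meta={"msg_id": raw.get("id")},
        )


def dispatch(envelope: InputEnvelope) -> str:
    """Stand-in for InputDispatcher: route every envelope the same way,
    regardless of which channel produced it."""
    return f"[{envelope.channel}/{envelope.modality}] {envelope.user_id}: {envelope.payload}"


raw_event = {"from": "+1555", "body": "hello", "id": "m1"}
print(dispatch(WhatsAppAdapter().to_envelope(raw_event)))
# -> [whatsapp/text] +1555: hello
```

Because the runtime only ever sees `InputEnvelope`, adding a new channel means writing one adapter, with no changes to the engine.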
🖥️ Browser-based interface for real-time interaction. This will become the official AlphaAvatar user interface.
- 🎙️ Real-time voice & multimodal communication
- 🧠 Full plugin support (Memory / RAG / MCP / etc.)
- 🎭 Virtual character display
Interact with AlphaAvatar directly inside messaging platforms.
Capabilities:
- 💬 Text-based conversation
- 🎤 Voice message interaction
- 🧰 Tool invocation via chat interface
📦 Channel introduction: README
Make sure AlphaAvatar Agent is already running (see Quick Start above).
```bash
ENV_FILE=.env.dev sh examples/channels/start_whatsapp.sh
```

💡 The WhatsApp channel runs as an independent bridge process and connects to the Agent runtime.
A dedicated AlphaAvatar mobile application providing:
- 🎙️ Real-time voice communication
- 🎭 Live2D / virtual character visualization
- 🧠 Persistent memory & persona
This is the primary access channel for AlphaAvatar today.
Developers can immediately access AlphaAvatar via the LiveKit Playground.
🔗 https://agents-playground.livekit.io/
After starting your AlphaAvatar server:
- Connect to your LiveKit instance
- Configure the Agent name in the Playground (it must match `avatar_name`, default: `Assistant`) to enable Explicit Dispatch.
- Connect to the agent room
- Start testing real-time interaction
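Explicit Dispatch means a room's job is routed only to a worker registered under a matching agent name, rather than to any available agent. A toy sketch of that matching rule (illustrative only; `should_dispatch` is a made-up helper, and `agent_name` follows the LiveKit convention):

```python
def should_dispatch(requested_agent: str, worker_agent_name: str) -> bool:
    """Explicit dispatch: the name requested for the room must equal the
    worker's registered agent name exactly (case-sensitive)."""
    return requested_agent == worker_agent_name


# The Playground's "Agent name" field must equal the worker's avatar_name.
print(should_dispatch("Assistant", "Assistant"))  # True  -> agent joins the room
print(should_dispatch("assistant", "Assistant"))  # False -> no dispatch
```

This is why a mismatched name in the Playground silently produces an empty room: no worker ever accepts the job.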
Supported capabilities:
- 🎙️ Voice interaction
- 🧠 Memory extraction
- 📚 RAG retrieval
- 🧰 MCP tool invocation
- 🎭 Virtual character display
💡 AlphaAvatar is currently developer-first. Web and mobile experiences are actively under development.

