A terminal application for chatting with multiple LLMs in the same room. Each model keeps its own copy of the full transcript, decides independently whether to respond, and can stay silent with a single token. Responses are streamed so that partial drafts can be captured if a new message arrives mid-reply.
Run the terminal client:

```sh
uv run group-chat
```

Or run the web UI:

```sh
uv run group-chat-web
```

Then open http://127.0.0.1:8000. The UI shows a normal group chat plus a side panel with each model's live thoughts and interrupted drafts. You can add/remove models and edit names or system prompts from the config panel.
Edit `MODEL_CONFIGS` in `src/group_chat/cli.py`:

```python
MODEL_CONFIGS = [
    ModelConfig(
        name="M2-Alpha",
        model="MiniMax-M2.1",
        api_key_env="MINIMAX_API_KEY",
        base_url="https://api.minimax.io/anthropic",
    ),
]
```

Each model can point at a different API key or base URL. Add more entries to scale the group.
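Because each entry names an `api_key_env` rather than embedding a raw key, the key is resolved from the environment when the client is built. A minimal sketch of what such a config class might look like — the field names come from the example above, but the `api_key()` helper and everything else here is an assumption, not the project's actual code:

```python
import os
from dataclasses import dataclass

@dataclass
class ModelConfig:
    name: str          # display name shown in the chat
    model: str         # provider-side model identifier
    api_key_env: str   # name of the env var holding the API key
    base_url: str      # API endpoint for this model

    def api_key(self) -> str:
        # Hypothetical helper: resolve the key at call time, so .env
        # loading can happen before any request is made.
        key = os.environ.get(self.api_key_env, "")
        if not key:
            raise RuntimeError(f"{self.api_key_env} is not set")
        return key
```

Keeping the key out of the config itself means the same file can be committed while credentials stay in `.env`.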
Put your API keys in .env or export them:
```sh
export MINIMAX_API_KEY=...
```

- Every incoming message is delivered to every model.
- A model can stay silent by replying with `<SILENCE/>`.
- If a new message arrives while a model is streaming a reply, the reply is interrupted and captured as an unsent draft in that model's own history, then the new message is appended and the model decides again.
- Tool calls are enabled (see `ToolRegistry` in `src/group_chat/cli.py`).
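The delivery, silence, and interruption rules above can be sketched as a small per-model state object. All names here are illustrative, not the project's actual API; this only shows the bookkeeping the bullets describe:

```python
SILENCE = "<SILENCE/>"

class ModelSession:
    """Hypothetical sketch of one model's private view of the chat."""

    def __init__(self, name: str):
        self.name = name
        self.history = []  # this model's own copy of the transcript

    def receive(self, message: str) -> None:
        # Every incoming message is delivered to every model.
        self.history.append(("user", message))

    def finish_reply(self, text: str):
        # A reply consisting solely of the silence token is dropped;
        # anything else is recorded and sent to the room.
        if text.strip() == SILENCE:
            return None
        self.history.append(("assistant", text))
        return text

    def interrupt(self, partial: str) -> None:
        # A new message arrived mid-stream: keep the partial text as
        # an unsent draft in this model's own history only.
        self.history.append(("draft", partial))

# Usage: a new message interrupts a streaming reply.
m = ModelSession("M2-Alpha")
m.receive("hello everyone")
m.interrupt("I was about to say")  # draft captured, never broadcast
m.receive("actually, new topic")
m.finish_reply(SILENCE)            # model elects to stay quiet
```

Keeping drafts in the model's own history (rather than the shared room) is what lets each model "remember" what it was going to say when it decides whether to respond again.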
In-chat commands:

- `/help` - show commands
- `/models` - list configured models
- `/clear` - clear the screen
- `/exit` or `/quit` - exit