tars90percent/true-groupchat


Group Chat CLI

A terminal application for chatting with multiple LLMs in the same room. Each model keeps its own copy of the full transcript, decides independently whether to respond, and can stay silent with a single token. Responses are streamed so that partial drafts can be captured if a new message arrives mid-reply.

Quick start

uv run group-chat

Web UI

uv run group-chat-web

Then open http://127.0.0.1:8000. The UI shows a normal group chat plus a side panel with each model's live thoughts and interrupted drafts. You can add/remove models and edit names or system prompts from the config panel.

Configure models

Edit MODEL_CONFIGS in src/group_chat/cli.py:

MODEL_CONFIGS = [
    ModelConfig(
        name="M2-Alpha",
        model="MiniMax-M2.1",
        api_key_env="MINIMAX_API_KEY",
        base_url="https://api.minimax.io/anthropic",
    ),
]

Each model can point at a different API key or base URL. Add more entries to scale the group.
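For illustration, a two-model setup might look like the sketch below. The `ModelConfig` dataclass here is a stand-in mirroring the fields of the example above (the real class lives in src/group_chat/cli.py), and the second entry is hypothetical:

```python
from dataclasses import dataclass

# Stand-in for the real ModelConfig in src/group_chat/cli.py;
# field names follow the example above.
@dataclass
class ModelConfig:
    name: str
    model: str
    api_key_env: str
    base_url: str

# Two entries: each model may point at a different key env var or base URL.
MODEL_CONFIGS = [
    ModelConfig(
        name="M2-Alpha",
        model="MiniMax-M2.1",
        api_key_env="MINIMAX_API_KEY",
        base_url="https://api.minimax.io/anthropic",
    ),
    ModelConfig(
        name="M2-Beta",  # hypothetical second entry to scale the group
        model="MiniMax-M2.1",
        api_key_env="MINIMAX_API_KEY",
        base_url="https://api.minimax.io/anthropic",
    ),
]
```

Since each entry carries its own `api_key_env` and `base_url`, the two entries could just as well target entirely different providers.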

Environment

Put your API keys in .env or export them:

export MINIMAX_API_KEY=...

Behavior notes

  • Every incoming message is delivered to every model.
  • A model can stay silent by replying with <SILENCE/>.
  • If a new message arrives while a model is streaming a reply, the reply is interrupted and captured as an unsent draft in that model's own history, then the new message is appended and the model decides again.
  • Tool calls are enabled (see ToolRegistry in src/group_chat/cli.py).
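The delivery and silence rules above can be sketched roughly as follows. The history shape and helper names are assumptions for illustration, not the actual implementation:

```python
SILENCE = "<SILENCE/>"

def should_speak(reply: str) -> bool:
    # A model opts out of a turn by replying with the single token <SILENCE/>.
    return reply.strip() != SILENCE

def deliver(history: list, new_message: str, partial_draft: str = "") -> list:
    """Deliver an incoming message to one model's private transcript.

    If the model was mid-stream when the message arrived, the partial
    reply is kept as an unsent draft in that model's own history before
    the new message is appended; the model then decides again.
    """
    if partial_draft:
        history.append({"role": "assistant", "content": partial_draft, "unsent": True})
    history.append({"role": "user", "content": new_message})
    return history
```

Because every model holds its own copy of the transcript, an interrupted draft is visible only to the model that wrote it.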

Commands

  • /help - show commands
  • /models - list configured models
  • /clear - clear the screen
  • /exit or /quit - exit
