A sarcastic Slack bot that corrects people when they're wrong. Tag @Samaraj on a message and it will deliver a factual correction with dry wit and a subtle burn.
Powered by a self-hosted Llama 3.2 3B model — no third-party AI APIs involved.
- Someone posts something wrong in a channel
- You reply to that message with @Samaraj
- Samaraj reads the message, generates a snarky correction, and replies in the thread
You can also DM Samaraj directly — just send a message and it'll correct you.
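Before the message reaches the model, the handler has to strip Slack's mention markup, since Slack delivers mention text as e.g. `<@U12345> Python was created in 1995`. A minimal sketch of that cleanup; the helper name is illustrative, not necessarily what the project uses:

```python
import re

# Slack encodes mentions as <@USERID>; remove them so the model only
# sees the human-written claim. (Illustrative helper, not project code.)
MENTION_RE = re.compile(r"<@[A-Z0-9]+>\s*")

def strip_mentions(text: str) -> str:
    return MENTION_RE.sub("", text).strip()
```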
User: Python was created in 1995
Samaraj: Python was actually created in 1991 by Guido van Rossum. I'm not sure what decade they were counting on, but it's not 1995.
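That tone comes from a personality prompt (`src/prompts.py`). A hedged sketch of how the chat messages might be assembled; the prompt wording here is illustrative, not the project's actual prompt:

```python
# Illustrative system prompt and message builder; the real prompt in
# src/prompts.py will differ.
SYSTEM_PROMPT = (
    "You are Samaraj, a sarcastic fact-checker. Correct the user's claim "
    "accurately, in one or two sentences, with dry wit and a subtle burn."
)

def build_messages(user_text: str) -> list[dict]:
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]
```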
- Python with slack_bolt (Socket Mode)
- Llama 3.2 3B via llama-cpp-python (CPU inference)
- GGUF quantized model (Q4_K_M, ~2 GB) from HuggingFace
- Docker container deployed on Railway
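The inference path presumably goes through llama-cpp-python's `create_chat_completion`, which returns an OpenAI-style response dict. A sketch of the model wrapper with the model object injected, so it can be exercised without the 2 GB GGUF file (the function name and parameters are assumptions, not the actual `src/model.py`):

```python
def generate_correction(llm, user_text: str,
                        system_prompt: str = "You are Samaraj, a sarcastic fact-checker.") -> str:
    """Run one chat completion. `llm` is a llama_cpp.Llama instance
    (or anything exposing the same create_chat_completion method)."""
    resp = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_text},
        ],
        max_tokens=256,   # keep replies short and snappy
        temperature=0.7,  # a little randomness for varied snark
    )
    return resp["choices"][0]["message"]["content"].strip()
```

Injecting `llm` also makes the wrapper easy to unit-test with a fake model object.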
- Python 3.11+
- A HuggingFace account (free)
- A Slack workspace where you can install apps
- A Railway account (for deployment)
```
git clone <your-repo-url>
cd samaraj
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```

Go to meta-llama/Llama-3.2-3B-Instruct on HuggingFace and accept Meta's license agreement.
Go to huggingface.co/settings/tokens and create a Read token.
- Go to api.slack.com/apps > Create New App > From scratch
- Name it "Samaraj", select your workspace
- Socket Mode > Enable > Create an app-level token (name: `samaraj-socket`, scope: `connections:write`) > save the token as `SLACK_APP_TOKEN`
- OAuth & Permissions > Add Bot Token Scopes: `app_mentions:read`, `chat:write`, `channels:history`, `im:history`
- Event Subscriptions > Enable > Subscribe to bot events: `app_mention`, `message.im`
- App Home > Enable Messages Tab > check "Allow users to send Slash commands and messages from the messages tab"
- Install to Workspace > copy the Bot User OAuth Token as `SLACK_BOT_TOKEN`
- Invite @Samaraj to channels: `/invite @Samaraj`
```
cp .env.example .env
```

Fill in your `.env`:

```
SLACK_BOT_TOKEN=xoxb-your-bot-token
SLACK_APP_TOKEN=xapp-your-app-token
HF_TOKEN=hf_your_huggingface_token
MODEL_PATH=models/Llama-3.2-3B-Instruct-Q4_K_M.gguf
```
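The app presumably loads these with something like python-dotenv; for illustration, a stdlib-only parser of the same `KEY=VALUE` format (a sketch, not the project's loader):

```python
def load_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines; blank lines and # comments are ignored."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env
```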
```
python scripts/download_model.py
```

This downloads ~2 GB of model weights.

```
python -m src.app
```

- Push the repo to GitHub
- Go to railway.app > New Project > Deploy from GitHub Repo
- Add environment variables in the Railway dashboard:
`SLACK_BOT_TOKEN`, `SLACK_APP_TOKEN`, `HF_TOKEN`
- Deploy: Railway builds the Docker image and downloads the model during the build
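Baking the model in at build time is what makes the container start instantly. A plausible minimal shape for the Dockerfile; this is illustrative, the repo's actual file will differ, and passing `HF_TOKEN` as a build arg is an assumption:

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Download the ~2 GB GGUF during the build so the image ships with the model
ARG HF_TOKEN
RUN HF_TOKEN=$HF_TOKEN python scripts/download_model.py
CMD ["python", "-m", "src.app"]
```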
Test Samaraj's responses without the Slack connection:
```
python scripts/test_locally.py "The Great Wall of China is visible from space"
```

```
├── src/
│   ├── app.py               # Slack event handler (main entrypoint)
│   ├── model.py             # Llama model wrapper
│   └── prompts.py           # Samaraj's personality prompt
├── tests/                   # Unit tests
├── scripts/
│   ├── download_model.py    # Download model from HuggingFace
│   └── test_locally.py      # Test without Slack
├── Dockerfile               # Container with model baked in
└── railway.toml             # Railway deployment config
```
```
python -m pytest tests/ -v
```