Stop using ChatGPT for shell commands. Type what you want in plain English, get the shell command instantly.
Live: nl2shell-web.vercel.app
A web interface for the NL2Shell model — a fine-tuned 800M parameter model (Qwen3.5-0.8B) that translates natural language to shell commands. No API keys, no subscription.
Browser → Next.js on Vercel → Gradio Space API → NL2Shell Model (HuggingFace)
              ↓
          Supabase (feedback + analytics, optional)
| Component | Stack | Location |
|---|---|---|
| Frontend | Next.js 16, React 19, Tailwind 4, shadcn/ui | Vercel |
| API | Next.js Route Handlers (serverless) | Vercel |
| Model | Qwen3.5-0.8B + QLoRA fine-tune | HuggingFace Space |
| Database | Supabase (optional) | Supabase Cloud |
| Analytics | Vercel Analytics + Speed Insights | Vercel |
- Text input with Enter-to-submit
- Voice input via Web Speech API (Chrome/Edge)
- Dangerous command detection (22 patterns)
- Copy-to-clipboard
- Feedback collection (thumbs up/down + corrections)
- Command history (last 20, session-only)
- Structured logging (Vercel-compatible JSON)
- Security headers (CSP, HSTS, X-Frame-Options)
- Rate limiting (30 translations/min, 10 feedback/min per IP)
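The dangerous-command check can be sketched as a regex pass over the model's output. The patterns below are an illustrative subset (the app ships 22 patterns whose exact contents are not listed here), and `isDangerous` is a hypothetical name:

```typescript
// Illustrative subset of dangerous-command patterns (the real app has 22).
const DANGEROUS_PATTERNS: RegExp[] = [
  /rm\s+(-\w+\s+)*-?[rf]*\s*\/(\s|$)/, // rm -rf / and close variants
  /mkfs\.\w+/,                         // formatting a filesystem
  /dd\s+if=.*of=\/dev\//,              // raw writes to a block device
  /:\(\)\s*\{\s*:\|:&\s*\}\s*;:/,      // classic bash fork bomb
];

// Returns true if any pattern matches the generated command.
export function isDangerous(command: string): boolean {
  return DANGEROUS_PATTERNS.some((p) => p.test(command));
}
```

A matching command would be flagged in the UI before the user copies it, rather than blocked outright.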
git clone https://github.com/nl2shell/nl2shell-web.git
cd nl2shell-web
bun install
bun dev

Open http://localhost:3000.
Create .env.local:
# Optional: HuggingFace token (avoids anonymous rate limits)
HF_TOKEN=hf_your_token_here
# Optional: Supabase (enables feedback persistence + analytics)
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
# Optional: IP hashing salt for privacy
IP_SALT=your-random-salt

The app works without any environment variables. Supabase integration is opt-in — without it, feedback is logged to stdout only.
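The opt-in behavior can be sketched as below; `persistFeedback` and its injectable parameters are hypothetical names for illustration, not the app's actual code:

```typescript
type Feedback = { query: string; command: string; rating: "up" | "down" };

// Hypothetical sketch: write feedback to Supabase only when both env vars
// are present; otherwise fall back to structured stdout logging.
export function persistFeedback(
  fb: Feedback,
  env: Record<string, string | undefined> = process.env,
  insert: (fb: Feedback) => void = () => {},
  log: (msg: string) => void = console.log,
): "supabase" | "stdout" {
  if (env.SUPABASE_URL && env.SUPABASE_ANON_KEY) {
    insert(fb); // in the real app this would be a Supabase client call
    return "supabase";
  }
  log(JSON.stringify({ event: "feedback", ...fb }));
  return "stdout";
}
```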
If you want to persist feedback and track usage:
- Create a Supabase project (free tier: 500MB)
- Run `supabase/schema.sql` in the SQL Editor
- Add `SUPABASE_URL` and `SUPABASE_ANON_KEY` to `.env.local`
Useful queries once data is flowing:
-- Daily translation count
SELECT date_trunc('day', created_at) AS day, count(*)
FROM translations GROUP BY 1 ORDER BY 1 DESC;
-- Feedback summary
SELECT rating, count(*) FROM feedback GROUP BY rating;
-- Corrections for training data
SELECT query, command, correction FROM feedback
WHERE rating = 'down' AND correction IS NOT NULL;

Vercel Hobby plan (free):
- 100K function invocations/month — blocks after that (no surprise bills)
- 100GB bandwidth/month
HuggingFace Space (free CPU tier):
- Sleeps after 48h inactivity, wakes on request (~30s cold start)
- CPU inference: 1-15s per query depending on length
- Queues under load — won't cost money, just gets slow
Rate limiting:
- 30 translations per IP per minute (in-memory, resets on cold start)
- For production at scale: replace with Upstash Redis (free: 10K commands/day)
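A minimal version of the in-memory limiter might look like the following: a fixed window per IP whose state lives in module scope, so it resets on serverless cold starts exactly as noted above. Names and structure are assumptions, not the app's source:

```typescript
type Window = { count: number; resetAt: number };

// Module-level map: survives warm invocations, wiped on cold start.
const windows = new Map<string, Window>();

// Returns true if this request is within the per-IP limit.
export function allowRequest(ip: string, limit = 30, windowMs = 60_000): boolean {
  const now = Date.now();
  const w = windows.get(ip);
  if (!w || now >= w.resetAt) {
    windows.set(ip, { count: 1, resetAt: now + windowMs });
    return true;
  }
  if (w.count >= limit) return false;
  w.count++;
  return true;
}
```

Swapping this for Upstash Redis mainly means replacing the `Map` with a remote `INCR` + `EXPIRE`, which keeps counts consistent across serverless instances.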
If you get a million users: The HF Space will queue and timeout before billing becomes an issue. Vercel Hobby blocks at limits. Upgrade path: Vercel Pro ($20/mo) + HF Dedicated Inference ($0.60/hr GPU).
curl -X POST https://nl2shell-web.vercel.app/api/translate \
-H "Content-Type: application/json" \
-d '{"query": "find all python files"}'

Response: {"command": "find . -name '*.py'", "meta": "..."}
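The same call from TypeScript, based on the curl example above; `buildTranslateRequest` and `translate` are hypothetical helper names, not part of the app's API surface:

```typescript
// Build the request separately so the payload shape is easy to test.
export function buildTranslateRequest(query: string): { url: string; init: RequestInit } {
  return {
    url: "https://nl2shell-web.vercel.app/api/translate",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query }),
    },
  };
}

// Thin wrapper: POST the query, return the generated shell command.
export async function translate(query: string): Promise<string> {
  const { url, init } = buildTranslateRequest(query);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`translate failed: HTTP ${res.status}`);
  const data = (await res.json()) as { command: string };
  return data.command;
}
```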
curl -X POST https://nl2shell-web.vercel.app/api/feedback \
-H "Content-Type: application/json" \
-d '{"query": "list files", "command": "ls", "rating": "up"}'

- NL2Shell Model — The fine-tuned model
- NL2Shell Dataset — 12,834 training pairs
- Gradio Demo — Direct model interface
- Vox CLI — Terminal client
- CloudAGI — Agent Credit Economy
MIT