- `prep`: generate meeting-ready talking points on the go.
- `record`: let InnerBoard track your terminal usage in the background.
- `health`: check the health of the end-to-end workflow, covering security, performance, and reliability.
- `list`: preview your personal vault of reflections and notes.
Inspiration
A 100% offline meeting prep assistant that turns your console history into team/manager-ready updates.
InnerBoard-local analyzes your terminal sessions and produces concise, professional talking points for your next standup or 1:1. Everything runs locally on your machine—no data ever leaves your device.
What it does
- Private journaling → structured signals. Extracts key points, blockers, and resources.
- Console activity analysis. Turns raw terminal logs into structured “sessions” you can summarize or share.
- Actionable micro-advice. Suggests next steps and checklists tied to what you just wrote or ran.
- Shareable summaries. Auto-generate weekly reviews, stand-up notes, or personal status updates.
- Zero-egress by design. Encrypted local vault; LLM runs via Ollama on localhost.
- Modern CLI UX. Rich tables, progress bars, and friendly errors.
How we built it
- Local LLMs via Ollama (`gpt-oss:20b`) with connection pooling and TTL caching.
- Security: PBKDF2 (100k iterations), Fernet (AES-128) at rest, SHA-256 integrity checks, strict input validation, loopback-only networking.
- Storage: Encrypted SQLite vault with safe file ops and secure deletion.
- CLI: Python + Click + Rich; config via python-dotenv; Docker support.
- Quality: 50 tests, all green across security, caching, integration, and network safety.
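The at-rest encryption described above (PBKDF2 key derivation feeding Fernet) can be sketched with the widely used `cryptography` package. This is an illustrative reconstruction under assumed parameters, not the project's actual code; the passphrase, salt handling, and sample plaintext are hypothetical:

```python
import base64
import os

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a Fernet-compatible key from a passphrase via PBKDF2 (100k iterations)."""
    kdf = PBKDF2HMAC(
        algorithm=hashes.SHA256(),
        length=32,  # Fernet expects a 32-byte key, base64url-encoded
        salt=salt,
        iterations=100_000,
    )
    return base64.urlsafe_b64encode(kdf.derive(passphrase))


salt = os.urandom(16)                      # stored alongside the vault
key = derive_key(b"correct horse battery", salt)
f = Fernet(key)

token = f.encrypt(b"reflection: fixed the flaky CI job")
assert f.decrypt(token) == b"reflection: fixed the flaky CI job"
```

Fernet authenticates every token with an HMAC, so tampering with the ciphertext is detected on decrypt, which pairs naturally with the SHA-256 integrity checks mentioned above.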
Why gpt-oss
- Open weights + local inference: Easy to run and audit offline via Ollama.
- Reasoning-first: Handles multi-step extraction from messy shell traces.
- Composable prompts: Stable JSON contracts enable “reflection → advice → summary.”
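A "stable JSON contract" here means the model must return a fixed set of typed fields that downstream steps can rely on. A minimal sketch of such a validator, using the SRE field names from the demo section (the `parse_sre` helper and the sample reply are illustrative, not the project's API):

```python
import json

# Expected shape of an SRE payload: summary, successes, blockers, resources.
SRE_SCHEMA = {
    "summary": str,
    "successes": list,
    "blockers": list,
    "resources": list,
}


def parse_sre(raw: str) -> dict:
    """Parse a model reply and enforce the SRE contract, raising on violations."""
    data = json.loads(raw)
    for key, expected_type in SRE_SCHEMA.items():
        if not isinstance(data.get(key), expected_type):
            raise ValueError(
                f"contract violation: {key!r} must be {expected_type.__name__}"
            )
    return data


reply = (
    '{"summary": "fixed CI", "successes": ["green build"],'
    ' "blockers": [], "resources": ["docs/ci.md"]}'
)
sre = parse_sre(reply)
```

Rejecting malformed replies at this boundary is what lets "reflection → advice → summary" compose: each stage only ever sees payloads that passed the contract.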
Challenges we ran into
- Enforcing true zero-egress while keeping local LLMs fast and responsive.
- Normalizing noisy shell logs across OSes and shells.
- Designing stable JSON schemas so “reflection → advice → summary” composes reliably.
- Making key management secure-by-default yet smooth in daily use.
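One way to make zero-egress enforceable rather than aspirational is to guard every outbound URL before a request is made. A minimal sketch, assuming requests go to Ollama's default localhost port (the `assert_loopback` helper is hypothetical):

```python
from urllib.parse import urlparse

# Hosts that never leave the machine.
LOOPBACK_HOSTS = {"localhost", "127.0.0.1", "::1"}


def assert_loopback(url: str) -> str:
    """Raise if a URL would leave the machine; return it unchanged otherwise."""
    host = urlparse(url).hostname
    if host not in LOOPBACK_HOSTS:
        raise RuntimeError(f"egress blocked: {host!r} is not a loopback host")
    return url


# Ollama serves its API on localhost:11434 by default.
assert_loopback("http://127.0.0.1:11434/api/generate")
```

A test suite can then assert that any non-loopback host raises, which is the kind of check an offline `test_no_network.py` run can verify.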
Accomplishments we’re proud of
- A fully offline pipeline with encryption, integrity checks, and strict network isolation.
- Structured outputs that drop straight into weekly reviews or stand-ups.
- A delightful CLI with rich UI and clear, actionable errors.
- Comprehensive tests and a clean developer experience.
What we learned
- Local-first changes user behavior: provable privacy → more honest, useful entries.
- Terminal traces + prose = a trustworthy timeline of work.
- Prompt contracts (schemas + validators) matter as much as model choice.
- DX (helpful errors, sane defaults) drives consistent reflection habits.
What’s next for InnerBoard-local
- Desktop/Streamlit UI on top of the encrypted vault.
- VS Code integration for in-context capture and summaries.
- Policy sandbox (MCP-style) and redact-on-export.
- Streaming outputs with SSE/WebSockets and backpressure.
- Privacy-preserving analytics (all local, opt-in).
- One-click installers (brew/winget) and model presets.
Demo (≤3 min)
- Record your work: `innerboard record` … `exit` (all input/output saved locally with timing).
- Automatic chunking: idle > 15 min or > 1000 lines triggers processing.
- Local inference: `gpt-oss:20b` via Ollama (localhost) generates an SRE (summary, successes, blockers, resources).
- One-command prep: `innerboard prep` (or `--show-sre`) outputs Team, Manager, and Recommendations.
- Private notes: `innerboard add "..."` is stored in an encrypted local SQLite vault and included in prep.
- Privacy proof: zero egress, loopback-only calls; `pytest tests/test_no_network.py` passes offline.
- Category: Best Local Agent (100% offline, encrypted, OSS-powered).
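The automatic chunking rule above (split on an idle gap over 15 minutes or once a chunk reaches 1000 lines) can be sketched as follows; `chunk_session` and the `(timestamp, line)` event shape are illustrative assumptions, not the project's actual internals:

```python
from datetime import datetime, timedelta

IDLE_GAP = timedelta(minutes=15)
MAX_LINES = 1000


def chunk_session(events, idle_gap=IDLE_GAP, max_lines=MAX_LINES):
    """Split (timestamp, line) events into chunks on idle gaps or size limits."""
    chunks, current, last_ts = [], [], None
    for ts, line in events:
        # Close the current chunk if the user went idle or it grew too large.
        if current and (ts - last_ts > idle_gap or len(current) >= max_lines):
            chunks.append(current)
            current = []
        current.append(line)
        last_ts = ts
    if current:
        chunks.append(current)
    return chunks


t0 = datetime(2024, 1, 1, 9, 0)
events = [
    (t0, "git status"),
    (t0 + timedelta(minutes=1), "pytest"),
    (t0 + timedelta(minutes=30), "git push"),  # 29-min gap -> new chunk
]
chunks = chunk_session(events)
```

Chunking on idle gaps keeps each chunk a coherent burst of work, which gives the local model cleaner input for summarization.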
Local Use
To use the software, follow the instructions at https://github.com/ramper-labs/InnerBoard-local/tree/main?tab=readme-ov-file#-getting-started
Repository
- Code: https://github.com/ramper-labs/InnerBoard-local
- License: Apache-2.0
- README includes model setup, testing instructions, and sample outputs.