# IncognitoAI

A fully private, 100% offline AI chat assistant that runs on your local machine using Ollama, with a choice of Streamlit or Flask interfaces.
Chat with your PDF, TXT, and Markdown files safely and locally.
## Features

- **100% Offline**: No data leaves your computer
- **RAG (Retrieval Augmented Generation)**: Chat with your documents
- **Fast & Efficient**: Uses the `llama3.2:1b` model
- **Persistent Memory**: Local storage via ChromaDB
- **Multiple Interfaces**: Streamlit (web) & Flask Cyberpunk (modern UI)
- **Cross-Platform**: Works on Windows, macOS, and Linux
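The RAG feature splits uploaded documents into chunks before they are embedded and stored. As an illustration only (this is a hypothetical helper, not the app's actual code), a minimal overlapping chunker might look like:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so retrieval can match passages."""
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # overlap keeps context across chunk boundaries
    return chunks

doc = ("word " * 300).strip()   # ~1500-character toy document
chunks = chunk_text(doc)
print(len(chunks))
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighboring chunk, which improves retrieval quality.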
## Prerequisites

- Python 3.8+
- [Ollama](https://ollama.com)
- Git (to clone the repository)
## Installation

1. **Clone the repository**

   ```bash
   git clone https://github.com/code-glitchers/IncognitoAI.git
   cd IncognitoAI
   ```

2. **Run setup** (choose your platform)

   - Windows: double-click `START_PRIVATEAI.bat`
   - Linux/macOS:

     ```bash
     cd linux && chmod +x setup.sh && ./setup.sh
     ```

3. **Start Ollama** (in a separate terminal)

   ```bash
   ollama serve
   ```

4. **Launch the app**

   - Streamlit version:

     ```bash
     streamlit run app.py
     ```

   - Flask Cyberpunk version (Linux):

     ```bash
     cd linux && chmod +x bot.sh && ./bot.sh
     ```

On Windows, the one-click installer `START_PRIVATEAI.bat` handles all of the above. On Linux/macOS, follow the Installation & Setup Guide in `linux/README.md`.
## Usage

### Streamlit App

- Open http://localhost:8501
- Upload PDF, TXT, or Markdown files
- Toggle RAG mode to search your documents, or leave it off for general questions
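When RAG mode is on, the app retrieves the chunks most similar to your question and feeds them to the model as context. Conceptually (using toy vectors in place of the real `all-minilm` embeddings stored in ChromaDB, and hypothetical names throughout):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for vectors the app stores in ChromaDB.
store = {
    "The cat sat on the mat.": [0.9, 0.1, 0.0],
    "Ollama runs models locally.": [0.1, 0.9, 0.2],
}

def retrieve(query_vec, k=1):
    """Return the k stored chunks most similar to the query vector."""
    ranked = sorted(store, key=lambda doc: cosine(query_vec, store[doc]), reverse=True)
    return ranked[:k]

context = retrieve([0.2, 0.8, 0.1])[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(context)
```

The retrieved chunk is prepended to the prompt, so the model answers from your documents instead of only its training data.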
### Flask Cyberpunk UI

- Open http://localhost:5000
- Dark-themed neon aesthetic
- Real-time streaming responses
- Toggle between RAG and general chat modes
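Streaming means the reply is sent to the browser token by token instead of all at once. A framework-agnostic sketch of that shape (the real app streams through Flask; `fake_model_tokens` is a stand-in for the model's output):

```python
def fake_model_tokens(prompt):
    """Stand-in for a streaming model response."""
    for token in ["Hello", ", ", "world", "!"]:
        yield token

def stream_reply(prompt):
    # A web framework would flush each chunk to the client as it arrives;
    # here we simply yield them to show the incremental shape.
    for token in fake_model_tokens(prompt):
        yield token

print("".join(stream_reply("hi")))  # the client sees each piece as it is produced
```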
## Project Structure

```
IncognitoAI/
├── app.py               # Streamlit application
├── requirements.txt     # Python dependencies
├── START_PRIVATEAI.bat  # Windows launcher
├── linux/
│   ├── bot.py           # Flask Cyberpunk app
│   ├── setup.sh         # Linux setup script
│   ├── start.sh         # Start Streamlit (Linux)
│   ├── bot.sh           # Start Flask app (Linux)
│   ├── templates/       # Flask HTML templates
│   ├── static/          # CSS and JavaScript
│   └── README.md        # Linux-specific guide
└── .chroma_db/          # Local vector database (auto-created)
```
## Models

- **LLM:** `llama3.2:1b`, a fast, efficient language model
- **Embeddings:** `all-minilm:latest`, a fast embedding model for RAG

Models are downloaded automatically on first run via Ollama.
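Both interfaces talk to the local Ollama server (default port 11434) over its HTTP API. A minimal sketch of such a request using only the standard library (the actual call requires `ollama serve` to be running, so it is left commented out):

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3.2:1b"):
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

req = build_generate_request("Why is the sky blue?")
print(req.full_url)

# With a running server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```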
## Contributing

We welcome contributions! Feel free to:
- Report bugs and issues
- Suggest new features
- Submit pull requests
- Improve documentation
## License

MIT License. Feel free to modify and distribute!
## Support

For questions or support, open an issue on GitHub.
