A lightweight, privacy-friendly chatbot powered by local LLMs using Ollama and Streamlit.
Runs entirely offline, directly on your Mac: no API keys, no cost, full control.
This project demonstrates how to build a fully local AI assistant that uses open-source Large Language Models.
The chatbot is built with Python, Streamlit, and Ollama, providing a real-time conversational UI similar to ChatGPT, but everything runs on your machine.
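The whole idea fits in a few lines: keep the chat history in a list and resend it to Ollama with each turn. Below is a minimal command-line sketch of that flow, using only the standard library against Ollama's default local `/api/chat` endpoint (the `llama3` model name and the `ask`/`build_payload` helper names are illustrative, not from this repo; the Streamlit app wraps the same logic in a web UI).

```python
import json
import urllib.request

# Ollama's default local endpoint; no internet access is needed
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_payload(model: str, messages: list) -> bytes:
    """Encode a chat request body for Ollama's /api/chat endpoint."""
    return json.dumps(
        {"model": model, "messages": messages, "stream": False}
    ).encode("utf-8")

def ask(model: str, messages: list) -> str:
    """Send the full chat history to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, messages),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["message"]["content"]

def main():
    """Tiny REPL: type a message, get a reply from the local model."""
    history = []  # resending this list each turn is what gives the bot "memory"
    while True:
        user = input("you> ")
        history.append({"role": "user", "content": user})
        reply = ask("llama3", history)
        history.append({"role": "assistant", "content": reply})
        print("bot>", reply)

# main()  # uncomment to chat (requires `ollama serve` and a pulled model)
```

Because the history list travels with every request, the model sees the whole conversation each turn, which is all "session memory" means here.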
- 🧩 Local inference with `llama3`, `mistral`, or any Ollama-compatible model
- ⚡ Instant setup: no API keys, no billing, no internet required
- 🧠 Conversational memory within the chat session
- 💻 Built with Streamlit for a clean web interface
- 🧰 Extensible architecture: easy to integrate new models or APIs later
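Conversational memory here is just the growing message list resent with every request, so long sessions can eventually overflow the model's context window. A hypothetical trimming helper (sketched for illustration, not part of the project) shows one way to cap it:

```python
def trim_history(messages: list, max_turns: int = 10) -> list:
    """Keep at most one leading system prompt plus the last `max_turns`
    user/assistant exchanges, so the prompt stays inside the context window."""
    system = [m for m in messages if m["role"] == "system"][:1]
    chat = [m for m in messages if m["role"] != "system"]
    # each turn is one user message plus one assistant message
    return system + chat[-2 * max_turns:]
```

Calling this before each request keeps the prompt bounded while preserving the system instructions and the most recent context.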
Still in development...