- Docker installed on your system
- Basic knowledge of Docker commands
Run the following command in the same directory as the Dockerfile:
```shell
docker build -t ai-chatbot .
```

To run Ollama with the LLaMA 3.2:1B model, execute:
```shell
docker run -d --name ollama -p 11434:11434 ollama/ollama:latest
```

If your machine has a GPU, use this instead:
```shell
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```

Pull and prepare the LLaMA 3.2:1B model:
```shell
docker exec -it ollama ollama pull llama3.2:1b
```

Once Ollama is running, start the chatbot container:
```shell
docker run -d --name ai-chatbot -p 8501:8501 ai-chatbot
```

Get the Container IP of the Ollama Container
Get the container ID:

```shell
docker ps
```

```
CONTAINER ID   IMAGE           COMMAND               CREATED         STATUS         PORTS                      NAMES
2cf4d51a43c9   ollama/ollama   "/bin/ollama serve"   2 minutes ago   Up 2 minutes   0.0.0.0:11434->11434/tcp   ollama
```
Get the IP:

On Windows:

```shell
docker inspect 2cf4d51a43c9 | findstr IPAddress
```

On Linux/macOS:

```shell
docker inspect 2cf4d51a43c9 | grep IPAddress
```
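Rather than scanning the raw `docker inspect` output by eye, you can parse its JSON programmatically. A minimal Python sketch, assuming the standard `docker inspect` JSON layout (the sample string below is illustrative, not real output):

```python
import json

def container_ip(inspect_output: str) -> str:
    """Extract the first network's IP from `docker inspect <id>` JSON output."""
    info = json.loads(inspect_output)[0]  # docker inspect returns a list of objects
    networks = info["NetworkSettings"]["Networks"]
    return next(iter(networks.values()))["IPAddress"]

# Illustrative sample of the relevant slice of `docker inspect` output
sample = '[{"NetworkSettings": {"Networks": {"bridge": {"IPAddress": "172.17.0.2"}}}}]'
print(container_ip(sample))  # → 172.17.0.2
```

Docker's own `--format` flag can do the same filtering directly, e.g. `docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ollama`.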
Open your browser and visit:
http://localhost:8501
Update the Backend URL

In the chatbot's configuration, point the backend URL at the Ollama container's IP found above, on port 11434.
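Once the chatbot knows Ollama's address, its backend talks to Ollama's HTTP API. A minimal sketch of building such a request with only the standard library, using Ollama's `/api/generate` endpoint (the container IP shown is a placeholder; substitute the one you found above):

```python
import json
import urllib.request

def build_generate_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder IP: replace with the address from `docker inspect`
req = build_generate_request("http://172.17.0.2:11434", "llama3.2:1b", "Hello!")
```

Sending the request with `urllib.request.urlopen(req)` should return a JSON body whose `response` field holds the model's reply when `stream` is `False`.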
To verify that the model is loaded and see which processor it is using:

For CPU only:

```shell
docker exec ollama ollama ps
```

```
NAME           ID              SIZE      PROCESSOR    UNTIL
llama3.2:1b    baf6a787fdff    2.2 GB    100% CPU     4 minutes from now
```

With GPU support:

```shell
docker exec ollama ollama ps
```

```
NAME           ID              SIZE      PROCESSOR    UNTIL
llama3.2:1b    baf6a787fdff    2.7 GB    100% GPU     4 minutes from now
```
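As a complementary check, you can confirm the Ollama server answers over HTTP before pointing the chatbot at it. A small sketch using only the standard library, assuming the default port mapping from the run commands above:

```python
import urllib.request
import urllib.error

def ollama_ready(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if the Ollama server responds on its /api/tags endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Running `curl http://localhost:11434/api/tags` from the host is an equivalent one-off check.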
To stop and remove all running containers:

```shell
docker stop ai-chatbot ollama && docker rm ai-chatbot ollama
```
