This project provides a ready-to-use Docker Compose setup for running the DeepSeek-R1 language model locally, along with a simple web-based user interface.
- Local LLM Inference: Run DeepSeek-R1 models on your own hardware using Ollama.
- Web UI: Interact with the model via a browser-based interface in the `web/` directory.
- Easy Model Management: Pull and manage model versions with Docker commands.
- Start the Ollama Service

  ```bash
  docker compose up -d
  ```
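To confirm the service came up, check the Compose status and logs before pulling models. This is a quick check, assuming the Compose service is named `ollama` (matching the container name used in the commands below):

```bash
# Show the state of the Compose services
docker compose ps

# Follow the Ollama service logs (Ctrl+C to stop)
docker compose logs -f ollama
```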
```bash
# Pull LLM
# Install DeepSeek-R1 1.5B and other LLMs
docker exec ollama ollama pull deepseek-r1:1.5b
docker exec ollama ollama pull deepseek-r1:7b
docker exec ollama ollama pull deepseek-coder:1.3b
docker exec ollama ollama pull llama3.2:3b
docker exec ollama ollama pull gemma2:9b
docker exec ollama ollama pull mistral:7b
```

Check NVIDIA:

```bash
docker exec -it ollama nvidia-smi
```

Check whether the GPU is doing the work (a quick API smoke test follows the table below):

```bash
docker exec -it ollama ollama ps
```

| Model | Size | Speed on an RTX 3060 |
|---|---|---|
| Llama 3.2 (3B) | ~2.0GB | Blazing Fast (Full GPU) |
| Mistral (7B) | ~4.1GB | Fast (Full GPU) |
| Llama 3.1 (8B) | ~4.7GB | Fast (Full GPU) |
| Gemma 2 (9B) | ~5.4GB | Good (Likely Full GPU) |
| Command R | 20GB+ | Slow (Mostly CPU/RAM) |
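Once a model is pulled, you can verify end-to-end inference with a direct call to the Ollama HTTP API from the host. This is a minimal smoke test, assuming the default port 11434 is published and `deepseek-r1:1.5b` has already been pulled:

```bash
# Send a single, non-streaming prompt to the Ollama API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```

The reply is a JSON object whose `response` field contains the generated text.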
- Pull the DeepSeek-R1 Model

  Choose your desired model size (e.g., `1.5b`):

  ```bash
  docker compose exec ollama ollama pull deepseek-r1:1.5b
  ```

- Access the Web UI
  - Custom UI: http://localhost:6001
  - Open WebUI: http://localhost:6002

  Open http://localhost:6002 in your browser to interact with the model.
- Add new models

  ```bash
  docker compose exec ollama ollama pull deepseek-r1:7b
  docker compose exec ollama ollama pull deepseek-coder:1.3b
  ```

  Inside the Ollama container:

  ```bash
  ollama pull phi3:mini
  ollama pull qwen3:1.7b
  ollama pull mistral
  ```
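With several models installed, disk usage adds up quickly (see the size column in the table above). The Ollama CLI can also list, remove, and chat with models; a short sketch, assuming the `ollama` service/container name used throughout this README:

```bash
# Show installed models and their sizes
docker compose exec ollama ollama list

# Remove a model you no longer need (example tag)
docker compose exec ollama ollama rm deepseek-coder:1.3b

# Chat with a model interactively from the terminal (/bye to exit)
docker compose exec ollama ollama run deepseek-r1:1.5b
```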
```
.
├── docker-compose.yml    # Docker Compose configuration for Ollama
├── ollama-models/        # Model storage (keys, manifests, blobs)
│   ├── id_ed25519
│   ├── id_ed25519.pub
│   └── models/
│       ├── blobs/        # Model weights and data
│       └── manifests/    # Model manifests
├── web/                  # Web UI files
│   ├── index.html
│   ├── ollama.js
│   ├── showdown.min.js
│   └── style.css
└── README.md             # Project documentation
```

- Model files are stored in `ollama-models/`. You can add or remove models as needed.
- The web UI is static and communicates with the Ollama backend.
- For advanced configuration, edit `docker-compose.yml`.
- Check that Ollama is running on http://localhost:11434
- Custom Web UI is running on http://localhost:6001
- Open Web UI is running on http://localhost:6002
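If either UI cannot reach the backend, query the Ollama API directly from the host and inspect the container logs; this assumes the default port mapping listed above and a container named `ollama`:

```bash
# Should return a JSON list of locally installed models
curl http://localhost:11434/api/tags

# Show the most recent backend log lines
docker logs --tail 50 ollama
```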
- Ollama Documentation
- DeepSeek-R1 Model Card
- https://dev.to/savvasstephnds/run-deepseek-locally-using-docker-2pdm
- https://platzi.com/blog/deepseek-r1-instalar-local/
- https://www.composerize.com/
