A sleek chatbot interface built with Next.js 15, Tailwind CSS, Redux Toolkit, and shadcn/ui, supporting local AI inference via Ollama, with cloud-based inference (Cypher Alpha via OpenRouter) planned.
⚠️ This project currently supports local models only (Ollama).
Support for external API integration is planned but not yet released.
Automatically switches between light and dark mode based on your device's theme.
- 🔄 Local AI model inference via Ollama
- 🌓 Theme toggle: Light, Dark, or System
- 🧠 AI question suggestions
- 🆕 Start new chat
- ✅ Copy to clipboard support
- 🎯 Responsive design for desktop & mobile
- 🧩 Clean architecture using the Next.js `app/` directory
- Next.js 15
- TypeScript
- Tailwind CSS
- shadcn/ui
- Redux Toolkit
- Ollama (local LLM)
- OpenRouter (planned, not yet implemented)
```bash
# Clone this repo
git clone https://github.com/ellenoireQ/Chatbot.git

# Enter project folder
cd Chatbot

# Install dependencies
npm install

# Make sure you have Ollama running locally
ollama run <model>

# Start the dev server
npm run dev
```

This project supports local AI model inference using Ollama, allowing you to run Large Language Models (LLMs) directly on your machine without external APIs or internet access.
- OS: Linux, macOS, or Windows (WSL2)
- Architecture: x86_64 or ARM64
- No Docker required!
```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull and run a model, e.g. llama3.2:1b
ollama run <model>
```
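To confirm Ollama is reachable before starting the app, you can call its HTTP API directly. Below is a quick check, assuming the default port 11434 and the llama3.2:1b model from above (run with e.g. `npx tsx check-ollama.ts`):

```ts
// check-ollama.ts — hypothetical one-off script, not part of the repo.
// Sends a non-streaming prompt to the local Ollama server and prints the reply.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.2:1b", // change to your model
    prompt: "Say hello in one short sentence.",
    stream: false,
  }),
});

const data = await res.json();
console.log(data.response); // Ollama returns the generated text in `response`
```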
If you're installing a different model, update the hardcoded model name in the files below.

page.tsx, line 98:
```ts
//
// Handles the request when the user submits the input
//
const handleSubmit = async () => {
  setLoading(true);
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2:1b", // change to your model
      messages: [
        {
          content: prompt,
        },
      ],
    }),
  });
```
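The snippet above only issues the request. A minimal sketch of consuming the reply could look like the following, assuming the API route forwards Ollama's non-streaming JSON (which carries the generated text in a `response` field); `setMessages` here is a hypothetical state setter, not necessarily the repo's:

```ts
// Continuing handleSubmit — hypothetical response handling.
const data = await res.json();
setMessages((prev) => [...prev, { content: data.response }]); // setMessages is illustrative
setLoading(false);
```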
page.tsx, line 138:

```ts
//
// Generate only one question
//
const handleQuestion = async () => {
  const questionPrompt = `make one question like random question about tech or anything you want, only question like "question" no anything only question. one Question!!`;
  const openai = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2:1b", // change to your model
      messages: [
        {
          content: questionPrompt,
        },
      ],
    }),
  });
```
route.tsx, line 5:

```ts
const ollama = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "llama3.2:1b", // change to your model
    prompt: params,
    stream: false,
  }),
});
```
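For context, here is a minimal sketch of how that fetch could sit inside an App Router route handler; the request parsing and response shape are assumptions based on the client snippets above, not the repo's exact code:

```ts
// app/api/generate/route.tsx — a minimal sketch, assuming the client
// posts { messages: [{ content }] } as shown in page.tsx.
import { NextResponse } from "next/server";

export async function POST(req: Request) {
  const body = await req.json();
  const params = body.messages?.[0]?.content ?? "";

  // Forward the prompt to the local Ollama server (non-streaming)
  const ollama = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2:1b", // change to your model
      prompt: params,
      stream: false,
    }),
  });

  const data = await ollama.json();
  return NextResponse.json(data);
}
```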
- 🧠 Ollama – Local AI model runtime.
- 📦 Redux Toolkit – State management.
- 🌗 next-themes – Theme switching.
- 🧩 shadcn/ui – UI component library (built on Radix UI).
- ⚡ Lucide – Icon set used in the UI.
This project helped me explore:
- Integrating Ollama as a local inference engine.
- Managing app state using Redux Toolkit (see the sketch after this list).
- Dynamic UI rendering with shadcn/ui and Radix-based components.
- Handling theme switching with next-themes and Redux integration.
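For illustration, a minimal sketch of how chat state could be modeled with Redux Toolkit — the slice, field, and action names here are illustrative, not the repo's actual code:

```ts
// chatSlice.ts — hypothetical Redux Toolkit slice for the chat history.
import { createSlice, type PayloadAction } from "@reduxjs/toolkit";

interface Message {
  role: "user" | "assistant";
  content: string;
}

const chatSlice = createSlice({
  name: "chat",
  initialState: { messages: [] as Message[], loading: false },
  reducers: {
    // Append a user or assistant message to the conversation
    addMessage(state, action: PayloadAction<Message>) {
      state.messages.push(action.payload);
    },
    // Toggle the loading indicator while waiting for the model
    setLoading(state, action: PayloadAction<boolean>) {
      state.loading = action.payload;
    },
    // "Start new chat" resets the history
    newChat(state) {
      state.messages = [];
    },
  },
});

export const { addMessage, setLoading, newChat } = chatSlice.actions;
export default chatSlice.reducer;
```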

