
🤖 Chatbot AI - Local & Cloud AI Chat UI

A sleek chatbot interface built with Next.js 15, Tailwind CSS, Redux Toolkit, and shadcn/ui. It supports local AI inference with Ollama; cloud-based inference (Cypher Alpha via OpenRouter) is planned.

⚠️ This project currently supports local models only (via Ollama).
Support for external API integration is planned but not yet released.

Screenshots

🌞 Light Mode

🌙 Dark Mode

⚙️ System Mode

Automatically switches between light and dark based on your device’s theme.


✨ Features

  • 🔄 Local AI models via Ollama
  • 🌓 Theme toggle: Light, Dark, or System
  • 🧠 AI question suggestions
  • 🆕 Start new chat
  • 📋 Copy-to-clipboard support
  • 🎯 Responsive design for desktop & mobile
  • 🧩 Clean architecture using app/ directory

🧪 Tech Stack

  • Next.js 15 (app/ directory, App Router)
  • Tailwind CSS
  • Redux Toolkit
  • shadcn/ui (Radix-based components)
  • next-themes
  • Ollama (local LLM inference)

📦 Installation

# Clone this repo
git clone https://github.com/ellenoireQ/Chatbot.git

# Enter project folder
cd Chatbot

# Install dependencies
npm install

# Make sure you have Ollama running locally
ollama run <model>   # e.g. llama3.2:1b

# Start the dev server (http://localhost:3000 by default)
npm run dev

🧠 Setting Up Ollama (Local LLM)

This project supports local AI model inference using Ollama, allowing you to run Large Language Models (LLMs) directly on your machine without external APIs or internet access.


⚙️ Requirements

  • OS: Linux, macOS, or Windows (WSL2)
  • Architecture: x86_64 or ARM64
  • No Docker required!

📥 Installation

On Linux / macOS

curl -fsSL https://ollama.com/install.sh | sh

📦 Running

ollama run <model>   # e.g. llama3.2:1b
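
Before starting the web app, it can help to confirm the Ollama server is reachable. A minimal TypeScript check, assuming Ollama's default port 11434 (its /api/tags endpoint lists every model you have pulled):

  // Connectivity check against a locally running Ollama server.
  // GET /api/tags returns { models: [{ name, ... }] } for all pulled models.
  async function checkOllama(): Promise<void> {
    const res = await fetch("http://localhost:11434/api/tags");
    if (!res.ok) throw new Error(`Ollama not reachable: HTTP ${res.status}`);
    const { models } = await res.json();
    console.log("Installed models:", models.map((m: { name: string }) => m.name));
  }

  checkOllama().catch(console.error);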

If you installed a different model, update the hard-coded model name in the following files:

page.tsx: line number 98

  //
  //  Handle the user submitting the input
  //
  const handleSubmit = async () => {
    setLoading(true);
    const res = await fetch("/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3.2:1b", // change this to your model
        messages: [
          {
            content: prompt,
          },
        ],
      }),
    });
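
This excerpt ends right after the fetch call (as does the next one, where the variable is named openai). A sketch of how the response might then be consumed, assuming the API route forwards Ollama's non-streaming JSON, whose generated text lives in a response field; setMessages is a hypothetical state setter standing in for whatever this component actually uses:

  // Hypothetical continuation: read the route's JSON and store the reply.
  // Assumes route.tsx returns Ollama's { response: string } shape unchanged.
  const data = await res.json();
  setMessages((prev) => [...prev, { role: "assistant", content: data.response }]);
  setLoading(false);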
page.tsx: line number 138

  //
  //  Generate a single suggested question
  //
  const handleQuestion = async () => {
    const questionPrompt = `make one question like random question about tech or anything you want, only question like "question" no anything only question. one Question!!`;
    const openai = await fetch("/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3.2:1b", // change this to your model
        messages: [
          {
            content: questionPrompt,
          },
        ],
      }),
    });
route.tsx: line number 5

  const ollama = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3.2:1b", // change this to your model
      prompt: params,
      stream: false,
    }),
  });
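
For orientation, here is a sketch of what the full handler around that snippet could look like, assuming a Next.js App Router route at app/api/generate/route.tsx and that the first message's content is used as the prompt (the repo's actual body parsing may differ):

  import { NextResponse } from "next/server";

  export async function POST(req: Request) {
    // Field names here are assumptions based on the client snippets above.
    const body = await req.json();
    const params = body.messages?.[0]?.content ?? "";

    const ollama = await fetch("http://localhost:11434/api/generate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "llama3.2:1b", // change this to your model
        prompt: params,
        stream: false,
      }),
    });

    // With stream: false, Ollama returns a single JSON object whose
    // "response" field holds the generated text.
    const data = await ollama.json();
    return NextResponse.json(data);
  }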

📚 Docs Reference

  • Ollama: https://ollama.com
  • Next.js: https://nextjs.org/docs
  • Tailwind CSS: https://tailwindcss.com/docs
  • Redux Toolkit: https://redux-toolkit.js.org
  • shadcn/ui: https://ui.shadcn.com

🙌 What I’ve Learned

This project helped me explore:

  • Integrating Ollama as a local inference engine.
  • Managing app state using Redux Toolkit.
  • Dynamic UI rendering with shadcn/ui & Radix-based components.
  • Handling theme switching with next-themes and Redux integration (see the sketch below).
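
For the next-themes side of that integration, a minimal toggle sketch (the Redux wiring this repo adds is omitted here):

  "use client";
  import { useTheme } from "next-themes";

  // next-themes exposes the active theme and a setter; setTheme("system")
  // would follow the OS preference, matching the System mode shown above.
  export function ThemeToggle() {
    const { theme, setTheme } = useTheme();
    return (
      <button onClick={() => setTheme(theme === "dark" ? "light" : "dark")}>
        Switch to {theme === "dark" ? "light" : "dark"} mode
      </button>
    );
  }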
