
Vea - Local AI Copilot πŸ€–


Learn more about Ollama, a tool for deploying LLMs locally, at https://ollama.com/


Overview

Vea is a local AI copilot that seamlessly integrates with your Ollama installations. It provides a modern, web-based interface for interacting with local AI models while leveraging additional capabilities such as web search, mathematical operations, weather information, and image analysis. You can switch to closed-source models like OpenAI and Anthropic by editing the configuration file at backend/config/agent.yaml or through the web-based configuration interface. Please complete the setup on the configuration page (accessible at http://localhost:3000/configure) before submitting your first query.

Features

  • Configurable Tools - Enable/disable web search, weather, and math tools based on your needs
  • Web search capabilities powered by Tavily API
  • Real-time weather information using OpenWeather API
  • Mathematical calculations including arithmetic and trigonometric functions
  • Multi-modal vision capabilities
  • Context-aware responses with current date/time integration
  • Modular architecture with LangGraph support and LangSmith observability
  • Markdown rendering, including code blocks for programming languages
  • Visible thinking traces when using a reasoning model
  • Web-based configuration interface for model and tool selection
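The actual tool implementations live in backend/tools/. As a rough sketch of what an arithmetic/trigonometry tool might look like (the function name, signature, and whitelist are hypothetical, not the repo's code):

```python
import math

def calculate(expression: str) -> float:
    """Hypothetical math tool: evaluate an arithmetic/trig expression
    against a whitelisted set of math names instead of raw eval()."""
    allowed = {
        name: getattr(math, name)
        for name in ("sin", "cos", "tan", "sqrt", "log", "pi", "e")
    }
    # Empty __builtins__ so only the whitelisted math names resolve
    return float(eval(expression, {"__builtins__": {}}, allowed))

# calculate("sin(pi / 2) + sqrt(4)")  -> 3.0
```

A real function-calling tool would wrap this in the agent framework's tool interface and add input validation, but the core idea is restricting what the expression can reference.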

Tech Stack

  • Backend: FastAPI, LangGraph, LangSmith, and Ollama
  • Frontend: React, TypeScript, TailwindCSS, and Vite

Prerequisites

To install and run Vea, ensure you have the following:

  • Tavily API key
  • OpenWeather API key
  • Ollama installed and running

Installation

  1. Clone the repository:
git clone https://github.com/therealcyberlord/Vea.git
cd Vea
  2. Install Python dependencies using uv:
cd backend
uv sync
source .venv/bin/activate
  3. Install frontend dependencies:
cd frontend
npm install

Configuration

Create a .env file in the backend directory with the following configuration:

TAVILY_API_KEY=your_tavily_api_key
OPENWEATHER_API_KEY=your_openweather_api_key
LANGSMITH_API_KEY=your_langsmith_api_key
LANGSMITH_TRACE=true

You can obtain API keys from the Tavily, OpenWeather, and LangSmith websites.
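Before starting the backend, it can help to verify the required variables are actually set. A small sanity-check snippet (a hypothetical helper, not part of the repo):

```python
import os

REQUIRED_KEYS = ["TAVILY_API_KEY", "OPENWEATHER_API_KEY", "LANGSMITH_API_KEY"]

def missing_env_keys(env: dict) -> list:
    """Return the names of required keys that are absent or empty."""
    return [key for key in REQUIRED_KEYS if not env.get(key)]

# Example: check the current process environment
# missing = missing_env_keys(os.environ)
```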

Tools Configuration

Vea supports configurable tools that can be enabled or disabled based on your needs. The backend/config/agent.yaml file contains the following configuration:

llm_config:
  tool_llm:
    name: qwen3:4b
    provider: ollama
    temperature: 0.3
  vision_llm:
    name: gemma3:12b
    provider: ollama
    temperature: 0.2
tools:
  web_search: false
  weather: true
  math: false

  • web_search: Enable/disable web search capabilities (requires Tavily API key)
  • weather: Enable/disable weather information lookup (requires OpenWeather API key)
  • math: Enable/disable mathematical calculation tools

You can also configure your models through the web interface at http://localhost:3000/configure.
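The backend presumably reads this file at startup. A minimal sketch of parsing the tool toggles with PyYAML (illustrative only, not the repo's actual loader):

```python
import yaml  # PyYAML

AGENT_YAML = """
llm_config:
  tool_llm:
    name: qwen3:4b
    provider: ollama
    temperature: 0.3
tools:
  web_search: false
  weather: true
  math: false
"""

def enabled_tools(config_text: str) -> list:
    """Return the names of tools toggled on in agent.yaml."""
    config = yaml.safe_load(config_text)
    return [name for name, enabled in config["tools"].items() if enabled]

# With the config above, only the weather tool is enabled:
# enabled_tools(AGENT_YAML) -> ["weather"]
```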

Usage

  1. Start the FastAPI backend server:
cd backend
uvicorn main:app --reload
  2. Start the frontend development server:
cd frontend
npm run dev

The application will be accessible at http://localhost:3000

Project Structure

Vea/
β”œβ”€β”€ backend/           # Backend implementation
β”‚   β”œβ”€β”€ agent/         # Main AI agent
β”‚   β”œβ”€β”€ tools/         # Tools for function-calling
β”‚   β”œβ”€β”€ config/        # Configuration files
β”‚   β”œβ”€β”€ utils/         # Utility functions
β”‚   β”œβ”€β”€ models/        # Pydantic data models
β”‚   └── .env           # Environment variables
β”œβ”€β”€ frontend/          # Frontend application
β”‚   β”œβ”€β”€ src/           # TypeScript source code
β”‚   β”œβ”€β”€ public/        # Static assets
β”‚   └── vite.config.ts # Vite configuration

Screenshots

Here's a look at Vea's chat interface:

Landing Page

The configuration page allows you to customize your models:

Configuration Page

Vea supports image inputs:

Image Question

It also supports coding examples with markdown:

Coding Example

You can view the AI's thinking process if you are using a reasoning model:

Thinking Trace

Logging

Vea implements comprehensive logging for both development and production environments:

  • Console Output: INFO level and above messages are displayed in the console
  • File Logging: Detailed DEBUG level logs are written to app.log with rotation
  • Log Rotation: Log files are automatically rotated when they reach 10MB, with up to 5 backup files
  • Structured Format: All logs follow the format: timestamp - logger_name - level - message

The logging configuration can be customized by modifying backend/config/logging.conf.
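An equivalent programmatic setup in Python (an illustrative sketch matching the behavior described above, not the project's actual logging.conf):

```python
import logging
from logging.handlers import RotatingFileHandler

def build_logger(name: str, log_path: str = "app.log") -> logging.Logger:
    """Logger matching the described setup: INFO and above to the console,
    DEBUG and above to app.log with 10 MB rotation and 5 backups."""
    fmt = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")

    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    console.setFormatter(fmt)

    file_handler = RotatingFileHandler(
        log_path, maxBytes=10 * 1024 * 1024, backupCount=5
    )
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(fmt)

    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(console)
    logger.addHandler(file_handler)
    return logger
```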

Limitations

  • Conversation memory is held in-memory and is not persisted to an external database
  • Vision capabilities depend on the underlying model
  • Temperature configuration is currently a work in progress
  • Long-term memory handling could be improved

Contributing

Feel free to contribute to Vea! Whether it's adding new features, improving existing ones, or fixing bugs, your contributions are welcome. Please feel free to submit a pull request or open an issue for any enhancements you'd like to see.

License

This project is licensed under the MIT License - see the LICENSE file for details.
