Learn more about Ollama, which Vea uses to run LLMs locally, at https://ollama.com/
- Overview
- Features
- Prerequisites
- Installation
- Configuration
- Usage
- Project Structure
- Logging
- Limitations
- License
Vea is a local AI copilot that integrates seamlessly with your Ollama installation. It provides a modern, web-based interface for interacting with local AI models while adding capabilities such as web search, mathematical operations, weather information, and image analysis. You can switch to closed-source models like OpenAI and Anthropic by editing the configuration file at `backend/config/agent.yaml` or through the web-based configuration interface. Please complete the setup on the configuration page (accessible at http://localhost:3000/configure) before submitting your first query.
- Configurable Tools - Enable/disable web search, weather, and math tools based on your needs
- Web search capabilities powered by Tavily API
- Real-time weather information using OpenWeather API
- Mathematical calculations including arithmetic and trigonometric functions
- Multi-modal vision capabilities
- Context-aware responses with current date/time integration
- Modular architecture with LangGraph support and LangSmith observability
- Markdown rendering for responses, including syntax-highlighted code blocks
- Visible thinking traces when using a reasoning model
- Web-based configuration interface for model and tool selection
- Backend: FastAPI, LangGraph, LangSmith, and Ollama
- Frontend: React, TypeScript, TailwindCSS, and Vite
To install and run Vea, ensure you have the following:
- Tavily API key
- OpenWeather API key
- Ollama installed and running
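Before installing, you can confirm that Ollama is actually reachable. The snippet below queries Ollama's model-listing endpoint (`/api/tags` on port 11434, Ollama's defaults; adjust the URL if you changed them) and is only a convenience sketch, not part of Vea itself:

```python
import json
import urllib.request

OLLAMA_TAGS_URL = "http://localhost:11434/api/tags"  # Ollama's default model-listing endpoint

def parse_model_names(tags_response: dict) -> list[str]:
    """Extract model names from the JSON returned by /api/tags."""
    return [m["name"] for m in tags_response.get("models", [])]

def list_local_models(url: str = OLLAMA_TAGS_URL) -> list[str]:
    """Ask a running Ollama instance which models are available locally."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return parse_model_names(json.load(resp))
```

If the request fails, Ollama is likely not running; start it and pull the models referenced in your configuration before continuing.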
- Clone the repository:

```bash
git clone https://github.com/therealcyberlord/Vea.git
cd Vea
```

- Install Python dependencies using UV:

```bash
cd backend
uv sync
source .venv/bin/activate
```

- Install frontend dependencies:

```bash
cd frontend
npm install
```

Create a `.env` file in the `backend` directory with the following configuration:

```env
TAVILY_API_KEY=your_tavily_api_key
OPENWEATHER_API_KEY=your_openweather_api_key
LANGSMITH_API_KEY=your_langsmith_api_key
LANGSMITH_TRACE=true
```
You can obtain API keys from:
- Tavily API: https://tavily.com/
- OpenWeather API: https://openweathermap.org/api
- LangSmith: https://www.langchain.com/langsmith
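A quick way to catch a misconfigured `.env` before starting the backend is to check that the required variables are set. This is a minimal, hypothetical check (`missing_keys` is not part of Vea; the app may validate differently):

```python
import os

# Keys required by the web search and weather tools; LANGSMITH_API_KEY
# is only needed when tracing is enabled.
REQUIRED_KEYS = ["TAVILY_API_KEY", "OPENWEATHER_API_KEY"]

def missing_keys(env: dict[str, str], required: list[str] = REQUIRED_KEYS) -> list[str]:
    """Return the required keys that are absent or empty."""
    return [k for k in required if not env.get(k)]

if __name__ == "__main__":
    absent = missing_keys(dict(os.environ))
    if absent:
        print("Missing environment variables:", ", ".join(absent))
    else:
        print("All required API keys are set")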
Vea supports configurable tools that can be enabled or disabled based on your needs. The `backend/config/agent.yaml` file contains the following configuration:

```yaml
llm_config:
  tool_llm:
    name: qwen3:4b
    provider: ollama
    temperature: 0.3
  vision_llm:
    name: gemma3:12b
    provider: ollama
    temperature: 0.2

tools:
  web_search: false
  weather: true
  math: false
```

- `web_search`: Enable/disable web search capabilities (requires Tavily API key)
- `weather`: Enable/disable weather information lookup (requires OpenWeather API key)
- `math`: Enable/disable mathematical calculation tools
You can also configure your models through the web interface at http://localhost:3000/configure.
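As a sketch of how these boolean flags translate into an active tool set, assuming the file is parsed with a YAML library such as PyYAML (Vea's actual loader may differ):

```python
import yaml  # PyYAML; an assumption here, Vea's actual dependency may differ

SAMPLE_CONFIG = """
tools:
  web_search: false
  weather: true
  math: false
"""

def enabled_tools(config: dict) -> list[str]:
    """Names of the tools switched on in the `tools` section."""
    tools = config.get("tools", {})
    return [name for name, on in tools.items() if on]

config = yaml.safe_load(SAMPLE_CONFIG)
print(enabled_tools(config))  # with the sample above, only "weather" is enabled
```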
- Start the FastAPI backend server:

```bash
cd backend
uvicorn main:app --reload
```

- Start the frontend development server:

```bash
cd frontend
npm run dev
```

The application will be accessible at http://localhost:3000
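To sanity-check that the backend came up, you can fetch the OpenAPI schema that FastAPI serves at `/openapi.json` by default (uvicorn's default port is 8000; both the port and the schema path are framework defaults and may be configured differently in Vea):

```python
import json
import urllib.request

def api_title(schema: dict) -> str:
    """Pull the API title out of an OpenAPI schema document."""
    return schema["info"]["title"]

def fetch_openapi(base_url: str = "http://localhost:8000") -> dict:
    """Fetch the schema FastAPI auto-generates for a running app."""
    with urllib.request.urlopen(f"{base_url}/openapi.json", timeout=5) as resp:
        return json.load(resp)
```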
```
Vea/
├── backend/             # Backend implementation
│   ├── agent/           # Main AI agent
│   ├── tools/           # Tools for function-calling
│   ├── config/          # Configuration files
│   ├── utils/           # Utility functions
│   ├── models/          # Pydantic data models
│   └── .env             # Environment variables
└── frontend/            # Frontend application
    ├── src/             # TypeScript source code
    ├── public/          # Static assets
    └── vite.config.ts   # Vite configuration
```
Here's a look at Vea's chat interface:
The configuration page allows you to customize your models:
Vea supports image inputs:
It also supports coding examples with markdown:
You can view the AI's thinking process if you are using a reasoning model:
Vea implements comprehensive logging for both development and production environments:
- Console Output: INFO-level and above messages are displayed in the console
- File Logging: Detailed DEBUG-level logs are written to `app.log` with rotation
- Log Rotation: Log files are automatically rotated when they reach 10 MB, with up to 5 backup files
- Structured Format: All logs follow the format: `timestamp - logger_name - level - message`
The logging configuration can be customized by modifying backend/config/logging.conf.
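An equivalent setup in Python's standard `logging` module looks roughly like this (a sketch only; the authoritative values live in `backend/config/logging.conf`):

```python
import logging
from logging.handlers import RotatingFileHandler

# Matches the documented format: timestamp - logger_name - level - message
FMT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

def build_logger(name: str = "vea", logfile: str = "app.log") -> logging.Logger:
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)

    # Console: INFO and above
    console = logging.StreamHandler()
    console.setLevel(logging.INFO)
    console.setFormatter(logging.Formatter(FMT))

    # File: DEBUG and above, rotated at 10 MB with up to 5 backups
    file_handler = RotatingFileHandler(
        logfile, maxBytes=10 * 1024 * 1024, backupCount=5, delay=True
    )
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(logging.Formatter(FMT))

    logger.addHandler(console)
    logger.addHandler(file_handler)
    return logger
```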
- Conversation memory is held in-memory and is not persisted to an external database
- Vision capabilities depend on the underlying model's capabilities
- Temperature configuration is currently a work in progress
- Long-term memory handling could be improved
Feel free to contribute to Vea! Whether it's adding new features, improving existing ones, or fixing bugs, your contributions are welcome. Please feel free to submit a pull request or open an issue for any enhancements you'd like to see.
This project is licensed under the MIT License - see the LICENSE file for details.