
Fireball - Your Personal AI Assistant

Welcome to Fireball, your very own Jarvis-inspired AI assistant! This project aims to create a powerful, extensible AI system that can understand and respond to your questions while gradually building up capabilities through tools and worker agents.

Project Overview

Fireball is designed to be a personal AI assistant that:

  • Connects to local LLMs for natural language understanding and generation
  • Responds to user questions in a conversational manner
  • Scales from simple question answering to complex multi-agent workflows
  • Provides a foundation for building sophisticated AI tools and systems

🧠 Local LLM Assistant with UV

A simple yet powerful local AI assistant that leverages a locally hosted LLM and integrates tools such as weather lookup, task management, and folder search. Built using Python, with package management handled via UV.

📌 Overview

This project allows you to interact with a local large language model (LLM) through a terminal-based interface. It supports tool integration for tasks like fetching current weather, managing tasks, or locating folders on your machine.

Features

  • ✅ Communicates with a local LLM via an OpenAI-compatible API (see the sketch below)
  • ✅ Tool support: get_weather, find_folder, add_task
  • ✅ Easy-to-use CLI interface with rich formatting
  • ✅ Uses UV for fast, modern package management
  • ✅ Supports multiple local LLM providers (Ollama, LM Studio, etc.)
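
As a minimal sketch of the first feature, assuming the openai Python package and an Ollama server on its default port (the model name is illustrative, and the actual client code in this repository may differ):

from openai import OpenAI

# Point the OpenAI client at a local, OpenAI-compatible server.
# The base_url assumes Ollama's default; adjust for your own server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="llama3",  # hypothetical model name; use one you have pulled
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)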

🛠️ Prerequisites

Before running this project, make sure you have:

  • A local LLM server running with an OpenAI-compatible API (a quick connectivity check is sketched after this list):
    • Ollama - Recommended for ease of use
    • LM Studio - GUI-based model management
    • Or any other OpenAI-compatible local LLM server
  • UV installed for package management
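
Before launching the assistant, you can confirm the server is reachable; a quick sketch using only the Python standard library (the URL assumes Ollama's default port, so swap in your own endpoint):

import urllib.request

# Any 200 response from the OpenAI-compatible models endpoint means
# the server is up. The URL assumes Ollama's default port.
try:
    with urllib.request.urlopen("http://localhost:11434/v1/models", timeout=5) as resp:
        print("LLM server is up:", resp.status)
except OSError as exc:
    print("LLM server not reachable:", exc)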

📦 Installation

  1. Clone or download this repository.
  2. Install dependencies using UV:

uv sync

This command installs all project dependencies listed in pyproject.toml.

  3. Ensure your local LLM server is running:

    • Ollama: ollama serve (default: http://localhost:11434)
    • LM Studio: start the server in LM Studio (default: http://127.0.0.1:1234)
  4. Update the API endpoint in your configuration, if needed, to match your LLM server (see the sketch below).
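
For step 4, the relevant settings typically look something like the sketch below; the exact names in config.py are assumptions here, so match them to what the file actually defines:

# config.py (illustrative values; adjust to your server and model)
BASE_URL = "http://localhost:11434/v1"   # Ollama default
# BASE_URL = "http://127.0.0.1:1234/v1"  # LM Studio default
MODEL = "llama3"        # hypothetical model name
API_KEY = "not-needed"  # local servers usually ignore the key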

▶️ Usage

Run the application using:

uv run main.py

Then, you can interact with the LLM by typing prompts directly into the terminal.

Example Tools

You can use the following tools in your prompts (a sketch of how a tool is exposed to the model follows the list):

  • get_weather: Get current weather for a location.
    • Example prompt: "What's the weather like in London?"
  • find_folder: Find folders matching a name.
    • Example prompt: "Find the devin folder"
  • add_task: Add a task to the database with various details.
    • Example prompt: "Add a task called task1 with priority high"

🧪 Development

To add new tools or modify existing ones (see the sketch after these steps):

  1. Add your function to tools.py.
  2. Register it in the ToolExecutor class under self.tools.
  3. Update the system prompt in config.py if needed.
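
As a sketch of those steps, assuming ToolExecutor keeps a name-to-callable mapping in self.tools (the real class holds more than shown here, and the new get_time tool is purely illustrative):

# tools.py -- step 1: define the new tool as a plain function.
from datetime import datetime, timezone

def get_time() -> str:
    """Return the current UTC time (an illustrative new tool)."""
    return datetime.now(timezone.utc).isoformat()

# Step 2: register it alongside the existing tools.
class ToolExecutor:
    def __init__(self):
        self.tools = {
            "get_time": get_time,  # existing tools omitted for brevity
        }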

🔧 Configuration

The application supports different local LLM providers through configuration:

  • Default configuration works with Ollama out of the box
  • Easily adaptable to LM Studio, vLLM, or other OpenAI-compatible servers
  • Modify API endpoints and model names in the configuration as needed (one possible pattern is sketched below)
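
One common pattern for switching providers without editing code is to read the endpoint from the environment; a sketch under that assumption (the variable names are illustrative, not part of this project):

import os
from openai import OpenAI

# Fall back to the Ollama default when no override is set.
client = OpenAI(
    base_url=os.environ.get("FIREBALL_BASE_URL", "http://localhost:11434/v1"),
    api_key=os.environ.get("FIREBALL_API_KEY", "not-needed"),
)

With this pattern, pointing the assistant at LM Studio is just a matter of setting FIREBALL_BASE_URL=http://127.0.0.1:1234/v1 before running uv run main.py.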

📄 License

This project is licensed under the GNU General Public License v3.0 (non-commercial / not for profit); see the LICENSE file for details.


Roadmap

Phase 1: Core Functionality

  • Basic LLM connection and response
  • Conversation history management
  • Enhanced response formatting

Phase 2: Tool Integration

  • API capabilities
  • File system operations
  • Database interactions


