Council - AI Software Organization Team Simulator

A local application where an AI-powered software organization team analyzes codebases and project questions. The Chief Product Officer (human user) delegates tasks to specialized AI team members who provide expert analysis from their domains, peer-review each other's findings, and deliver a synthesized verdict.

Core Concept: You are the CPO. Your AI team includes System Architects, Security Engineers, Backend Specialists, UI/UX Designers, DevOps Engineers, and more - all working together to analyze your projects.

BYOK (Bring Your Own Keys): Supports OpenAI, Gemini, Anthropic, DeepSeek, and local Ollama models.


Legal Notice & Usage Terms

License: PolyForm Noncommercial License 1.0.0

This software is free for personal, educational, and non-commercial use.

Permitted:

  • Personal projects and learning
  • Internal business analysis (non-resale)
  • Open-source contributions (non-commercial)
  • Academic research

NOT Permitted:

  • Commercial SaaS deployments
  • Selling services powered by this code
  • Reselling or redistributing for profit
  • Building commercial products without license

For commercial use, contact the author: bt1phillip@gmail.com

See LICENSE for full legal terms.


Quick Start (Docker - One Command)

Prerequisites

  • Docker Desktop installed and running
  • At least one LLM API key (or Ollama for local models)

1. Clone and Configure

git clone <repository-url>
cd Council

# Copy environment template
cp .env.example .env

# Edit .env and add your API key(s)
# At minimum, add ONE of these:
#   OPENAI_API_KEY=sk-...
#   GEMINI_API_KEY=...
#   Or use Ollama for free local models

2. Run with Docker

# Production mode (recommended)
docker compose up -d

# Wait for services to start, then open:
# Frontend: http://localhost:3000
# API Docs: http://localhost:8000/docs

3. Using Local LLMs (Ollama)

# Option A: Use your existing Ollama installation
# Set in .env: OLLAMA_BASE_URL=http://host.docker.internal:11434

# Option B: Run Ollama in Docker alongside Council
docker compose --profile ollama up -d

# Pull a model (run once)
docker exec council_ollama ollama pull llama3.2

# Set in .env: DEFAULT_PROVIDER=ollama
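
Once the model is pulled, you can sanity-check that Ollama answers requests. This snippet talks to Ollama's standard /api/generate endpoint directly, independent of Council, and assumes the requests package is installed:

import requests

# Ask the local Ollama server for a one-off completion; stream=False returns a single JSON object.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3.2", "prompt": "Reply with OK.", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's reply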

4. Development Mode (Hot Reload)

# Backend and frontend with hot-reload
docker compose --profile dev up

# Frontend: http://localhost:5173
# Backend:  http://localhost:8000

Supported LLM Providers

Provider    Models                             Cost                  Setup
Gemini      gemini-2.5-pro, gemini-2.5-flash   Free tier available   Get API Key
OpenAI      gpt-4o, gpt-4o-mini                Pay-per-use           Get API Key
Anthropic   claude-sonnet-4, claude-3-haiku    Pay-per-use           Get API Key
DeepSeek    deepseek-chat                      Budget-friendly       Get API Key
Ollama      llama3.2, mistral, qwen2.5         Free (local)          Install Ollama

Features

Auto-Critique Loop (Fact-Checking)

Council includes an optional fact-checking step that verifies panel analyses before the final verdict:

# Add a search API key to enable live fact-checking
TAVILY_API_KEY=tvly-...  # or
SERPER_API_KEY=...

# Without a search key, Council uses internal logic checks
# (cross-references analyses for consistency)
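
The fallback can be pictured roughly as follows. This is a hypothetical illustration of the behavior described above, not Council's actual fact-checker code:

import os

def choose_fact_check_mode() -> str:
    # If a search key is configured, claims are verified against live search results;
    # otherwise Council falls back to cross-referencing the panel analyses.
    if os.getenv("TAVILY_API_KEY") or os.getenv("SERPER_API_KEY"):
        return "web_search"
    return "internal_consistency"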

Optimized Prompts

Prompts are designed for efficiency across all providers, including local Ollama models:

  • Minimal token usage
  • Structured JSON outputs (sketched below)
  • Ollama-specific compact mode for smaller context windows
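
For illustration, a panel member's structured output might look like the JSON below. The field names here are hypothetical; the actual output contracts are defined by the prompts in backend/prompts/ and docs/prompt_specs.md:

{
  "role": "Security Engineer",
  "summary": "JWT secret is hard-coded in the backend settings module",
  "findings": [
    {
      "severity": "high",
      "area": "secrets management",
      "recommendation": "Load secrets from environment variables instead of source code"
    }
  ],
  "confidence": 0.8
}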

Screenshots

Dashboard - Start Your Council Session

Dashboard

In-Session - Real-Time Panel Analysis

In-Session Analysis

Judge Review - Final Verdict & Follow-Up Chat

Judge Review


Project Status

Phase 1: Foundation & Backend Core - COMPLETE (190 tests passing, 96.44% coverage)

  • Docker environment configured
  • Backend skeleton with FastAPI
  • LLM Factory (OpenAI, Gemini, Anthropic, DeepSeek, Ollama)
  • Prompt engineering system
  • Orchestration engine (5-phase workflow)
  • Auto-Critique fact-checking loop
  • Database integration (SQLAlchemy + SQLite)

Phase 2: Frontend & Polish - IN PROGRESS

  • React frontend with session management
  • Real-time workflow progress display
  • Codebase upload and analysis UI

Prerequisites

  • Docker Desktop (recommended) OR
  • Python 3.11+ and Node.js 20+ (manual setup)
  • At least one LLM API key (OpenAI, Gemini, etc.) OR Ollama installed

Manual Setup (Without Docker)

1. Backend Setup

cd backend

# Create virtual environment
python -m venv .venv

# Activate (PowerShell)
.venv\Scripts\Activate.ps1

# Install dependencies
pip install -r requirements.txt

# Configure environment
cp ..\.env.example ..\.env
# Edit .env with your API keys

# Run backend
uvicorn main:app --reload --port 8000

2. Frontend Setup

cd frontend

# Install dependencies
npm install

# Run dev server
npm run dev

3. Ollama Setup (Local LLMs)

# Install Ollama from https://ollama.ai/

# Pull a model
ollama pull llama3.2

# Ollama runs automatically on port 11434
# Set DEFAULT_PROVIDER=ollama in .env

API Examples

Analyze a Codebase

# Example: Analyze a project for security vulnerabilities
curl -X POST http://localhost:8000/api/analyze-codebase \
  -H "Content-Type: application/json" \
  -d '{
    "query": "Perform a security audit of this codebase",
    "codebase_path": "C:\\Projects\\my-app",
    "focus_areas": ["authentication", "input validation", "secrets management"]
  }'

What happens:

  1. The system scans your project directory (respects .gitignore)
  2. It assigns relevant specialists: Security Engineer, DevOps Engineer, Backend Specialist, System Architect
  3. Each team member analyzes the codebase from their area of expertise
  4. Team members peer-review each other's findings
  5. The CPO (Judge LLM) synthesizes the insights into an actionable directive

Supported File Types: .py, .js, .ts, .jsx, .tsx, .java, .cs, .cpp, .go, .rb, .php, .md, .txt
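
The same request can be made from Python, for example from a script or notebook. A minimal sketch, assuming the requests package and the endpoint and payload shown in the curl example above:

import requests

payload = {
    "query": "Perform a security audit of this codebase",
    "codebase_path": "C:\\Projects\\my-app",
    "focus_areas": ["authentication", "input validation", "secrets management"],
}

# Long-running: the council workflow makes multiple LLM calls before responding.
resp = requests.post("http://localhost:8000/api/analyze-codebase", json=payload, timeout=600)
resp.raise_for_status()
print(resp.json())  # synthesized verdict plus per-specialist analyses (exact shape may vary)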

Docker Deployment Options

# Production (frontend + backend)
docker compose up -d

# Development with hot-reload
docker compose --profile dev up

# With bundled Ollama (local LLM)
docker compose --profile ollama up -d

# All services including Redis cache
docker compose --profile cache up -d

# Stop all services
docker compose down

AI Team Structure

Your organization includes 20 specialized roles across 5 units (see roles.md):

Unit                   Roles
Strategy & Vision      Business Analyst, Data Scientist, Legal Counsel
Technical Powerhouse   System Architect, Tech Lead, Backend/Frontend Specialist, Mobile Developer
Design & Experience    UI/UX Designer, User Researcher, Copywriter
Quality & Reliability  QA Lead, Security Engineer, DevOps Engineer
Growth & Revenue       CMO, Growth Hacker, SEO Specialist, Content Strategist

Smart Team Selection: The system assigns 4-8 relevant specialists per analysis, based on the query type (illustrated below).
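
As a rough illustration of query-driven selection (a hypothetical mapping, not the actual selection logic; the full role list is in roles.md):

# Hypothetical sketch of query-driven team selection.
SPECIALISTS_BY_TOPIC = {
    "security": ["Security Engineer", "DevOps Engineer", "Backend Specialist", "System Architect"],
    "design": ["UI/UX Designer", "User Researcher", "Copywriter"],
    "growth": ["CMO", "Growth Hacker", "SEO Specialist", "Content Strategist"],
}

def assemble_team(query: str, max_size: int = 8) -> list[str]:
    # Pick specialists whose topic keyword appears in the query; fall back to a generic core team.
    team = [role for topic, roles in SPECIALISTS_BY_TOPIC.items() if topic in query.lower() for role in roles]
    return (team or ["System Architect", "Tech Lead", "Backend Specialist", "QA Lead"])[:max_size]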

Architecture

[React Frontend] <--(HTTP/JSON)--> [FastAPI Backend] <--(API)--> [LLM Providers]
      :3000                              :8000                    OpenAI/Gemini/Ollama
                                          |
                                     [SQLite DB]

Workflow Phases

  1. Analysis: Assemble optimal team based on query type
  2. Generation: Parallel LLM calls with specialized roles
  3. Anonymization: Strip identifiers for unbiased review
  4. Peer Review: Cross-examination of proposals
  5. Fact-Check (Optional): Verify claims with search or logic checks
  6. Judgment: CPO synthesizes the final directive (sketched below)
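
A compressed, self-contained sketch of how the phases chain together. Each phase here is stubbed with plain strings so the data flow is visible; in Council the generation, review, and judgment steps are LLM calls handled by the orchestration engine in backend/services/:

def run_session(query: str, team: list[str], fact_check: bool = False) -> str:
    # 1. Analysis happened upstream: `team` is the set of specialists chosen for this query.
    analyses = {role: f"[{role}] findings on '{query}'" for role in team}                # 2. Generation
    anonymized = {f"Panelist {i}": text for i, text in enumerate(analyses.values(), 1)}  # 3. Anonymization
    reviews = {name: f"peer review of {name}" for name in anonymized}                    # 4. Peer Review
    if fact_check:                                                                       # 5. Fact-Check (optional)
        reviews = {name: review + " (verified)" for name, review in reviews.items()}
    return f"Directive on '{query}' from {len(analyses)} analyses and {len(reviews)} reviews"  # 6. Judgment

print(run_session("Perform a security audit", ["Security Engineer", "System Architect", "QA Lead"]))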

Project Structure

Council/
├── backend/                # Python FastAPI backend
│   ├── services/          # LLM providers, orchestration, fact-checker
│   ├── prompts/           # System prompts (YAML)
│   ├── models/            # Database models
│   ├── tests/             # Test suite (190+ tests)
│   └── main.py            # FastAPI application
├── frontend/              # React + TypeScript + Vite
├── data/                  # SQLite database storage
├── docs/                  # Documentation
│   ├── _rules/           # Engineering standards
│   └── prompt_specs.md   # Prompt engineering guide
├── docker-compose.yml    # Container orchestration
└── .env.example          # Environment template

Testing

# Run all tests
pytest

# Run with coverage report
pytest --cov

# Run specific test file
pytest tests/test_main.py

Documentation

Additional documentation lives in the docs/ directory: engineering standards under docs/_rules/ and the prompt engineering guide in docs/prompt_specs.md.

Security

  • API keys stored in .env only (never committed)
  • File upload validation (10MB max, allowed extensions)
  • Filename sanitization prevents path traversal (see the sketch below)
  • Local-only deployment (no external exposure required)
  • Secrets never logged or exposed to frontend
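
For example, path-traversal-safe filename handling can be done along these lines (a generic sketch, not necessarily Council's exact implementation; the upload directory and extension list here are illustrative):

from pathlib import Path

UPLOAD_DIR = Path("data/uploads")                          # illustrative upload location
ALLOWED_EXTENSIONS = {".py", ".js", ".ts", ".md", ".txt"}  # subset of the supported file types

def safe_upload_path(filename: str) -> Path:
    # Keep only the basename so inputs like "../../etc/passwd" cannot escape the upload directory.
    name = Path(filename).name
    if Path(name).suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Extension not allowed: {name}")
    target = (UPLOAD_DIR / name).resolve()
    if UPLOAD_DIR.resolve() not in target.parents:
        raise ValueError("Path escapes the upload directory")
    return target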

Troubleshooting

Docker Issues

# Rebuild containers
docker compose down
docker compose build --no-cache
docker compose up -d

# View logs
docker compose logs -f backend

Ollama Connection Issues

# Check if Ollama is running
curl http://localhost:11434/api/tags

# If using Docker, ensure correct URL in .env
# For Docker Desktop on Windows/Mac:
OLLAMA_BASE_URL=http://host.docker.internal:11434

Python Environment Issues

# Recreate virtual environment
rm -rf .venv
python -m venv .venv
source .venv/bin/activate  # or .venv\Scripts\Activate.ps1 on Windows
pip install -r requirements.txt

License

PolyForm Noncommercial License 1.0.0 - see LICENSE and the Legal Notice section above.

Contributors

Built with the Council AI team.
