A local application in which an AI-powered software organization analyzes codebases and project questions. The Chief Product Officer (the human user) delegates tasks to specialized AI team members, who provide expert analysis from their respective domains, peer-review each other's findings, and deliver a synthesized verdict.
Core Concept: You are the CPO. Your AI team includes System Architects, Security Engineers, Backend Specialists, UI/UX Designers, DevOps Engineers, and more - all working together to analyze your projects.
BYOK (Bring Your Own Keys): Supports OpenAI, Gemini, Anthropic, DeepSeek, and local Ollama models.
License: PolyForm Noncommercial License 1.0.0
This software is free for personal, educational, and non-commercial use.
Permitted:
- Personal projects and learning
- Internal business analysis (non-resale)
- Open-source contributions (non-commercial)
- Academic research
NOT Permitted:
- Commercial SaaS deployments
- Selling services powered by this code
- Reselling or redistributing for profit
- Building commercial products without license
For commercial use, contact the author: bt1phillip@gmail.com
See LICENSE for full legal terms.
- Docker Desktop installed and running
- At least one LLM API key (or Ollama for local models)
git clone <repository-url>
cd Council
# Copy environment template
cp .env.example .env
# Edit .env and add your API key(s)
# At minimum, add ONE of these:
# OPENAI_API_KEY=sk-...
# GEMINI_API_KEY=...
# Or use Ollama for free local models

# Production mode (recommended)
docker compose up -d
# Wait for services to start, then open:
# Frontend: http://localhost:3000
# API Docs: http://localhost:8000/docs

# Option A: Use your existing Ollama installation
# Set in .env: OLLAMA_BASE_URL=http://host.docker.internal:11434
# Option B: Run Ollama in Docker alongside Council
docker compose --profile ollama up -d
# Pull a model (run once)
docker exec council_ollama ollama pull llama3.2
# Set in .env: DEFAULT_PROVIDER=ollama

# Backend and frontend with hot-reload
docker compose --profile dev up
# Frontend: http://localhost:5173
# Backend: http://localhost:8000

| Provider | Models | Cost | Setup |
|---|---|---|---|
| Gemini | gemini-2.5-pro, gemini-2.5-flash | Free tier available | Get API Key |
| OpenAI | gpt-4o, gpt-4o-mini | Pay-per-use | Get API Key |
| Anthropic | claude-sonnet-4, claude-3-haiku | Pay-per-use | Get API Key |
| DeepSeek | deepseek-chat | Budget-friendly | Get API Key |
| Ollama | llama3.2, mistral, qwen2.5 | Free (local) | Install Ollama |
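For example, a minimal .env that uses Gemini's free tier by default could look like the lines below. This is a sketch: the variable names come from the examples in this README, but the exact provider values accepted by DEFAULT_PROVIDER (e.g. gemini) are an assumption; check .env.example for the definitive list.

DEFAULT_PROVIDER=gemini
GEMINI_API_KEY=...
# Optional: switch to a local Ollama instead
# DEFAULT_PROVIDER=ollama
# OLLAMA_BASE_URL=http://host.docker.internal:11434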
Council includes an optional fact-checking step that verifies panel analyses before the final verdict:
# Add a search API key to enable live fact-checking
TAVILY_API_KEY=tvly-... # or
SERPER_API_KEY=...
# Without a search key, Council uses internal logic checks
# (cross-references analyses for consistency)

Prompts are designed for efficiency across all providers, including local Ollama models:
- Minimal token usage
- Structured JSON outputs
- Ollama-specific compact mode for smaller context windows
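As an illustration only, a compact prompt that requests structured JSON for a small-context local model might look like the snippet below. The constant name and JSON fields are hypothetical; the actual prompt templates live as YAML under backend/prompts/.

# Hypothetical compact prompt for small-context local models.
# The real templates are defined in backend/prompts/*.yaml.
COMPACT_REVIEW_PROMPT = (
    "You are the {role}. Analyze the input and reply with JSON only: "
    '{{"findings": ["..."], "risks": ["..."], "recommendation": "..."}}'
)

print(COMPACT_REVIEW_PROMPT.format(role="Security Engineer"))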
Phase 1: Foundation & Backend Core - COMPLETE (190 tests passing, 96.44% coverage)
- Docker environment configured
- Backend skeleton with FastAPI
- LLM Factory (OpenAI, Gemini, Anthropic, DeepSeek, Ollama)
- Prompt engineering system
- Orchestration engine (5-phase workflow)
- Auto-Critique fact-checking loop
- Database integration (SQLAlchemy + SQLite)
Phase 2: Frontend & Polish - IN PROGRESS
- React frontend with session management
- Real-time workflow progress display
- Codebase upload and analysis UI
- Docker Desktop (recommended) OR
- Python 3.11+ and Node.js 20+ (manual setup)
- At least one LLM API key (OpenAI, Gemini, etc.) OR Ollama installed
cd backend
# Create virtual environment
python -m venv .venv
# Activate (PowerShell)
.venv\Scripts\Activate.ps1
# Install dependencies
pip install -r requirements.txt
# Configure environment
cp ..\.env.example ..\.env
# Edit .env with your API keys
# Run backend
uvicorn main:app --reload --port 8000

cd frontend
# Install dependencies
npm install
# Run dev server
npm run dev

# Install Ollama from https://ollama.ai/
# Pull a model
ollama pull llama3.2
# Ollama runs automatically on port 11434
# Set DEFAULT_PROVIDER=ollama in .env

# Example: Analyze a project for security vulnerabilities
curl -X POST http://localhost:8000/api/analyze-codebase \
-H "Content-Type: application/json" \
-d '{
"query": "Perform a security audit of this codebase",
"codebase_path": "C:\\Projects\\my-app",
"focus_areas": ["authentication", "input validation", "secrets management"]
}'

What happens:
- System scans your project directory (respects .gitignore)
- Assigns relevant specialists: Security Engineer, DevOps Engineer, Backend Specialist, System Architect
- Each team member analyzes the codebase from their area of expertise
- Team members peer-review each other's findings
- CPO (Judge LLM) synthesizes insights into an actionable directive
Supported File Types: .py, .js, .ts, .jsx, .tsx, .java, .cs, .cpp, .go, .rb, .php, .md, .txt
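The same request can be made from Python. A minimal sketch: the endpoint and payload fields are taken from the curl example above, while the shape of the JSON response is an assumption.

# Sketch: call the analyze-codebase endpoint from Python.
# Requires: pip install requests
import requests

payload = {
    "query": "Perform a security audit of this codebase",
    "codebase_path": r"C:\Projects\my-app",
    "focus_areas": ["authentication", "input validation", "secrets management"],
}

resp = requests.post("http://localhost:8000/api/analyze-codebase", json=payload)
resp.raise_for_status()
print(resp.json())  # final directive plus per-specialist analyses (exact shape may vary)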
# Production (frontend + backend)
docker compose up -d
# Development with hot-reload
docker compose --profile dev up
# With bundled Ollama (local LLM)
docker compose --profile ollama up -d
# All services including Redis cache
docker compose --profile cache up -d
# Stop all services
docker compose down

Your organization includes 20 specialized roles across 5 units (see roles.md):
| Unit | Roles |
|---|---|
| Strategy & Vision | Business Analyst, Data Scientist, Legal Counsel |
| Technical Powerhouse | System Architect, Tech Lead, Backend/Frontend Specialist, Mobile Developer |
| Design & Experience | UI/UX Designer, User Researcher, Copywriter |
| Quality & Reliability | QA Lead, Security Engineer, DevOps Engineer |
| Growth & Revenue | CMO, Growth Hacker, SEO Specialist, Content Strategist |
Smart Team Selection: The system assigns 4-8 relevant specialists per analysis based on the query type (see the sketch below).
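How that selection works is internal to the orchestration engine; the following is only a rough illustration of the idea. Role names are taken from the table above, but the keyword mapping, function name, and "always-on core" are hypothetical.

# Hypothetical illustration of query-based team selection.
# The real logic lives in the backend orchestration engine.
KEYWORD_ROLES = {
    "security": ["Security Engineer", "DevOps Engineer"],
    "performance": ["Backend Specialist", "System Architect"],
    "ux": ["UI/UX Designer", "User Researcher"],
    "seo": ["SEO Specialist", "Content Strategist"],
}

def select_team(query: str, min_size: int = 4, max_size: int = 8) -> list[str]:
    team = ["System Architect", "Tech Lead"]  # always-on core, for illustration
    for keyword, roles in KEYWORD_ROLES.items():
        if keyword in query.lower():
            team.extend(roles)
    deduped = list(dict.fromkeys(team))  # preserve order, drop duplicates
    return deduped[:max_size] if len(deduped) >= min_size else deduped + ["QA Lead"]

print(select_team("Perform a security audit of this codebase"))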
[React Frontend] <--(HTTP/JSON)--> [FastAPI Backend] <--(API)--> [LLM Providers]
     :3000                              :8000              OpenAI/Gemini/Ollama
                                          |
                                     [SQLite DB]
- Analysis: Assemble optimal team based on query type
- Generation: Parallel LLM calls with specialized roles
- Anonymization: Strip identifiers for unbiased review
- Peer Review: Cross-examination of proposals
- Fact-Check (Optional): Verify claims with search or logic checks
- Judgment: CPO synthesizes final directive
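A minimal sketch of how those phases chain together is shown below. The function signature and parameter names are hypothetical; the actual orchestration engine lives under backend/services/ and runs the generation step in parallel.

# Hypothetical outline of the workflow phases listed above.
from typing import Callable, Optional

def run_workflow(
    query: str,
    assemble_team: Callable[[str], list[str]],
    generate: Callable[[str, str], str],
    anonymize: Callable[[list[str]], list[str]],
    peer_review: Callable[[list[str]], list[str]],
    judge: Callable[[str, list[str], list[str]], str],
    verify_claims: Optional[Callable[[list[str]], list[str]]] = None,
) -> str:
    team = assemble_team(query)                            # 1. Analysis
    proposals = [generate(role, query) for role in team]   # 2. Generation (parallel in practice)
    anonymous = anonymize(proposals)                       # 3. Anonymization
    reviews = peer_review(anonymous)                       # 4. Peer Review
    if verify_claims is not None:
        reviews = verify_claims(reviews)                   # 5. Fact-Check (optional)
    return judge(query, anonymous, reviews)                # 6. Judgment -> final directive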
Council/
├── backend/ # Python FastAPI backend
│ ├── services/ # LLM providers, orchestration, fact-checker
│ ├── prompts/ # System prompts (YAML)
│ ├── models/ # Database models
│ ├── tests/ # Test suite (190+ tests)
│ └── main.py # FastAPI application
├── frontend/ # React + TypeScript + Vite
├── data/ # SQLite database storage
├── docs/ # Documentation
│ ├── _rules/ # Engineering standards
│ └── prompt_specs.md # Prompt engineering guide
├── docker-compose.yml # Container orchestration
└── .env.example # Environment template
# Run all tests
pytest
# Run with coverage report
pytest --cov
# Run specific test file
pytest tests/test_main.py

- Prompt Specifications - LLM prompt engineering
- Core Rules - Engineering standards
- Project Rules - Project-specific constraints
- API keys stored in .env only (never committed)
- File upload validation (10MB max, allowed extensions)
- Filename sanitization prevents path traversal
- Local-only deployment (no external exposure required)
- Secrets never logged or exposed to frontend
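As a sketch of the kind of check involved in the upload rules above (the constants and function name are illustrative, not the project's actual code; the size limit and extension list come from this README):

# Illustrative upload validation: size limit, extension allow-list,
# and filename sanitization to block path traversal.
from pathlib import Path

MAX_UPLOAD_BYTES = 10 * 1024 * 1024  # 10MB max
ALLOWED_EXTENSIONS = {".py", ".js", ".ts", ".jsx", ".tsx", ".java", ".cs",
                      ".cpp", ".go", ".rb", ".php", ".md", ".txt"}

def validate_upload(filename: str, size: int) -> str:
    safe_name = Path(filename).name  # drop directory components such as "../"
    if size > MAX_UPLOAD_BYTES:
        raise ValueError("File exceeds 10MB limit")
    if Path(safe_name).suffix.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"Extension not allowed: {safe_name}")
    return safe_name

print(validate_upload("../../secrets.py", 1024))  # -> "secrets.py"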
# Rebuild containers
docker compose down
docker compose build --no-cache
docker compose up -d
# View logs
docker compose logs -f backend

# Check if Ollama is running
curl http://localhost:11434/api/tags
# If using Docker, ensure correct URL in .env
# For Docker Desktop on Windows/Mac:
OLLAMA_BASE_URL=http://host.docker.internal:11434

# Recreate virtual environment
rm -rf .venv
python -m venv .venv
source .venv/bin/activate # or .venv\Scripts\Activate.ps1 on Windows
pip install -r requirements.txt

PolyForm Noncommercial License 1.0.0 (see LICENSE and the licensing section above).
Built with the Council AI team.


