# meetwise-cli

AI-powered meeting intelligence extractor using Whisper + Llama.

Transform meeting recordings into actionable insights: decisions, action items, blockers, and open questions, each with a confidence score.
## Features

- 🎤 Transcription via Whisper (local, GPU-accelerated)
- 🧠 Extraction via Llama/Gemma (local, private)
- 📊 Confidence scoring for each extracted item
- 🎯 Multiple output formats: JSON, Markdown, pretty tables
- 🚀 Fast: an RTX 4070 Ti processes a 1-hour meeting in ~10 minutes
## Installation

```bash
# Clone the repo
git clone https://github.com/aacodex401/meetwise-cli.git
cd meetwise-cli

# Install with uv (recommended)
uv pip install -e .

# Or with pip
pip install -e .
```

### Prerequisites

- Whisper.cpp installed at `C:\tools\whisper.cpp` with models
- Ollama running with the `gemma3:12b` model
- Python 3.10+
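To sanity-check these prerequisites before a first run, a short script like the one below can help. This is a minimal sketch, not part of the CLI: the paths and model name match the defaults listed under Configuration, and it assumes Ollama's standard `/api/tags` endpoint for listing local models.

```python
"""Preflight check for meetwise prerequisites (illustrative, not part of the CLI)."""
import json
import sys
import urllib.request
from pathlib import Path

WHISPER_CLI = Path(r"C:\tools\whisper.cpp\Release\whisper-cli.exe")
OLLAMA_URL = "http://127.0.0.1:11434"
MODEL = "gemma3:12b"

# 1. Is the Whisper binary on disk?
if not WHISPER_CLI.exists():
    sys.exit(f"whisper-cli not found at {WHISPER_CLI}")

# 2. Is Ollama reachable, and is the model pulled?
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    models = {m["name"] for m in json.load(resp)["models"]}
if MODEL not in models:
    sys.exit(f"{MODEL} not available - run: ollama pull {MODEL}")

print("All prerequisites look good.")
```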
## Usage

Process a recording end to end:

```bash
# Process audio and extract insights
meetwise process meeting.mp3 -o insights.json

# With options
meetwise process meeting.mp3 -m medium -f markdown
```

Or run the two steps separately:

```bash
# 1. Transcribe audio
meetwise transcribe meeting.mp3 -o transcript.txt

# 2. Extract insights from transcript
meetwise analyze transcript.txt -o insights.json
```

## Output Formats

```bash
# JSON (default) - full structured output
meetwise process meeting.mp3 -f json
# Markdown - human-readable report
meetwise process meeting.mp3 -f markdown
# Pretty tables - colorful terminal output
meetwise process meeting.mp3 -f pretty
```

## Example Output

```json
{
  "decisions": [
    {
      "id": "dec_001",
      "decision": "Adopt microservices architecture",
      "mentioned_by": "Sarah",
      "confidence": 0.95
    }
  ],
  "action_items": [
    {
      "id": "act_001",
      "action": "Create RFC for database migration",
      "owner": "Mike",
      "due_date": "2026-02-21",
      "priority": "high",
      "confidence": 0.88
    }
  ],
  "blockers": [
    {
      "id": "blk_001",
      "blocker": "Waiting on legal approval",
      "severity": "high",
      "confidence": 0.92
    }
  ],
  "open_questions": [
    {
      "id": "qst_001",
      "question": "Should we support legacy API?",
      "directed_to": "Product Team",
      "answered": false,
      "confidence": 0.85
    }
  ]
}
```
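Because the JSON output is plain structured data, it is easy to post-process. Here is a minimal sketch that loads an `insights.json` produced above and prints the action items as a Markdown checklist; the field names follow the schema shown, and nothing in the snippet is part of the meetwise API itself:

```python
"""Turn meetwise JSON output into a Markdown task list (illustrative post-processing)."""
import json
from pathlib import Path

insights = json.loads(Path("insights.json").read_text(encoding="utf-8"))

# Each action item carries an owner, optional due date, priority, and confidence.
for item in insights["action_items"]:
    due = item.get("due_date", "no due date")
    print(f"- [ ] {item['action']} ({item['owner']}, due {due}, "
          f"priority {item['priority']}, conf {item['confidence']:.2f})")
```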
## Confidence Scores

| Score | Meaning | Example |
|---|---|---|
| 0.9+ | Explicit statement | "I'll do X by Friday" |
| 0.7-0.9 | Strong implication | "Let's go with option B" |
| 0.4-0.6 | Weak signal | "Maybe we should consider..." |
| <0.4 | Ambiguous | Unclear context |
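In practice you may want to drop low-confidence items before sharing a report. The sketch below reuses the schema above; the 0.7 cutoff mirrors the "strong implication" band in the table, and both the threshold and the script are illustrative rather than a built-in meetwise feature:

```python
"""Filter extracted items by confidence before reporting (illustrative)."""
import json
from pathlib import Path

THRESHOLD = 0.7  # keep explicit statements and strong implications only

insights = json.loads(Path("insights.json").read_text(encoding="utf-8"))

# Apply the same cutoff to every category: decisions, action_items, blockers, open_questions.
filtered = {
    category: [item for item in items if item["confidence"] >= THRESHOLD]
    for category, items in insights.items()
}

for category, items in filtered.items():
    print(f"{category}: kept {len(items)} item(s) at confidence >= {THRESHOLD}")
```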
## Configuration

Default paths (Windows):

- Whisper CLI: `C:\tools\whisper.cpp\Release\whisper-cli.exe`
- Whisper models: `C:\tools\whisper.cpp\models\`
- Ollama: `http://127.0.0.1:11434`
## Development

```bash
# Install dev dependencies
uv pip install -e ".[dev]"

# Run tests
pytest

# Lint
ruff check .
```

## License

MIT
## Author

Anderson Araújo (@a45xxx)