# 🎙️ MeetWise CLI

AI-powered meeting intelligence extractor using Whisper + Llama.

Transform meeting recordings into actionable insights: decisions, action items, blockers, and open questions — all with confidence scores.

## Features

- 🎤 Transcription via Whisper (local, GPU-accelerated)
- 🧠 Extraction via Llama/Gemma (local, private)
- 📊 Confidence scoring for each extracted item
- 🎯 Multiple outputs: JSON, Markdown, pretty tables
- 🚀 Fast: a 1-hour meeting processes in ~10 minutes on an RTX 4070 Ti

## Installation

```bash
# Clone the repo
git clone https://github.com/aacodex401/meetwise-cli.git
cd meetwise-cli

# Install with uv (recommended)
uv pip install -e .

# Or with pip
pip install -e .
```

## Prerequisites

- whisper.cpp installed at `C:\tools\whisper.cpp`, with models downloaded
- Ollama running, with the `gemma3:12b` model pulled
- Python 3.10+
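Before running the pipeline, you can verify the Ollama prerequisite from Python. A minimal sketch using only the standard library and Ollama's public `/api/tags` endpoint (the helper name and timeout are illustrative, not part of meetwise):

```python
# Check that Ollama is reachable and that a given model has been pulled.
# Queries Ollama's /api/tags endpoint; returns False on any connection error.
import json
import urllib.error
import urllib.request

def ollama_has_model(model: str, base_url: str = "http://127.0.0.1:11434") -> bool:
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            tags = json.load(resp)
    except (urllib.error.URLError, OSError):
        return False  # Ollama not running or unreachable
    return any(m.get("name", "").startswith(model) for m in tags.get("models", []))

if not ollama_has_model("gemma3:12b"):
    print("Ollama is not running or gemma3:12b is missing (try: ollama pull gemma3:12b)")
```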

## Usage

### Full Pipeline (Recommended)

```bash
# Process audio and extract insights
meetwise process meeting.mp3 -o insights.json

# With options
meetwise process meeting.mp3 -m medium -f markdown
```

### Step-by-Step

```bash
# 1. Transcribe audio
meetwise transcribe meeting.mp3 -o transcript.txt

# 2. Extract insights from transcript
meetwise analyze transcript.txt -o insights.json
```

### Output Formats

```bash
# JSON (default) - full structured output
meetwise process meeting.mp3 -f json

# Markdown - human-readable report
meetwise process meeting.mp3 -f markdown

# Pretty tables - colorful terminal output
meetwise process meeting.mp3 -f pretty
```

## Output Example

```json
{
  "decisions": [
    {
      "id": "dec_001",
      "decision": "Adopt microservices architecture",
      "mentioned_by": "Sarah",
      "confidence": 0.95
    }
  ],
  "action_items": [
    {
      "id": "act_001",
      "action": "Create RFC for database migration",
      "owner": "Mike",
      "due_date": "2026-02-21",
      "priority": "high",
      "confidence": 0.88
    }
  ],
  "blockers": [
    {
      "id": "blk_001",
      "blocker": "Waiting on legal approval",
      "severity": "high",
      "confidence": 0.92
    }
  ],
  "open_questions": [
    {
      "id": "qst_001",
      "question": "Should we support legacy API?",
      "directed_to": "Product Team",
      "answered": false,
      "confidence": 0.85
    }
  ]
}
```
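Downstream tooling can consume this JSON directly. A minimal sketch that surfaces high-confidence action items (field names are taken from the example above; the threshold and the second, lower-confidence item are added purely for illustration):

```python
# Filter action items from a meetwise insights file by confidence score.
import json

def high_confidence_actions(insights: dict, threshold: float = 0.8) -> list[dict]:
    """Return action items at or above the confidence threshold, highest first."""
    items = [a for a in insights.get("action_items", []) if a["confidence"] >= threshold]
    return sorted(items, key=lambda a: a["confidence"], reverse=True)

# In practice: insights = json.load(open("insights.json"))
insights = json.loads("""{
  "action_items": [
    {"id": "act_001", "action": "Create RFC for database migration",
     "owner": "Mike", "due_date": "2026-02-21", "priority": "high", "confidence": 0.88},
    {"id": "act_002", "action": "Draft follow-up notes",
     "owner": "Sarah", "priority": "low", "confidence": 0.55}
  ]
}""")

for item in high_confidence_actions(insights):
    print(f'{item["owner"]}: {item["action"]} (confidence {item["confidence"]})')
# → Mike: Create RFC for database migration (confidence 0.88)
```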

## Confidence Scoring

| Score   | Meaning            | Example                       |
|---------|--------------------|-------------------------------|
| 0.9+    | Explicit statement | "I'll do X by Friday"         |
| 0.7-0.9 | Strong implication | "Let's go with option B"      |
| 0.4-0.6 | Weak signal        | "Maybe we should consider..." |
| <0.4    | Ambiguous          | Unclear context               |
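These bands are easy to apply when triaging extracted items. A small helper mirroring the table (the label strings and the exact cutover at the 0.6-0.7 gap are my interpretation, not defined by meetwise):

```python
# Map a confidence score to the band described in the table above.
def confidence_band(score: float) -> str:
    if score >= 0.9:
        return "explicit statement"
    if score >= 0.7:
        return "strong implication"
    if score >= 0.4:
        return "weak signal"
    return "ambiguous"

print(confidence_band(0.95))  # explicit statement
print(confidence_band(0.88))  # strong implication
print(confidence_band(0.55))  # weak signal
```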

## Configuration

Default paths (Windows):

- Whisper CLI: `C:\tools\whisper.cpp\Release\whisper-cli.exe`
- Whisper models: `C:\tools\whisper.cpp\models\`
- Ollama: `http://127.0.0.1:11434`
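A quick sanity check that these defaults are in place before a run. The `ggml-<name>.bin` filename follows whisper.cpp's usual model naming convention; this is an assumption about the layout, so adjust if yours differs:

```python
# Verify the default Windows paths listed above.
from pathlib import Path

WHISPER_CLI = Path(r"C:\tools\whisper.cpp\Release\whisper-cli.exe")
MODEL_DIR = Path(r"C:\tools\whisper.cpp\models")

def model_path(name: str) -> Path:
    """Resolve a model name like 'medium' to its ggml file (assumed convention)."""
    return MODEL_DIR / f"ggml-{name}.bin"

for p in (WHISPER_CLI, model_path("medium")):
    status = "ok" if p.exists() else "missing"
    print(f"{status}: {p}")
```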

## Development

```bash
# Install dev dependencies
uv pip install -e ".[dev]"

# Run tests
pytest

# Lint
ruff check .
```

## License

MIT

## Author

Anderson Araújo (@a45xxx)
