TaskTriage - ethically sourced optimism for your productivity.
You know that feeling when you write a great handwritten to-do list and then... don't know what to do first, or worse, don't actually do any of it? This CLI tool uses Claude AI to analyze your handwritten task notes and reveal what actually got done (and why) based on GTD principles. Think of it as a reality check for your optimistic planning habits.
You might have this feeling too: you write semi-disorganized lists of daily tasks by hand or in some digital format to keep yourself on track every day. For extra safety, maybe those notes get synced to a mounted drive or Google Drive, but that's about where it ends. You end up prioritizing the wrong things and accumulating a shelf of old notebooks full of important information, quietly collecting dust. TaskTriage swoops in, finds your latest scribbles, and uses Claude AI (via LangChain) to do four things:
- Daily Analysis: Analyzes your end-of-day task list to assess what you actually completed, abandoned, or left incomplete. You get insights into execution patterns, priority alignment, energy management, and workload realism. No more wondering why those 47 things didn't get done.
- Weekly Analysis: Looks back at your week's worth of daily analyses to spot patterns, figure out where things went sideways, and generate strategies to fix your planning approach. It's like a retrospective, but with less corporate speak.
- Monthly Analysis: Synthesizes your entire month's worth of weekly analyses to identify long-term patterns, assess strategic accomplishments, and craft high-level guidance for next month's planning and execution strategy.
- Annual Analysis: Analyzes all 12 months of strategic insights to identify year-long accomplishments, skill development, and high-impact opportunities for the year ahead.
- Handles text files (`.txt`), images (`.png`, `.jpg`, `.jpeg`, `.gif`, `.webp`), and PDFs (`.pdf`)
- Extracts text from your handwritten notes using Claude's vision API—yes, even your terrible handwriting (including multi-page PDFs)
- Two-step workflow: Sync first to copy and convert files, then Analyze when you're ready
- Sync operation: Copies raw notes from input directories and converts images/PDFs to editable `.raw_notes.txt` files using Claude's vision API
- Smart re-analysis: Detects when notes files are edited after their initial analysis and automatically includes them for re-analysis, replacing old analyses
- Multi-source reading: Works with any combination of local directories, USB devices, and Google Drive simultaneously
- Tweak Claude's model parameters via a simple YAML file
- GTD-based execution analysis with workload realism checks against healthy limits of 6-7 hours of focused work per day (because burnout is bad, actually)
- Temporal hierarchy: Daily analyses are on-demand; Weekly → Monthly → Annual analyses auto-trigger when conditions are met
- Auto-triggers weekly analyses when you have 5+ weekday analyses or when the work week has passed
- Auto-triggers monthly analyses when you have 4+ weekly analyses or when the calendar month has ended
- Auto-triggers annual analyses when you have 12 monthly analyses or when the calendar year has ended with at least 1 monthly analysis
- Shell alias so you can just type `triage` instead of the full command
- Web Interface: A Streamlit UI for browsing, editing, creating, and triaging your notes visually
Note: Works especially well when paired with a note-taking device (reMarkable, Supernote, etc.). Since it works with images and PDFs, you can take a photo of your handwritten notes, scan documents, or export PDFs from your note-taking app and analyze those!
You'll need:
- Python 3.10 or newer
- uv (recommended) or plain old pip
- Task (optional but makes your life easier)
- An Anthropic API key (this is where Claude lives)
# Full first-time setup (creates venv, installs deps, copies .env template)
task setup
# Edit .env with your API key and notes directory
nano .env # or your preferred editor
# Activate the virtual environment
source .venv/bin/activate
# Add shell alias (optional but highly recommended)
task alias
source ~/.bashrc  # or ~/.zshrc if you're a zsh person

If you really want to do it yourself:
pip install -e .
cp .env.template .env
# Edit .env with your settings

First things first: copy the .env.template file to .env and fill in your details.

cp .env.template .env

TaskTriage can read notes from multiple input sources simultaneously. Configure at least one:
If you're syncing notes to a USB drive or mounted device from your reMarkable or Supernote:
# Path to the mounted note-taking device directory
EXTERNAL_INPUT_DIR=/path/to/your/usb/notes/directory

Add an additional local directory to check for notes files:
# Path to local hard drive notes directory (optional)
LOCAL_INPUT_DIR=/path/to/your/local/notes/directory

If your notes live in Google Drive, check out the Google Drive Setup section below. Fair warning: it's a bit involved.
TaskTriage automatically checks ALL configured input directories when looking for notes files. If you have both EXTERNAL_INPUT_DIR and LOCAL_INPUT_DIR configured, it will:
- Search both directories for unanalyzed notes
- Deduplicate files by timestamp (if the same timestamp appears in multiple locations, only the first one found is processed)
- Collect unique notes from all sources for analysis
This means you can have notes in multiple locations and TaskTriage will find them all.
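As a rough sketch, the timestamp-based deduplication could look like this (illustrative only; `find_unique_notes` is a hypothetical helper, not TaskTriage's actual API):

```python
from pathlib import Path

NOTE_SUFFIXES = {".txt", ".png", ".jpg", ".jpeg", ".gif", ".webp", ".pdf"}

def find_unique_notes(input_dirs: list[Path]) -> list[Path]:
    """Collect notes across all input dirs, keeping only the first file
    found for each timestamp prefix (YYYYMMDD_HHMMSS)."""
    seen: dict[str, Path] = {}
    for directory in input_dirs:
        for path in sorted(directory.glob("*")):
            if path.suffix.lower() not in NOTE_SUFFIXES:
                continue
            timestamp = path.name[:15]  # e.g. "20251225_074353"
            seen.setdefault(timestamp, path)  # first location found wins
    return list(seen.values())
```

Directories earlier in the list take precedence when the same timestamp appears in several places.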
By default, TaskTriage is set to "auto" mode for OUTPUT (creating new files in the UI):
# Options: auto, usb, gdrive
NOTES_SOURCE=auto

In auto mode, new files created in the UI are saved to:
- `EXTERNAL_INPUT_DIR` (if available)
- `LOCAL_INPUT_DIR` (if USB not available)
- Google Drive (if neither local directory is available)
You'll need an API key from Anthropic. Get one at https://console.anthropic.com/ and drop it in:
ANTHROPIC_API_KEY=your-api-key-here

Want to tweak how Claude thinks? Edit config.yaml:
model: claude-haiku-4-5-20241022
temperature: 0.7
max_tokens: 4096
top_p: 1.0

TaskTriage uses OAuth 2.0 to access your Google Drive, giving you full read/write access to your personal Google account without the limitations of service accounts.
Don't skip these or you'll get an OAuth error:
1. Register the redirect URI in Google Cloud Console:
   - Must be `http://localhost:8501` (with port number)
   - Add this in "OAuth client ID" → "Authorized redirect URIs"
2. Add yourself as a test user:
   - In "OAuth consent screen" → "Test users"
   - Add your Google account email
3. Wait for Google to cache the settings:
   - Wait 3-5 minutes after configuring OAuth
   - Then restart Streamlit and try again
- Head to the Google Cloud Console
- Click "Select a project" → "New Project"
- Name it something like "TaskTriage" and click "Create"
- In your project, navigate to "APIs & Services" → "Library"
- Search for "Google Drive API"
- Click on it and click the "Enable" button
- Go to "APIs & Services" → "OAuth consent screen"
- Select "External" user type (unless you have Google Workspace)
- Fill in application details:
- App name: "TaskTriage"
- User support email: your email
- Developer contact: your email
- Click "Save and Continue"
- Add Scopes:
- Click "Add or Remove Scopes"
- Search for "Google Drive API"
- Select: `https://www.googleapis.com/auth/drive`
- Click "Update" then "Save and Continue"
- Add Test Users (for development):
- Add your Google account email as a test user
- Click "Save and Continue"
- Go to "APIs & Services" → "Credentials"
- Click "Create Credentials" → "OAuth client ID"
- Select "Web application"
- Name: "TaskTriage Web Client"
- Under "Authorized redirect URIs":
- Click "Add URI"
- Enter: `http://localhost:8501`
- IMPORTANT: This must match exactly (including the port number)
- Click "Create"
- Copy and save:
  - The Client ID (looks like `xxx.apps.googleusercontent.com`)
  - The Client Secret (looks like `GOCSPX-xxx`)
  - You'll need these in the next step
Add these to your .env file:
# OAuth 2.0 credentials from Google Cloud Console
GOOGLE_OAUTH_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_OAUTH_CLIENT_SECRET=your-client-secret
# Google Drive folder ID
GOOGLE_DRIVE_FOLDER_ID=your-folder-id-here
# Local directory to save analysis output (always required)
LOCAL_OUTPUT_DIR=/path/to/your/output/local/directory

- Create a folder in Google Drive for your notes (e.g., "TaskTriageNotes")
- Inside that folder, create subfolders: `daily`, `weekly`, `monthly`, `annual`
- Get the folder ID from the URL: `https://drive.google.com/drive/folders/FOLDER_ID_HERE`
- Add the folder ID to your `.env` file
- Launch TaskTriage UI: `task ui`
- Open the "Configuration" expander
- Under "Google Drive (OAuth 2.0)", you'll see "Not authenticated"
- Enter your OAuth Client ID and Client Secret
- Click "🔐 Sign in with Google"
- Follow the OAuth flow in your browser
- Grant permissions to TaskTriage
- You'll be redirected back to Streamlit with "Authenticated with Google Drive"
- OAuth tokens are stored encrypted in `~/.tasktriage/oauth_tokens.json`
- Tokens automatically refresh when expired
- You can revoke access anytime at Google Account Settings
- Full read/write access to Google Drive (no storage quota limitations)
- Can sync analysis files directly to Google Drive via the Sync button
Solution: You need to register the redirect URI in Google Cloud Console:
- Go to Google Cloud Console
- Select your TaskTriage project
- Go to "APIs & Services" → "Credentials"
- Find your "TaskTriage Web Client" OAuth credential
- Click on it to edit
- Under "Authorized redirect URIs", verify that `http://localhost:8501` is listed
- If not, click "Add URI" and add: `http://localhost:8501`
- Click "Save"
- Restart your Streamlit app and try signing in again
Note: The redirect URI must include the exact port number (8501). If you're running Streamlit on a different port, update the URI accordingly (e.g., http://localhost:8502).
Solution: The redirect URI in the OAuth configuration doesn't match. Check:
- In Google Cloud Console, confirm the exact redirect URI registered (should be `http://localhost:8501`)
- Verify Streamlit is running on port 8501 (check the URL in your browser)
- If running on a different port, either:
- Change the port in Google Cloud Console to match, OR
- Modify the `redirect_uri` in `streamlit_app.py` (line 703)
This error has been fixed in the current version. If you encounter it:
- Hard refresh your browser (Ctrl+Shift+R or Cmd+Shift+R)
- Clear your browser cookies
- Try signing in again
The OAuth flow should now work reliably on localhost without CSRF state validation issues.
- Ensure you're listed as a test user in the OAuth consent screen
- Wait a few minutes after configuring OAuth (Google caches settings)
- Try in an incognito/private browser window
- Clear your browser cache and cookies
Your Google Drive folder should look like this (notes only, no analysis files):
TaskTriageNotes/ # This folder ID goes in GOOGLE_DRIVE_FOLDER_ID
├── daily/
│ ├── 20251225_074353.txt # Raw daily notes (text)
│ ├── 20251225_074353.png # Raw daily notes (image)
│ └── ...
├── weekly/
│ └── ...
└── monthly/
└── ...
Analysis files get saved locally instead:
LOCAL_OUTPUT_DIR/
├── daily/
│ └── 25_12_2025.triaged.txt # Generated analysis (DD_MM_YYYY format)
├── weekly/
│ └── week1_12_2025.triaged.txt # Generated weekly analysis (weekN_MM_YYYY format)
├── monthly/
│ └── 12_2025.triaged.txt # Generated monthly analysis (MM_YYYY format)
└── annual/
└── 2025.triaged.txt # Generated annual analysis (YYYY format)
Whether you're using External/USB or Google Drive, TaskTriage expects this structure:
notes/
├── 20251225_074353.txt # Raw daily notes (text)
├── 20251226_083000.png # Raw daily notes (image)
├── 20251226_083000.raw_notes.txt # Extracted text from PNG (auto-generated, editable)
├── 20251227_095000.pdf # Raw daily notes (PDF, single or multi-page)
├── 20251227_095000.raw_notes.txt # Extracted text from PDF (auto-generated, editable)
├── daily/
│ ├── 25_12_2025.triaged.txt # Generated analysis (DD_MM_YYYY.triaged.txt)
│ ├── 26_12_2025.triaged.txt # Generated analysis
│ ├── 27_12_2025.triaged.txt # Generated analysis
│ └── ...
├── weekly/
│ ├── week4_12_2025.triaged.txt # Generated weekly analysis (weekN_MM_YYYY.triaged.txt)
│ └── ...
├── monthly/
│ ├── 12_2025.triaged.txt # Generated monthly analysis (MM_YYYY.triaged.txt)
│ └── ...
└── annual/
└── 2025.triaged.txt # Generated annual analysis (YYYY.triaged.txt)
- Text files: `.txt`
- Image files: `.png`, `.jpg`, `.jpeg`, `.gif`, `.webp`
- PDF files: `.pdf` (single or multi-page documents)
- Raw text files: `.raw_notes.txt` (auto-generated from image/PDF analysis, editable and re-analyzable)
Image and PDF files get run through Claude's vision API to extract your handwritten text automatically:
- Image files are processed directly as images
- PDF files are converted to images page-by-page, each page is processed with the vision API, then all extracted text is concatenated with page separators
The extracted text is saved as a .raw_notes.txt file, making it easy to edit the text directly in the UI if needed. If you edit a .raw_notes.txt file after its initial analysis, TaskTriage will detect the change and automatically re-analyze it on the next run.
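The page-joining step can be sketched as a small pure function (illustrative; the separator format and the `combine_pdf_pages` name are assumptions, not the tool's actual output format):

```python
def combine_pdf_pages(page_texts: list[str]) -> str:
    """Join per-page extracted text with page separators, mirroring how a
    multi-page PDF becomes a single .raw_notes.txt file."""
    parts = []
    for page_num, text in enumerate(page_texts, start=1):
        parts.append(f"--- Page {page_num} ---\n{text.strip()}")
    return "\n\n".join(parts)
```

Each `page_texts` entry stands in for the text the vision API returned for one rendered page.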
Name your files with a timestamp prefix: `YYYYMMDD_HHMMSS.ext`
This lets TaskTriage figure out which file is most recent and which ones have already been analyzed.
How to mark up your handwritten notes:
- Completed tasks: Add a checkmark (✓)
- Removed/abandoned tasks: Mark with an (✗)
- Urgent tasks: Add an asterisk (*)
- Task ordering: List tasks in any order—TaskTriage will automatically analyze and group them by theme (Communication, Planning, Implementation, Administrative, etc.) during analysis
Not sure what your task notes should look like? Check out the example files in the tests/examples/ directory:
- `20251225_074353.txt`: Example text file showing proper task formatting with categories (agents team, Admin, Home), task items, and completion markers
- `20251225_074353_Page_1.png`: Example PNG image of handwritten notes demonstrating how TaskTriage processes scanned/photographed task lists
These files demonstrate:
- Correct filename format with timestamp prefix (`YYYYMMDD_HHMMSS`)
- Task organization with category headers
- Multi-page support (using `_Page_N` suffix for image files)
- How TaskTriage handles both text and image inputs
You can use these as templates when creating your own task note files. The example files are also used in the test suite to ensure TaskTriage correctly processes real-world note formats.
To run TaskTriage:
# Using the full command
tasktriage
# or using the alias (if you set it up)
triage
# Specify file type preference (defaults to png)
tasktriage --files txt
tasktriage --files png

TaskTriage includes a web interface built with Streamlit. Launch it with:
# Using Task
task ui
# Or directly with uv
uv run streamlit run streamlit_app.py

The UI opens in your browser at http://localhost:8501 and provides:
Left Panel (Controls)
- Sync Button - The first step in the workflow:
- Copies raw notes (images, PDFs, text files) from input directories to output directory
  - Converts images/PDFs to editable `.raw_notes.txt` files using Claude's vision API
  - Syncs all files (analyses, raw notes) back to input directories and Google Drive
- Provides real-time progress updates and comprehensive error reporting
- Analyze Button - Run the analysis pipeline on synced/converted files
  - Only processes files that have been synced (image/PDF files need their `.raw_notes.txt` created first)
  - Automatically triggers weekly/monthly/annual analyses when their conditions are met
- Configuration - Edit `.env` and `config.yaml` settings directly in the browser (API keys, notes source, model parameters)
- Raw Notes List - Browse `.txt` and image files from your root notes directory, sorted by date
  - Open - Load a selected note file for editing
  - New - Create a new empty `.txt` notes file with timestamp-based naming
- Analysis Files List - Browse all generated analysis files across daily/weekly/monthly/annual
Right Panel (Editor)
- Full-height text editor for viewing and editing selected files
- Image preview for handwritten note images
- Save/Revert buttons with unsaved changes indicator
- Notes source status display
- Quick Markup Tools - Easily add task markers (✓ completed, ✗ removed, * urgent, ↳ subtask), which are inserted at the right side of each line and interpreted automatically during analysis
Recommended Workflow:
- Sync - Import new files and convert images/PDFs to text
- Review/Edit - Check the extracted `.raw_notes.txt` files and fix any OCR errors
- Analyze - Generate daily analyses; weekly/monthly/annual analyses trigger automatically when conditions are met
TaskTriage uses a two-stage workflow for managing analysis files:
Stage 1: Generation (Primary Output)
All new analysis files and extracted raw notes are initially saved to LOCAL_OUTPUT_DIR. This is the "source of truth" for all generated files. This approach provides:
- A centralized location for all generated analyses
- A backup location for your analysis history
- Support for OAuth 2.0 authentication (full read/write access without service account limitations)
Stage 2: Bidirectional Sync Once analyses are generated, you can use the Sync button in the web UI to perform true bidirectional synchronization between your output directory and all configured input directories:
Outbound Sync (Output → Input directories):
- To External/Local Directories: Analysis files and raw notes are copied via standard file operations
- To Google Drive: Files are uploaded to your configured Google Drive folder
- Real-time Progress: The UI shows live progress updates and reports any errors
Inbound Sync (Input directories → Output):
- Consolidation: Any new files found in your input directories are copied to the output directory
- Deduplication: Files that already exist in the output directory are skipped
- Multi-source support: If the same file exists in multiple input directories, it's only copied once
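Sketched in code, the inbound consolidation might look roughly like this (illustrative; `consolidate_inbound` is a hypothetical helper and handles flat files only):

```python
import shutil
from pathlib import Path

def consolidate_inbound(input_dirs: list[Path], output_dir: Path) -> list[Path]:
    """Copy files from input directories into output_dir, skipping any
    filename that already exists there (deduplication)."""
    copied = []
    for directory in input_dirs:
        for src in sorted(p for p in directory.iterdir() if p.is_file()):
            dest = output_dir / src.name
            if dest.exists():
                continue  # already consolidated (or copied from an earlier source)
            shutil.copy2(src, dest)
            copied.append(dest)
    return copied
```

Because existing destinations are skipped, a file present in several input directories is only copied once.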
This bidirectional workflow ensures that:
- Your analyses are always backed up in `LOCAL_OUTPUT_DIR`
- Your note-taking device (USB/Supernote/reMarkable) stays synchronized with the latest analyses
- Google Drive users have full read/write access to upload and manage analyses on-demand
- New files added to any input location are automatically consolidated into your central output directory
- You have a true sync experience rather than one-directional file distribution
When to Use Sync:
- Before analyzing new image/PDF files - Sync converts them to editable `.raw_notes.txt` files
- After running Analyze - Distributes results to your devices and input directories
- Periodically to ensure all your locations (USB, local, Google Drive) stay in sync
- To consolidate notes from multiple input sources into your central output directory
TaskTriage uses a two-step workflow with automatic cascading for higher-level analyses:
STEP 1: Sync (run first)
The Sync operation prepares your files for analysis:
- Copies raw notes (images, PDFs, text files) from all input directories to the output directory
- Converts images and PDFs to `.raw_notes.txt` files using Claude's vision API
  - PDF Processing: Multi-page PDFs are converted to images page-by-page, each page is processed, then all text is concatenated with page separators
- Syncs all files back to input directories and Google Drive
STEP 2: Analyze (when you're ready)
Daily analyses only run when you explicitly press the Analyze button:
- TaskTriage finds unanalyzed `.txt` files and `.raw_notes.txt` files (converted from images/PDFs)
  - Image/PDF files without a corresponding `.raw_notes.txt` are skipped (run Sync first!)
  - Smart re-analysis: Includes files that were edited after their last analysis
- Processes them in parallel (up to 5 concurrent API calls)
- Each file gets analyzed and saved as `daily/{date}.triaged.txt`
  - If re-analyzing an edited file, the new analysis replaces the old one (no duplicates)
- Shows progress in real-time with success/failure indicators
- Prints: `Daily Summary: X successful, Y failed`
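The parallel fan-out described above can be sketched with a thread pool (illustrative; `analyze_in_parallel` and `analyze_fn` are hypothetical stand-ins for the real pipeline):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def analyze_in_parallel(files, analyze_fn, max_workers=5):
    """Run analyze_fn over files with up to max_workers concurrent calls,
    returning (successes, failures) counts."""
    successes, failures = 0, 0
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(analyze_fn, f): f for f in files}
        for future in as_completed(futures):
            try:
                future.result()  # re-raises any exception from the worker
                successes += 1
            except Exception:
                failures += 1
    print(f"Daily Summary: {successes} successful, {failures} failed")
    return successes, failures
```

A thread pool fits here because the work is I/O-bound (waiting on API responses), so threads overlap nicely despite the GIL.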
AUTOMATIC CASCADE: Weekly/Monthly/Annual Analyses
After daily analyses complete, TaskTriage automatically checks for and triggers higher-level analyses:
LEVEL 2: Weekly Analysis (auto-triggers when conditions are met)
After all daily analyses complete, TaskTriage checks if any weeks need analysis. A weekly analysis is triggered automatically when:
- 5+ weekday analyses exist for a work week (Monday-Friday), OR
- The work week has passed and at least 1 daily analysis exists for that week
When triggered:
- Collects all daily analysis files from Monday-Friday of the qualifying week
- Combines them with date labels
- Generates a comprehensive weekly analysis looking at patterns and problems
- Saves to `weekly/weekN_MM_YYYY.triaged.txt` (e.g., `week4_12_2025.triaged.txt`)
- Prints: `Weekly Summary: X successful, Y failed`
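The weekly trigger conditions can be expressed as a small predicate (a sketch of the rules above; `should_trigger_weekly` is hypothetical and assumes all dates fall in the same work week as the first entry):

```python
from datetime import date, timedelta

def should_trigger_weekly(daily_dates: list[date], today: date) -> bool:
    """Trigger when 5+ weekday analyses exist for the work week, or the
    work week has passed with at least one daily analysis."""
    if not daily_dates:
        return False
    # Anchor the work week (Mon-Fri) on the first analysis date
    monday = daily_dates[0] - timedelta(days=daily_dates[0].weekday())
    friday = monday + timedelta(days=4)
    weekdays = {d for d in daily_dates if monday <= d <= friday}
    return len(weekdays) >= 5 or (today > friday and len(weekdays) >= 1)
```

The monthly and annual checks follow the same shape with their own thresholds (4+ weeklies or month ended; 12 monthlies or year ended).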
LEVEL 3: Monthly Analysis (auto-triggers when conditions are met)
After all weekly analyses complete, TaskTriage checks if any months need analysis. A monthly analysis is triggered automatically when:
- 4+ weekly analyses exist for a calendar month, OR
- The calendar month has ended and at least 1 weekly analysis exists for that month
When triggered:
- Collects all weekly analysis files from the qualifying month
- Combines them with week-range labels
- Generates a comprehensive monthly analysis synthesizing strategic patterns across the entire month
- Saves to `monthly/MM_YYYY.triaged.txt` (e.g., `12_2025.triaged.txt`)
- Prints: `Monthly Summary: X successful, Y failed`
LEVEL 4: Annual Analysis (auto-triggers when conditions are met)
After all monthly analyses complete, TaskTriage checks if any years need analysis. An annual analysis is triggered automatically when:
- 12 monthly analyses exist for a calendar year, OR
- The calendar year has ended and at least 1 monthly analysis exists for that year
When triggered:
- Collects all monthly analysis files from the qualifying year
- Combines them with month labels
- Generates a comprehensive annual analysis synthesizing year-long accomplishments, learning, and strategic opportunities
- Saves to `annual/YYYY.triaged.txt` (e.g., `2025.triaged.txt`)
- Prints: `Annual Summary: X successful, Y failed`
Summary: Daily analyses require explicit triggering (Sync → Analyze), but weekly/monthly/annual analyses cascade automatically once their conditions are met!
TaskTriage recognizes several special markers in your task lists that help identify task status and relationships. Use these notations to enhance your daily task lists:
- `✓` (checkmark) - Task completed during the day
- `✗` (or X) - Task removed or abandoned during the day
- No marker - Standard task that was planned but not completed
- `↳` (rightwards arrow) - Indicates a subtask directly related to the task above it. Subtasks are typically indented with spaces and represent work that supports or elaborates on the parent task.
Example:
✓ Jason 1:1
↳ CEO simulator
✗ meet w/ Matt C
↳ discuss Q1 plans
In this example:
- "Jason 1:1" was completed, and "CEO simulator" is the related subtask (also completed)
- "meet w/ Matt C" was abandoned, and "discuss Q1 plans" is the related subtask (also abandoned)
Subtasks are analyzed independently but with full context of their parent task relationship, allowing TaskTriage to understand the work structure and how parent-subtask pairs correlate with completion success.
- `*` (asterisk) - Marks urgent/high-priority tasks
Example:
✓ finish ECN bot fixes *
✗ ↳ meet w/ Matt C
This marks "finish ECN bot fixes" as critical/urgent.
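A minimal marker parser, just to illustrate how these notations combine (hypothetical code, not TaskTriage's actual parser; real interpretation happens inside the Claude prompt):

```python
def parse_task_line(line: str) -> dict:
    """Interpret TaskTriage markers on one task line: ✓ completed,
    ✗ removed, trailing * urgent, ↳ subtask (leading markers may
    appear in any order)."""
    text = line.strip()
    status, subtask = "planned", False
    while text and text[0] in "✓✗↳":
        marker, text = text[0], text[1:].lstrip()
        if marker == "✓":
            status = "completed"
        elif marker == "✗":
            status = "removed"
        else:  # "↳"
            subtask = True
    urgent = text.endswith("*")
    if urgent:
        text = text[:-1].rstrip()
    return {"text": text, "status": status, "urgent": urgent, "subtask": subtask}
```

For example, `parse_task_line("✗ ↳ meet w/ Matt C")` reports a removed subtask, matching the example above.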
The daily analysis gives you:
- Completion Summary: Clear breakdown of what was completed (✓), abandoned (✗), and left incomplete, with analysis of why each outcome occurred
- Execution Patterns: 3-5 concrete observations about which types of tasks succeed vs. fail, when your energy is highest, and what gets deferred
- Task Categorization by Trend: Automatic grouping of your tasks into thematic categories (Communication, Planning, Implementation, Administrative, Research/Learning, Meetings/Collaboration, Health/Wellness, Personal Projects, etc.) with completion rates and energy patterns per theme—reveals which categories of work consistently succeed vs. struggle
- Priority Alignment Assessment: Honest evaluation of whether urgent tasks were truly urgent, theme-based prioritization analysis, and what your completion patterns reveal about actual priorities vs. stated priorities
- Workload Realism Evaluation: Assessment of whether your planned workload was achievable, how accurate your time estimates were, and whether you stayed within healthy limits (6-7 hours focused work)
- Task Design Quality: Analysis of how task clarity, scope, and actionability influenced execution—identifying which tasks were well-designed vs. poorly-designed
- Tomorrow's Priority Queue: Ranked list of incomplete tasks for the next day, organized by priority tier (High/Medium/Lower) with rationales for each task's placement—informed by today's execution patterns
- Key Takeaways: 3-5 specific, actionable recommendations for improving future planning based on today's execution patterns, including theme-specific focus areas
The weekly analysis shows you:
- Key Behavioral Findings: Thematic success patterns (e.g., "Communication tasks completed 90%, Implementation tasks completed 40%") and how well daily priority queues predicted actual execution
- Completion & follow-through analysis: Where do you keep deferring stuff, organized by task theme?
- Mis-prioritization detection: What you said was important vs. what you actually did; theme-based priority failures (which themes were marked urgent but failed?)
- Scope & estimation accuracy: How wrong were your time estimates per theme? (It's okay, we're all bad at this)
- Energy alignment analysis: Are you scheduling high-energy tasks by theme when you're exhausted?
- Corrected priority model based on your actual behavior, organized by thematic task categories
- Next-week planning strategy with realistic capacity assumptions, theme-specific guidance, and recommended daily allocation across themes (e.g., "40% Communication, 30% Implementation, 20% Planning, 10% Administrative")
The monthly analysis synthesizes your entire month to show you:
- Monthly achievements summary: Major accomplishments organized by category (Work, Personal, System)
- Strategic patterns and trends: 3-5 month-level patterns including thematic completion trends (which task themes consistently succeeded vs. struggled?), execution rhythms, capacity trends, and priority accuracy
- System evolution assessment: Which weekly recommendations actually got implemented? Which ones worked? Did theme-specific improvements stick?
- Persistent challenges: Problems that survived multiple weekly corrections—organized by theme to reveal systemic issues in particular categories of work
- Monthly performance metrics: Completion rates (overall and per-theme), workload balance, priority alignment, energy management, planning quality, and theme-specific improvements
- Strategic guidance for next month: Month-level priorities, theme-based capacity allocation, theme-specific focus areas, and recommended daily distribution across themes
- Long-term system refinements: 3-6 fundamental changes to try in your planning system, informed by theme-specific insights
Monthly analyses are strategic level, not tactical. They reveal patterns invisible at the weekly level and help you understand which categories of work consistently succeed or struggle, plus your actual productivity rhythms over time.
The annual analysis synthesizes your entire year to show you:
- Year in accomplishments: Your major wins and achievements across the full calendar year, organized by category and impact
- Learning & skill development: Areas where you've grown professionally and personally, with task execution mastery tracked by theme (e.g., "Communication efficiency improved 40% from Q1 to Q4")
- Highest-impact opportunities: 2-4 specific improvements ranked by ROI that would generate the most leverage in the year ahead, informed by theme-specific performance data and persistent challenges
- Year-ahead strategic direction: Recommendations for next year's focus areas, theme-based capacity allocation, seasonal patterns, and systemic changes based on your year's thematic performance
Annual analyses are strategic and retrospective. They help you see the big picture—what you actually accomplished beyond the day-to-day grind, which categories of work have the strongest ROI, and what's worth focusing on next year. This is where you look back at the full story of your year, track improvements in specific task themes, and plan next year's resource allocation.
TaskTriage organizes raw notes at the top level and analyses in subdirectories:
Notes/
├── 20251225_074353.txt # Your daily task notes (text)
├── 20251225_074353.raw_notes.txt # Extracted text (auto-generated, editable)
├── 20251226_094500.png # Your daily task notes (image)
├── 20251226_094500.raw_notes.txt # Extracted text from PNG (auto-generated)
├── 20251227_120000.pdf # Your daily task notes (PDF)
├── 20251227_120000.raw_notes.txt # Extracted text from PDF (auto-generated)
├── daily/
│ ├── 25_12_2025.triaged.txt # Analysis output (DD_MM_YYYY.triaged.txt)
│ ├── 26_12_2025.triaged.txt # Analysis output
│ └── 27_12_2025.triaged.txt # Analysis output
├── weekly/
│ ├── week4_12_2025.triaged.txt # Week 4 of Dec 2025 (weekN_MM_YYYY.triaged.txt)
│ └── week1_01_2026.triaged.txt # Week 1 of Jan 2026
├── monthly/
│ ├── 12_2025.triaged.txt # December 2025 synthesis (MM_YYYY.triaged.txt)
│ └── 11_2025.triaged.txt # November 2025 synthesis
└── annual/
└── 2025.triaged.txt # Full year 2025 synthesis (YYYY.triaged.txt)
Filename formats:
- Daily notes: `YYYYMMDD_HHMMSS.{txt|png|jpg|pdf|...}` (e.g., `20251225_074353.txt` or `20251225_074353.pdf`)
- Raw text from images/PDFs: `YYYYMMDD_HHMMSS.raw_notes.txt` (auto-generated when analyzing image or PDF files)
- Daily analyses: `DD_MM_YYYY.triaged.txt` (e.g., `25_12_2025.triaged.txt`)
- Weekly analyses: `weekN_MM_YYYY.triaged.txt` (e.g., `week4_12_2025.triaged.txt` for week 4 of December 2025)
- Monthly analyses: `MM_YYYY.triaged.txt` (e.g., `12_2025.triaged.txt` for December 2025)
- Annual analyses: `YYYY.triaged.txt` (e.g., `2025.triaged.txt` for full year 2025)
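For example, mapping a raw notes filename to its daily analysis filename (a sketch; `daily_analysis_name` is a hypothetical helper showing the date-format conversion):

```python
from datetime import datetime

def daily_analysis_name(notes_filename: str) -> str:
    """Map a raw notes filename (YYYYMMDD_HHMMSS.ext) to its daily
    analysis filename (DD_MM_YYYY.triaged.txt)."""
    stamp = datetime.strptime(notes_filename[:15], "%Y%m%d_%H%M%S")
    return stamp.strftime("%d_%m_%Y") + ".triaged.txt"
```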
If you're using Task for automation, here are the available commands:
task setup # Full first-time setup (venv + install + env)
task setup:env # Create .env file from template
task setup:output-dir # Create analysis output directory (Google Drive users need this)
task install # Install dependencies with uv
task venv # Create virtual environment
task sync # Sync dependencies from lock file
task lock # Update the lock file
task test # Run tests
task ui # Launch the Streamlit web interface
task alias # Add triage shell alias
task alias:remove # Remove shell alias
task clean # Remove build artifacts
task clean:all # Nuclear option: remove everything including venv
task bump # Show version bump options
task bump:patch # Bump patch version (e.g. 0.1.1 → 0.1.2)
task bump:minor # Bump minor version (e.g. 0.1.1 → 0.2.0)
task bump:major       # Bump major version (e.g. 0.1.1 → 1.0.0)

The project has a test suite using pytest. Tests are split up by module so you can find what you're looking for.
# Run all tests
pytest
# Run with verbose output to see what's actually happening
pytest -v
# Run tests for a specific module
pytest tests/test_config.py
pytest tests/test_files.py
pytest tests/test_gdrive.py
# Get a coverage report to see what you missed
pytest --cov=tasktriage --cov-report=term-missing
# Skip the slow integration tests
pytest -m "not slow"

tests/
├── conftest.py # Shared fixtures (temp directories, mock data)
├── test_config.py # Configuration and environment tests
├── test_prompts.py # Prompt template tests
├── test_image.py # Image extraction tests
├── test_gdrive.py # Google Drive integration tests
├── test_files.py # File I/O operation tests
├── test_analysis.py # Core analysis function tests
└── test_cli.py # CLI entry point tests
Tests use `unittest.mock` (`Mock`, `MagicMock`, `patch`) to avoid:
- Actually calling Claude or Google Drive APIs (and burning through your API credits)
- Messing with the file system
- Network dependencies that make tests flaky
Example of mocking the Claude API:

```python
from unittest.mock import patch, MagicMock

def test_analyze_tasks():
    with patch("tasktriage.analysis.ChatAnthropic") as mock_llm:
        mock_instance = MagicMock()
        mock_response = MagicMock()
        mock_response.content = "Analysis result"
        mock_instance.invoke.return_value = mock_response
        mock_llm.return_value = mock_instance
        # Your test code here
```

Here's how the code is organized:
```
tasktriage/
├── .env.template        # Environment variables template
├── .bumpversion.toml    # Version bump configuration
├── config.yaml          # Claude model configuration
├── pyproject.toml       # Project dependencies and metadata
├── Taskfile.yml         # Task runner configuration
├── streamlit_app.py     # Web interface (Streamlit UI)
├── README.md
├── tasktriage/          # Python package
│   ├── __init__.py      # Package exports
│   ├── config.py        # Configuration and environment handling
│   ├── prompts.py       # LangChain prompt templates
│   ├── image.py         # Image text extraction
│   ├── files.py         # File I/O operations (External + Google Drive)
│   ├── gdrive.py        # Google Drive API integration
│   ├── analysis.py      # Core analysis functionality
│   └── cli.py           # Command-line interface
└── tests/               # Test suite
    ├── conftest.py      # Shared pytest fixtures
    ├── test_config.py
    ├── test_prompts.py
    ├── test_image.py
    ├── test_gdrive.py
    ├── test_files.py
    ├── test_analysis.py
    └── test_cli.py
```
You can also use TaskTriage as a library in your own Python code:
```python
from tasktriage import (
    analyze_tasks,
    get_daily_prompt,
    get_weekly_prompt,
    get_monthly_prompt,
    get_annual_prompt,
    load_all_unanalyzed_task_notes,
    collect_weekly_analyses_for_week,
    collect_monthly_analyses_for_month,
    collect_annual_analyses_for_year,
    extract_text_from_image,
    extract_text_from_pdf,
    GoogleDriveClient,
    get_active_source,
)

# Check which source is being used for output
print(f"Using: {get_active_source()}")  # "usb", "local", or "gdrive"

# Get prompt templates with dynamic variables
daily_prompt = get_daily_prompt()
print(daily_prompt.input_variables)  # ['current_date', 'task_notes']

weekly_prompt = get_weekly_prompt()
print(weekly_prompt.input_variables)  # ['week_start', 'week_end', 'task_notes']

monthly_prompt = get_monthly_prompt()
print(monthly_prompt.input_variables)  # ['month_start', 'month_end', 'task_notes']

annual_prompt = get_annual_prompt()
print(annual_prompt.input_variables)  # ['year', 'task_notes']

# Load all unanalyzed daily notes
unanalyzed = load_all_unanalyzed_task_notes("daily", "png")
for content, path, date in unanalyzed:
    print(f"Found: {path.name}")

# Collect analyses for a specific period
from datetime import datetime

month_start = datetime(2025, 12, 1)
month_end = datetime(2025, 12, 31)
monthly_content, output_path, ms, me = collect_monthly_analyses_for_month(month_start, month_end)

# Collect annual analyses for a specific year
annual_content, output_path, year = collect_annual_analyses_for_year(2025)

# Use the Google Drive client directly with OAuth credentials
from tasktriage import get_oauth_credentials

credentials = get_oauth_credentials()  # Returns stored OAuth 2.0 credentials
client = GoogleDriveClient(credentials=credentials)
files = client.list_notes_files("daily")
```

**"OAuth credentials required"**
- Make sure `GOOGLE_OAUTH_CLIENT_ID` and `GOOGLE_OAUTH_CLIENT_SECRET` are set in your `.env` file
- Authenticate via the web UI by clicking "Sign in with Google" in the Configuration section
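For reference, the relevant `.env` entries might look like this (the values shown are placeholders, not real credentials):

```
GOOGLE_OAUTH_CLIENT_ID=1234567890-abc123.apps.googleusercontent.com
GOOGLE_OAUTH_CLIENT_SECRET=your-client-secret
```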
**"Subfolder 'daily' not found in Google Drive folder"**
- You need to create `daily` and `weekly` subfolders in your Google Drive notes folder
- Make sure `GOOGLE_DRIVE_FOLDER_ID` points to the correct folder
**"Permission denied" errors**
- Make sure you've authenticated with Google Drive via the web UI
- Try revoking access at Google Account Settings and re-authenticating
**"No unanalyzed notes files found"**
- Your notes files need to follow the naming format: `YYYYMMDD_HHMMSS.txt` or `.png`
- Make sure they're in the root notes directory (not in a subfolder)
- TaskTriage is looking for files that don't have a matching `.triaged.txt` file in the `daily/` subdirectory
**"No input directories configured or available"**
- Make sure at least one of `EXTERNAL_INPUT_DIR` or `LOCAL_INPUT_DIR` is set in your `.env` file
- Verify the paths are correct and the directories actually exist
**"USB directory not found"**
- Is your USB device actually plugged in and mounted?
- Check that the `EXTERNAL_INPUT_DIR` path in your `.env` file is correct and points to the right location
- Raw notes should be in the root directory; `daily/`, `weekly/`, `monthly/`, and `annual/` subdirectories are created automatically for analysis files
**"Local directory not found"**
- Verify the `LOCAL_INPUT_DIR` path exists
- Raw notes should be in the root directory; analysis subdirectories (`daily/`, `weekly/`, `monthly/`, `annual/`) are created automatically
**How do I fix mistakes in my analyzed notes?**
- Simply edit the `.txt` or `.raw_notes.txt` file in the UI or your text editor
- Save your changes
- Run the analysis again; TaskTriage automatically detects that the file was modified after its analysis and will re-analyze it
- The new analysis replaces the old one (same filename), so you won't have duplicate analysis files
**What files trigger re-analysis?**
- `.txt` files that were modified after their `.triaged.txt` was created
- `.raw_notes.txt` files (extracted from images or PDFs) that were edited after their analysis
- The original image (`.png`, `.jpg`, etc.) or PDF (`.pdf`) file itself, if it was replaced with a newer version
**When does re-analysis NOT happen?**
- If the notes file is older than its analysis file (no changes detected)
- For files without any existing analysis (these are treated as new files, not re-analysis)
MIT

