
Help wanted: A/B test smriti context on your projects #13

@ashu17706

Description


What is this?

smriti context generates a compact project summary (~200-300 tokens) from your session history and injects it into .smriti/CLAUDE.md, which Claude Code auto-discovers. The idea is that new sessions start with awareness of recent work — hot files, git activity, recent sessions — instead of re-discovering everything from scratch.
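If you want to see what would be injected on your own project before running the A/B test, you can regenerate the summary and look at the file directly (both commands are shown elsewhere in this issue; the contents come from your own session history, so they will differ per project):

smriti context           # regenerate the summary from ingested sessions
cat .smriti/CLAUDE.md    # the ~200-300 token block a new Claude Code session picks up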

We don't know yet if this actually saves tokens. Our initial tests show mixed results, and we need data from real projects to understand where context injection matters.

How to test

Prerequisites

smriti ingest claude    # make sure sessions are ingested

Step 1: Baseline session (no context)

mv .smriti/CLAUDE.md .smriti/CLAUDE.md.bak

Start a new Claude Code session, give it a task, let it finish, exit.

Step 2: Context session

mv .smriti/CLAUDE.md.bak .smriti/CLAUDE.md
smriti context

Start a new Claude Code session, give the exact same task, let it finish, exit.

Step 3: Compare

smriti ingest claude
smriti compare --last
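If you plan to run this loop more than once, the three steps above can be wrapped in a small shell sketch. The two Claude Code sessions are still run by hand; the script only pauses while you do, and it assumes .smriti/CLAUDE.md already exists (run smriti context once beforehand if it doesn't):

#!/usr/bin/env bash
# Minimal sketch of the A/B loop above.
set -euo pipefail

smriti ingest claude                          # make sure past sessions are ingested

mv .smriti/CLAUDE.md .smriti/CLAUDE.md.bak    # hide context for the baseline run
read -rp "Run the baseline Claude Code session now, then press Enter: "

mv .smriti/CLAUDE.md.bak .smriti/CLAUDE.md    # restore and refresh the context
smriti context
read -rp "Run the context session with the SAME task, then press Enter: "

smriti ingest claude                          # ingest both new sessions
smriti compare --last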

What to share

Post a comment here with:

  1. The task prompt you used (same for both sessions)
  2. The smriti compare output (copy-paste the table)
  3. Project size — rough number of files, whether you have a detailed CLAUDE.md in the repo
  4. Your observations — did the context-aware session behave differently? Fewer exploratory reads? Better first attempt?
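If it helps, here is a plain template you can paste into your comment and fill in (placeholders in angle brackets):

Task prompt:
<the prompt you gave both sessions>

smriti compare output:
<paste the table>

Project size:
<~N files; detailed CLAUDE.md in repo: yes/no>

Observations:
<anything notable: exploratory reads, first-attempt quality, etc.>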

What we've found so far

Task Type                                 | Context Impact | Notes
Knowledge questions ("how does X work?")  | Minimal        | Both sessions found the right files immediately from project CLAUDE.md
Implementation tasks ("add --since flag") | Minimal        | Small, well-scoped tasks don't need exploration
Ambiguous/exploration tasks               | Untested       | Expected sweet spot: hot files guide Claude to the right area
Large codebases (no project CLAUDE.md)    | Untested       | Expected sweet spot: context replaces missing documentation

Good task prompts to try

These should stress-test whether context helps:

  • Ambiguous bug fix: "There's a bug in the search results, fix it" (forces exploration)
  • Cross-cutting feature: "Add logging to all database operations" (needs to find all DB touchpoints)
  • Continuation task: "Continue the refactoring we started yesterday" (tests session memory)
  • Large codebase, no CLAUDE.md: Any implementation task on a project without a detailed CLAUDE.md

Tips

  • Use smriti compare --json for machine-readable output
  • You can compare any two sessions: smriti compare <id-a> <id-b> (supports partial IDs)
  • Run smriti context --dry-run to see what context your sessions will get
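
For example (combining --last with --json is an assumption; pass two session IDs explicitly if that combination isn't supported in your version):

smriti context --dry-run                       # preview the context a new session would get
smriti compare --last --json > compare.json    # machine-readable result for your own tooling (flag combination assumed)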
