Implement push_changes in ContextSyncService#37
google-labs-jules[bot] wants to merge 768 commits into main from …
Conversation
Phase 1 of PRD-003 claudectl implementation:
- Add SmartDefaults module for auto-detection of output format, artifact type, project path, and collection
- Add --smart-defaults global CLI flag
- Add quick-add command with type auto-detection
- Wire SmartDefaults into deploy, remove, undeploy commands
- Add --format and --force options to commands
- Create bash completion script
- Add exit codes module (0-5 standard codes)
- Create alias install/uninstall commands for wrapper setup

New files:
- skillmeat/defaults.py - SmartDefaults class
- skillmeat/wrapper.py - Wrapper script generator
- skillmeat/exit_codes.py - Exit code standards
- bash/claudectl-completion.bash - Bash completion

Tests: 83 passed, 2 skipped (85 total)
Refs: PRD-003 Phase 1 (P1-T1 through P1-T11)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 2 of PRD-003 claudectl implementation:
- Enhance search command with --format option and smart defaults
- Add smart defaults to sync-check, sync-pull, sync-preview commands
- Enhance diff command with --stat mode and JSON output
- Add config get/set/list commands with smart defaults
- Add active-collection command for switching collections
- Create zsh completion script
- Create fish completion script
- Write claudectl quick start guide

Shell completions:
- zsh/_claudectl - Full zsh completion with subcommands
- fish/claudectl.fish - Fish completion script

Documentation:
- .claude/docs/claudectl-quickstart.md - Installation and first 5 commands

Refs: PRD-003 Phase 2 (P2-T1 through P2-T8)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add comprehensive bash scripting examples for SkillMeat CLI automation:
- Example 1: Deploy bundle - deploy predefined artifacts with error handling
- Example 2: Check status - JSON parsing with jq for deployment verification
- Example 3: Sync artifacts - check and apply updates across collection
- Example 4: Backup collection - export with manifest and metadata
- Example 5: Install bundle - import and deploy from bundle with validation
- Example 6: CI/CD integration - GitHub Actions/GitLab CI setup pattern
- Example 7: Retry logic - error handling with exponential backoff
- Example 8: Batch deploy - deploy multiple artifacts with JSON reporting
- Example 9: Audit report - detailed collection state audit with statistics

Features:
- 664 lines with 9+ complete, production-ready examples
- Proper bash error handling with early returns
- jq JSON parsing for CI/CD integration
- Retry mechanisms with exponential backoff
- Utility functions for common operations
- Complete usage documentation

Satisfies P3-T2 acceptance criteria:
✓ 5+ CI/CD workflow examples
✓ Deploy bundle, check status, error handling included
✓ JSON parsing with jq throughout
✓ Proper bash error handling
✓ ~300+ LOC with comprehensive comments

🤖 Generated with Claude Code
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
Phase 3 of PRD-003 claudectl implementation:
- Create comprehensive user guide (docs/claudectl-guide.md)
- Add scripting examples for CI/CD (docs/claudectl-examples.sh)
- Generate man page (man/claudectl.1)
- Add shell compatibility tests
- Expand integration test suite (91 tests)

Documentation:
- docs/claudectl-guide.md - Complete reference for all 14+ commands
- docs/claudectl-examples.sh - 9 CI/CD workflow examples
- man/claudectl.1 - Unix man page

Tests:
- tests/test_shell_compatibility.py - Bash/zsh/fish validation
- tests/test_claudectl_workflows.py - 91 tests (85 pass, 6 skip)

Fixes:
- Fixed import errors in test_cli_core.py
- Fixed import errors in test_cli_marketplace_integration.py

Refs: PRD-003 Phase 3 (P3-T1 through P3-T6)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
All 25 tasks completed across 3 phases:
- Phase 1: 11 tasks (Core MVP)
- Phase 2: 8 tasks (Management commands)
- Phase 3: 6 tasks (Documentation & Polish)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The claudectl wrapper was failing with "No such option: --smart-defaults" because the flag was never defined on the main @click.group() decorator.
- Added --smart-defaults flag (hidden) to the main group
- Added @click.pass_context to enable context passing
- Initialize ctx.obj["smart_defaults"] for subcommands to access

Fixes the incomplete implementation of PRD-003 Task P1-T2.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
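A minimal sketch of the pattern this fix describes: a hidden flag defined on the Click group, stored in `ctx.obj` for subcommands to read. The `deploy` subcommand and option help text here are illustrative, not the project's actual code.

```python
import click


@click.group()
@click.option("--smart-defaults", is_flag=True, hidden=True,
              help="Enable auto-detected formats, paths, and collections.")
@click.pass_context
def cli(ctx: click.Context, smart_defaults: bool) -> None:
    # ctx.obj is shared with every subcommand via @click.pass_context.
    ctx.ensure_object(dict)
    ctx.obj["smart_defaults"] = smart_defaults


@cli.command()
@click.pass_context
def deploy(ctx: click.Context) -> None:
    # Subcommands read the flag from the shared context object.
    click.echo(f"smart_defaults={ctx.obj['smart_defaults']}")
```

Because the option is defined on the group (not a subcommand), `claudectl --smart-defaults deploy` parses correctly instead of raising "No such option".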
feat: Add skillmeat-cli skill, claudectl alias, and confidence scoring system
Replace raw fetch() calls with apiRequest() helper to ensure API
requests are routed to the correct backend server (port 8080) instead
of Next.js internal routing (port 3000).
- Add apiRequest import from @/lib/api
- Replace 5 raw fetch() calls with apiRequest():
- upstream-diff query
- project-diff query
- sync mutation
- deploy mutation
- take-upstream mutation
This fixes the "ApiError: Request failed" error when navigating to
the Sync Status tab on the /projects/{id}/ page.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Update release notes with all implemented features (Notification System, Collections Navigation, Groups, Persistent Cache, Marketplace GitHub Ingestion, Context Entities, Artifact Flow Modal, Tags Refactor)
- Complete CLI reference with 92+ commands across 25 command groups
- Update web-ui-guide with Collections Navigation and Notification Center docs
- Update quickstart with web interface section and current examples
- Add YAML frontmatter to searching.md and team-sharing-guide.md
- Verify all feature guides (marketplace, MCP, team sharing, syncing, searching)
- Mark all documentation cleanup tasks as complete in implementation plan

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Human-focused documentation covering:
- Natural language artifact management overview
- Core capabilities (discovery, deployment, bundles, templates)
- Quick start examples for users and AI agents
- Agent integration patterns (gap detection, confidence scoring)
- File structure and navigation guide

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 1 of marketplace-github-ingestion-remediation:
- Uncommented heuristic detector import and initialization
- Wired detect_artifacts_in_tree in scan_repository() and scan_github_source()
- Wired catalog entry creation from scan results in rescan_source()

Fixes: Scans now return detected artifacts instead of an empty list.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Phase 2 of marketplace-github-ingestion-remediation:
- Added Alembic migration for enable_frontmatter_detection column
- Updated MarketplaceSource model with new boolean field
- Updated Pydantic schemas (Create, Update, Response)
- Enhanced HeuristicDetector with frontmatter parsing support
- Added toggle UI to add/edit source modals

The toggle allows repos with non-standard directory names to be detected via YAML frontmatter in markdown files.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The `detect_artifacts_in_tree` function returns `DetectedArtifact` Pydantic objects, not dicts. Updated `artifact_type` access to use attribute access with a fallback for dict compatibility.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…acking

- Implementation plan for PR #26 gap remediation
- Phase 1 progress tracking (marked complete)
- Gap analysis worknotes and quick reference

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Captures gap analysis and proposed enhancements for:
- Configurable storage paths (CLI/Web/Config)
- Offline/Online mode toggle
- Web UI settings page

Context from marketplace remediation Phase 3 analysis.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement the previously stubbed import coordinator download functionality to complete the marketplace-to-collection import flow:
- Add _download_artifact method with GitHub API file download
  - Parse GitHub URLs (tree/blob/root formats)
  - Recursive directory download
  - Rate limiting with exponential backoff
  - Binary vs text file handling
- Add _update_manifest method to update collection manifest.toml
  - Use ManifestManager for atomic writes
  - Create Artifact with proper metadata
  - Handle duplicate artifacts (overwrite scenario)
- Wire download flow in _process_entry, replacing the stub
  - Download files to target directory
  - Update manifest on success
  - Proper error handling and logging
- Fix tests to mock HTTP calls
  - Add mock_download fixture
  - Add mock_manifest fixture
  - All 36 import coordinator tests pass

Refs: Phase 3, REM-3.1, REM-3.2, REM-3.3

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
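The "Parse GitHub URLs (tree/blob/root formats)" step can be sketched as below. This is an illustrative helper, not the project's `_download_artifact` implementation; the field names in the returned dict are assumptions.

```python
from urllib.parse import urlparse


def parse_github_url(url: str) -> dict[str, str]:
    """Split a GitHub URL into owner, repo, ref, and in-repo path.

    Handles three shapes:
      root: https://github.com/{owner}/{repo}
      tree: https://github.com/{owner}/{repo}/tree/{ref}/{path}
      blob: https://github.com/{owner}/{repo}/blob/{ref}/{path}
    """
    parts = urlparse(url).path.strip("/").split("/")
    owner, repo = parts[0], parts[1]
    if len(parts) > 3 and parts[2] in ("tree", "blob"):
        return {"owner": owner, "repo": repo,
                "ref": parts[3], "path": "/".join(parts[4:])}
    # Repo root: no ref/path component in the URL.
    return {"owner": owner, "repo": repo, "ref": "", "path": ""}
```

From here a downloader would call the GitHub contents API with `owner`/`repo`/`ref`/`path` and recurse on directory entries.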
The `SourceResponse` schema required the field, but `source_to_response()` wasn't passing it, causing Pydantic validation errors.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
ScanResultDTO only had artifacts_found (a count), but the router needed to iterate over the actual artifacts list. Added:
- artifacts: List[DetectedArtifact] field to ScanResultDTO schema
- artifacts parameter to GitHubScanner.scan_repository() returns
- model_rebuild() call to resolve forward reference

Fixes scan failing with "'ScanResultDTO' object has no attribute 'artifacts'"

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix Sync button: wire onRescan prop in sources page to trigger rescan mutation with proper cache invalidation and toast notifications
- Fix Edit/Delete buttons: move from overlapping header position to footer bar, left of Rescan button, for proper layout
- Refactor Open button: rename to "Source" with ExternalLink icon; opens GitHub repo in new tab instead of internal navigation
- Preserve card click navigation to detail page

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…ion plan

Add planning and tracking artifacts for marketplace confidence score improvements:
- PRD: Tooltip breakdown, filtering, score normalization
- Implementation Plan: 6 phases, 40 tasks, 21 story points
- Progress tracking: Phase 1-2 (backend), Phase 3-5 (frontend), Phase 6 (testing)
- Context file: Technical notes and key decisions

Key improvements:
- Fix scoring algorithm (max 65 → 0-100 normalized)
- Add ScoreBreakdown component (reusable in modal/tooltip)
- Add confidence filter controls with URL persistence
- Show hidden low-confidence artifacts on toggle

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add MAX_RAW_SCORE=65 constant and normalize_score() function (TASK-1.1)
- Create Alembic migration for raw_score and score_breakdown columns (TASK-2.1)
- Add min_confidence, max_confidence, include_below_threshold params (TASK-2.5)

Phase 1-2, Batch 1 of confidence-score-enhancements

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
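A minimal sketch of what a `normalize_score()` built around `MAX_RAW_SCORE=65` likely looks like; the clamping and rounding choices here are assumptions, not the project's exact implementation.

```python
MAX_RAW_SCORE = 65  # highest achievable raw heuristic score


def normalize_score(raw: int) -> int:
    """Map a raw heuristic score onto the 0-100 confidence scale.

    Out-of-range inputs are clamped so callers never see values
    outside 0-100 even if a detector misbehaves.
    """
    clamped = max(0, min(raw, MAX_RAW_SCORE))
    return round(clamped * 100 / MAX_RAW_SCORE)
```

With this mapping, the previous ceiling of 65 becomes a full-confidence 100, which is the "max 65 → 0-100 normalized" fix described in the planning commit.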
…nd filters

- Refactor _score_directory() to return breakdown dict (TASK-1.2)
- Add raw_score and score_breakdown ORM columns (TASK-2.2)
- Create data migration to populate raw_score (TASK-2.9)
- Implement confidence range filter logic (TASK-2.6)
- Implement low-confidence toggle with CONFIDENCE_THRESHOLD=30 (TASK-2.7)

Phase 1-2, Batch 2 of confidence-score-enhancements

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…icMatch

- Implement complete breakdown dict with normalized_score (TASK-1.3)
- Add raw_score and breakdown fields to HeuristicMatch schema (TASK-1.6)
- Pass breakdown through to HeuristicMatch construction
- Add validation constraints (0-65 for raw_score)

Phase 1-2, Batch 3 of confidence-score-enhancements

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Verify normalization integration in detector (TASK-1.4)
- Add raw_score and score_breakdown to CatalogEntryResponse (TASK-2.3)
- Hydrate breakdown data in catalog query responses (TASK-2.4)
- Complete data flow from DB to API response

Phase 1-2, Batch 4 of confidence-score-enhancements

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add 15 unit tests for normalize_score() and breakdown structure (TASK-1.5)
- Add 13 integration tests for confidence filtering endpoints (TASK-2.8)
- Test edge cases: 0, negative, max, threshold interactions
- Test response includes raw_score and score_breakdown fields

Phase 1-2, Batch 5 of confidence-score-enhancements

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
useProjectCache was using the same query key ['projects', 'list'] as useProjects, but storing different data shapes:
- useProjects: ProjectSummary[] (array)
- useProjectCache: ProjectsResponse (object with items + page_info)

This caused "map is not a function" errors in UnifiedEntityModal when navigating from /projects to /projects/[id], because the cached object was returned instead of the expected array.

Fix: Give useProjectCache a distinct key ['projects', 'list', 'with-cache-info']

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Filter artifacts passed to BulkImportModal to only include new ones (matchType === 'none') instead of all discovered artifacts
- Update status label and badge logic to use collection_match/collection_status data instead of always showing "Already in Collection"
- Add getEffectiveMatchType() helper to extract match type from artifact

Status now correctly shows:
- "New - Will add to Collection & Project" (green) for new artifacts
- "Already in Collection, will add to Project" (blue) for exact/hash matches
- "Similar artifact exists - Review needed" (yellow) for name_type matches

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Two issues fixed:

1. YAML frontmatter parsing: Quote argument-hint values containing square brackets in 16 command markdown files. YAML interprets unquoted [text] as flow sequences, causing parse errors.
2. Bulk import error handling: Change validation failure behavior from failing the entire batch (422) to gracefully skipping failed artifacts. Valid artifacts now import while failed ones return with status="failed" and detailed error messages.

Changes:
- Quote argument-hint values in .claude/commands/**/*.md files
- Modify bulk_import_artifacts() to track validation failures per-artifact instead of raising 422
- Build combined results with validation failures + import results
- Update tests to expect graceful error handling

Root cause: YAML parser interpreted [--impl-only|-i] as array start
Resolves: REQ-20260109-skillmeat-02

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
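The per-artifact failure tracking described in issue 2 can be sketched as below. The function signature and result-dict keys are illustrative, assuming the real `bulk_import_artifacts()` follows a similar shape.

```python
from typing import Any, Callable


def bulk_import(
    artifacts: list[dict[str, Any]],
    validate: Callable[[dict[str, Any]], None],  # raises ValueError on bad input
    do_import: Callable[[dict[str, Any]], None],
) -> list[dict[str, str]]:
    """Import every valid artifact; record failures instead of raising.

    Instead of aborting the whole batch with a 422 on the first bad
    artifact, each failure is captured as a result row with
    status="failed" and an error message, and the loop continues.
    """
    results: list[dict[str, str]] = []
    for artifact in artifacts:
        try:
            validate(artifact)
        except ValueError as exc:
            results.append({"name": artifact["name"],
                            "status": "failed", "error": str(exc)})
            continue
        do_import(artifact)
        results.append({"name": artifact["name"], "status": "imported"})
    return results
```

The endpoint can then return HTTP 200 with a mixed result list, leaving it to the client to surface the failed rows.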
…check

Bug 1: Importer directory validation
- Updated _validate_artifact_structure() to use ARTIFACT_SIGNATURES
- Skills (is_directory=True) require directory with SKILL.md
- Commands/agents (is_directory=False) allow single .md files
- Added validation for .md extension and YAML frontmatter

Bug 2: Discovery existence check
- Updated check_artifact_exists() to use ARTIFACT_SIGNATURES
- For file-based artifacts, search for direct files, nested files, and legacy directory format (backwards compatibility)
- Fixed manifest fallback to apply regardless of artifact type

Also:
- Removed xfail markers from test_discovery_nested.py (tests now pass)
- Added implementation plan doc for bug fixes

Refs: quick-feature/discovery-import-bugs-1-2

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
…rsing

This commit resolves three bugs in the discovery and import system:

1. **save_collection signature mismatch**: Removed extra `collection_name` argument from save_collection() calls. The method derives the path from collection.name internally.
   - Fixed in: skillmeat/core/importer.py (line 414)
   - Fixed in: skillmeat/api/routers/mcp.py (5 locations)
2. **YAML frontmatter parsing errors**: Wrapped description fields in quotes to prevent the YAML parser from misinterpreting embedded colons (like `Examples:` and `Context:`) as key-value separators.
   - Fixed 29 agent files in .claude/agents/
   - Fixed 1 command file in .claude/commands/review/
3. **Clarification: Skills ARE being detected**: Investigation confirmed skills are being discovered correctly (19 found). The initial bug report was based on misinterpreting YAML parsing errors.

Root cause: The description fields contained unquoted strings with embedded `<example>Context: user:` patterns that YAML parsed as nested mappings.

Verification: All 127 artifacts now discovered (51 agents, 57 commands, 19 skills) with only 4 expected hook structure warnings remaining.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Hooks can be a mix of file types (.sh scripts, .md files, etc.), which made individual file detection unreliable. This implements a simpler directory-based approach:
- If .claude/hooks/ has NO subdirectories: treat the entire directory as ONE hook artifact named "hooks-root"
- If .claude/hooks/ HAS subdirectories: each subdirectory becomes an individual hook artifact
- Loose files at the hooks root alongside subdirs create a "hooks-root" artifact

Changes:
- artifact_detection.py: Set hooks as is_directory=True
- discovery.py: Add _scan_hooks_directory() and _create_hook_artifact() methods with special handling for hook type
- Fixed type annotations throughout discovery.py
- Updated test fixtures to match new behavior

This eliminates "Invalid artifact structure" errors for shell script hooks that don't have YAML frontmatter.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
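The three rules above can be sketched as a single classification function. This is an illustrative reconstruction, not the project's `_scan_hooks_directory()`; the return shape (artifact name → file list) is an assumption.

```python
from pathlib import Path


def classify_hooks_dir(hooks_dir: Path) -> dict[str, list[Path]]:
    """Group the contents of .claude/hooks/ into hook artifacts.

    - No subdirectories: the whole directory is one "hooks-root" artifact.
    - Subdirectories present: each subdirectory is its own hook artifact,
      and any loose files at the root still form a "hooks-root" artifact.
    """
    subdirs = [p for p in sorted(hooks_dir.iterdir()) if p.is_dir()]
    loose = [p for p in sorted(hooks_dir.iterdir()) if p.is_file()]
    if not subdirs:
        return {"hooks-root": loose}
    artifacts = {d.name: sorted(d.iterdir()) for d in subdirs}
    if loose:
        artifacts["hooks-root"] = loose
    return artifacts
```

Because classification depends only on directory shape, a bare `.sh` script without YAML frontmatter no longer trips per-file structure validation.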
Plan: .claude/progress/quick-features/discovery-show-all-artifacts.md
Files Changed: 3
Tests: 136 passed (all discovery-related tests)
Build: Success
Summary
The Discovery flow now returns ALL detected artifacts from a Project, properly categorized by their Collection status:
| Section | collection_match.type | Description |
| --- | --- | --- |
| Exact Matches | `exact`, `hash` | Already in Collection (95 artifacts for this project) |
| Possible Duplicates | `name_type` | Similar name found in Collection (1 artifact) |
| New Artifacts | `none` | Not in Collection, ready to import (33 artifacts) |
Changes Made
1. skillmeat/core/discovery.py (+137/-86 lines)
- Removed filtering logic that excluded artifacts in both Collection and Project
- Returns ALL discovered artifacts with collection_match populated
- Fixed bug: Collection path was {base}/artifacts/{type}s/ but should be {base}/{type}s/
- Added logic to downgrade fuzzy matches to name_type when exact check fails
2. tests/core/test_discovery_prescan.py (+64/-45 lines)
- Updated test fixtures to match actual Collection structure (no artifacts/ subdirectory)
3. skillmeat/core/tests/test_discovery_service.py (+51/-41 lines)
- Updated test assertions to match new behavior (all artifacts returned, not just importable)
Verification
Tested on the skillmeat project itself:
- 129 total artifacts discovered
- 95 exact matches (already in Collection)
- 1 possible duplicate (notebooklm → notebooklm-skill)
- 33 new artifacts (importable)
- chrome-devtools now correctly shows as exact match in Collection
Implement cursor-based infinite scrolling for the Collection page to handle large collections efficiently. Previously limited to 100 artifacts loaded at once.

Changes:
- Add useIntersectionObserver hook for scroll detection
- Add fetchCollectionArtifactsPaginated API function with cursor support
- Add useInfiniteCollectionArtifacts hook using TanStack Query's useInfiniteQuery
- Update Collection page to use infinite scroll with load-more trigger
- Show artifact count indicator (X of Y artifacts)
- Display loading spinner when fetching next page

The backend already supports cursor-based pagination via limit/after parameters. This change enables the frontend to leverage that for better UX with large artifact collections.

Refs: quick-feature/collection-infinite-scroll

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The initial implementation only added infinite scroll for specific collection views. The "All Collections" view (when no collection is selected) was still using the useArtifacts hook with a hardcoded 100-artifact limit. This caused:
- Only the first 100 artifacts displayed
- Skills beyond the 100th position didn't show
- Sorting only affected loaded artifacts

Changes:
- Add fetchArtifactsPaginated() API function with cursor support
- Add useInfiniteArtifacts hook using TanStack Query's useInfiniteQuery
- Update Collection page to use infinite scroll for BOTH views
- Unified pagination state (fetchNextPage, hasNextPage) for all modes

Now both "All Collections" and specific collection views properly paginate through all artifacts using cursor-based pagination.

Refs: quick-feature/collection-infinite-scroll

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The collection page header badge was showing the number of currently loaded artifacts (filteredArtifacts.length) instead of the total count from the API (totalCount). This was confusing because the "Showing x of y artifacts" text below correctly displayed both the loaded count and total, but the header only showed the loaded count which increased as users scrolled. Root cause: Line 594 passed filteredArtifacts.length instead of totalCount to the CollectionHeader artifactCount prop. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Normalization changes: updated FE extractors to emit `api_endpoint:` IDs, normalize paths/params, apply the `/api/{version}` prefix, and add `raw_path`, `raw_method`, and `method_inferred`; added helpers in `frontend_utils.py`; updated `codebase-graph-spec.md`.
Update request logging instructions with new Rules file.
When artifacts were added to user collections but not cached in the database, the metadata service fallback returned type="unknown", which broke the frontend modal display with "Entity type 'unknown' is not yet supported for detailed display."

Root cause: The fallback didn't parse the artifact_id (format: type:name) to extract the actual type.

Changes:
- Add _parse_artifact_id() helper to extract type and name from IDs
- Update get_artifact_metadata() fallback to use parsed values
- Artifacts with ID "agent:my-agent" now correctly return type="agent"

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
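A minimal sketch of the `type:name` parsing the fix describes. The fallback tuple shape is an assumption; the real `_parse_artifact_id()` may differ in detail.

```python
def parse_artifact_id(artifact_id: str) -> tuple[str, str]:
    """Split a "type:name" artifact ID into (type, name).

    IDs without a usable prefix fall back to ("unknown", artifact_id),
    matching the old behavior only for genuinely unparseable IDs.
    """
    type_, sep, name = artifact_id.partition(":")
    if not sep or not type_ or not name:
        return ("unknown", artifact_id)
    return (type_, name)
```

With this in place, the metadata fallback can return the parsed type instead of the hardcoded "unknown" that broke the modal.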
Single-file artifacts (agents, commands stored as .md files) were returning
404 when trying to view their contents. The file endpoints assumed all
artifacts were directories containing files, but single-file artifacts
have artifact.path pointing directly to the file.
Root cause: Endpoints constructed paths like:
artifact_root / file_path = agents/prd-writer.md/prd-writer.md (invalid)
Fix: Added is_single_file_artifact detection in all file endpoints:
- GET /{artifact_id}/files/{file_path}: Return content if path matches
- PUT /{artifact_id}/files/{file_path}: Allow updates if path matches
- POST /{artifact_id}/files/{file_path}: Reject with 400 (single-file)
- DELETE /{artifact_id}/files/{file_path}: Reject with 400 (use artifact delete)
The list_artifact_files endpoint already handled this correctly.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Rules are always loaded into context, so slim them to essential guidance only. Detailed patterns moved to on-demand context files.

Changes:
- Slim rules from ~1200 lines to ~240 lines (~12K tokens saved/session)
- Create 5 new key-context files with full patterns and examples
- Update CLAUDE.md Progressive Disclosure section

Rules now contain:
- Prime directives and critical conventions
- Quick reference tables
- Links to detailed context files

New context files:
- debugging-patterns.md - Bug categories, delegation patterns
- router-patterns.md - Full FastAPI examples
- component-patterns.md - React/shadcn patterns
- nextjs-patterns.md - App Router patterns
- testing-patterns.md - Jest/Playwright templates

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add __main__.py unified pipeline orchestrator that runs extract → tag → split → validate in one command
- Add post-tool hook for automatic symbol updates on code changes
- Add pre-commit validation hook for symbol file integrity
- Fix init_symbols.py to detect package-based structures (e.g., skillmeat/api/, skillmeat/web/) by checking for Python packages with __init__.py
- Update README.md with unified pipeline documentation
- Update symbols.config.json with correct project paths

The unified pipeline supports:
- --domain flag (all, ui, web, api)
- --skip-split and --skip-validate flags
- --changed-only for incremental updates
- --verbose for detailed output

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The Contents and Sync Status tabs had `flex` in their TabsContent className, which overrode Radix UI's `display: none` for inactive tabs. This caused multiple tabpanels to render with `display: flex` and split the available space. Fix: Use `data-[state=active]:flex` variant so flex display only applies when the tab is active. Content height now correctly fills the modal. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implemented the logic to write content to deployed files and update deployment records in ContextSyncService.push_changes. This involves:
- Reading the artifact content from the cache database.
- Writing the content to the deployed file path.
- Computing the new content hash.
- Updating the deployment record in .skillmeat-deployed.toml with the new hash and timestamp.
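The write-and-hash core of the steps above can be sketched as follows. The function name and the choice of SHA-256 are assumptions; the real `push_changes` also reads from the cache database and updates the TOML deployment record.

```python
import hashlib
from pathlib import Path


def write_deployed_file(content: str, deployed_path: Path) -> str:
    """Write collection content to the deployed path and return its hash.

    The returned hash is what the deployment record (e.g.
    .skillmeat-deployed.toml) would store alongside a fresh timestamp.
    """
    deployed_path.parent.mkdir(parents=True, exist_ok=True)
    deployed_path.write_text(content, encoding="utf-8")
    # Hash the string content, not the Path object (mirrors the
    # compute_content_hash fix mentioned in the closing summary).
    return hashlib.sha256(content.encode("utf-8")).hexdigest()
```

Hashing the same string that was written keeps the stored hash consistent with what a later sync check would compute from the file.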
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode; when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
📊 API Performance Benchmark Results
Implemented `push_changes` in `ContextSyncService` to enable pushing context entity changes from collection to deployed projects.

Key changes:
- Read artifact content via `CacheManager`.
- Updated the deployment record via `DeploymentTracker` with the new content hash and timestamp.
- Added the missing `datetime` import.
- Fixed `compute_content_hash` usage to pass string content instead of a Path object.

PR created automatically by Jules for task 105786062713227212 started by @miethe