Merged
Changes from all commits (61 commits):
da2035d  feat: implement AST-based code intelligence indexing system (deanq, Jan 28, 2026)
2e96ea4  feat: implement MCP server for code intelligence integration with Cla… (deanq, Jan 28, 2026)
7a73745  chore: add MCP server configuration (deanq, Jan 28, 2026)
8284f2c  chore: allow PSF-2.0 license for MCP SDK dependencies (deanq, Jan 28, 2026)
ca71056  chore: expand allowed licenses for MCP and common dev dependencies (deanq, Jan 28, 2026)
23b0a14  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 28, 2026)
efd9e26  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 28, 2026)
affc5cc  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 29, 2026)
3b707ab  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 29, 2026)
b0e4855  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 29, 2026)
daea6fa  fix: address PR #158 code review feedback (deanq, Jan 29, 2026)
6163131  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 30, 2026)
20fc01b  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 30, 2026)
e206a31  feat: implement AST-based code intelligence indexing system (deanq, Jan 28, 2026)
f3ff0f3  feat: implement MCP server for code intelligence integration with Cla… (deanq, Jan 28, 2026)
598649b  chore: add MCP server configuration (deanq, Jan 28, 2026)
bfdda55  chore: allow PSF-2.0 license for MCP SDK dependencies (deanq, Jan 28, 2026)
d80e36e  chore: expand allowed licenses for MCP and common dev dependencies (deanq, Jan 28, 2026)
e872d39  refactor: rename package from tetra_rp to runpod_flash (deanq, Jan 29, 2026)
d346ca2  refactor: rename internal tetra references to flash (deanq, Jan 29, 2026)
45d1aba  Merge branch 'deanq/ae-1923-code-intel-mcp-skill' into deanq/ae-1522-… (deanq, Jan 30, 2026)
0558c1c  Merge branch 'main' into deanq/ae-1923-code-intel-mcp-skill (deanq, Jan 31, 2026)
de7d0a5  feat: smart re-indexing and test output parser for code intel MCP (deanq, Jan 31, 2026)
1312641  feat: enforce MCP tool usage and eliminate bash command alternatives (deanq, Jan 31, 2026)
514fd68  feat: add Claude Code project-wide permissions and update project config (deanq, Jan 31, 2026)
8fb189b  build: update to allow-licenses (deanq, Jan 31, 2026)
733dce9  fix: add BSD license to allowed list for httpx dependency (deanq, Jan 31, 2026)
a581325  chore: remove dependency-review workflow (deanq, Jan 31, 2026)
c039993  refactor: rename package from tetra_rp to runpod_flash (deanq, Jan 29, 2026)
f0884a5  refactor: rename internal tetra references to flash (deanq, Jan 29, 2026)
adf4b77  feat: implement AST-based code intelligence indexing system (deanq, Jan 28, 2026)
acafef1  feat: implement MCP server for code intelligence integration with Cla… (deanq, Jan 28, 2026)
61063fc  fix: address PR #158 code review feedback (deanq, Jan 29, 2026)
85e38e4  refactor: update documentation and references from tetra_rp to runpod… (deanq, Feb 1, 2026)
d74aa2b  build: update uv.lock (deanq, Feb 1, 2026)
fb49d09  refactor: rename environment variables from TETRA_* to FLASH_* (deanq, Feb 1, 2026)
b4e762f  refactor: rename protobuf package from tetra to flash (deanq, Feb 1, 2026)
98ca750  docs: update all references from tetra to flash (deanq, Feb 1, 2026)
94c38f8  chore: update release configuration for runpod-flash (deanq, Feb 1, 2026)
7775d61  Merge remote-tracking branch 'origin/deanq/ae-1522-rename-tetra-to-fl… (deanq, Feb 1, 2026)
cdd0617  chore: format (deanq, Feb 1, 2026)
494c96c  fix: add missing async mock for get_environment_by_name in deploy tests (deanq, Feb 1, 2026)
cbf6b76  refactor: update resource tracking path from .tetra_resources.pkl to … (deanq, Feb 1, 2026)
104392a  docs: complete tetra-to-flash rename and remove migration artifacts (deanq, Feb 1, 2026)
f5581da  dev: CLAUDE.md ensures the use of mcp code intel for code exploration (deanq, Feb 2, 2026)
d3f7a13  Merge branch 'deanq/ae-1923-code-intel-mcp-skill' into deanq/ae-1522-… (deanq, Feb 2, 2026)
238c1aa  merge: integrate main branch into deanq/ae-1522-rename-tetra-to-flash (deanq, Feb 3, 2026)
bccfb3f  Merge branch 'main' into deanq/ae-1522-rename-tetra-to-flash (deanq, Feb 3, 2026)
936bc6f  refactor: rename package and constants from tetra to flash (phase 1-4) (deanq, Feb 3, 2026)
ef2eba0  refactor: update test suite and docstrings for runpod-flash rename (p… (deanq, Feb 3, 2026)
23a2743  fix: update test assertions for tetra-rp image names (deanq, Feb 3, 2026)
d13a751  refactor: update Makefile and scripts for runpod-flash rename (phase 6) (deanq, Feb 3, 2026)
1339d88  refactor: update template files for runpod-flash rename (phase 8) (deanq, Feb 3, 2026)
d420e85  docs: update all documentation for runpod-flash rename (phase 9) (deanq, Feb 3, 2026)
2411db0  fix: code formatting fixes (deanq, Feb 3, 2026)
3cefe3a  refactor: update Docker images from tetra-rp to flash (phase 10) (deanq, Feb 3, 2026)
aacadf4  refactor: final cleanup of tetra references (phase 11) (deanq, Feb 3, 2026)
3ec5724  refactor: update config and docs for flash-code-intel (phase 11.5) (deanq, Feb 3, 2026)
89a296b  docs: update worker-tetra references to worker-flash (deanq, Feb 3, 2026)
711a9f7  docs: runpod_flash references (deanq, Feb 3, 2026)
d5023d8  Merge branch 'main' into deanq/ae-1522-rename-tetra-to-flash (deanq, Feb 4, 2026)
14 changes: 7 additions & 7 deletions .claude/settings.json
@@ -4,16 +4,16 @@
},
"enableAllProjectMcpServers": true,
"enabledMcpjsonServers": [
"tetra-code-intel"
"flash-code-intel"
],
"permissions": {
"allow": [
"mcp__tetra-code-intel__find_by_decorator",
"mcp__tetra-code-intel__find_symbol",
"mcp__tetra-code-intel__get_class_interface",
"mcp__tetra-code-intel__list_classes",
"mcp__tetra-code-intel__list_file_symbols",
"mcp__tetra-code-intel__parse_test_output",
"mcp__flash-code-intel__find_by_decorator",
"mcp__flash-code-intel__find_symbol",
"mcp__flash-code-intel__get_class_interface",
"mcp__flash-code-intel__list_classes",
"mcp__flash-code-intel__list_file_symbols",
"mcp__flash-code-intel__parse_test_output",
"Bash(./scripts/validate-wheel.sh:*)",
"Bash(git diff:*)",
"Bash(git fetch:*)",
@@ -1,9 +1,9 @@
# Tetra-rp Framework Explorer Skill
# Flash Framework Explorer Skill

## When to Use

Use this skill when:
- Exploring the tetra-rp framework codebase
- Exploring the runpod-flash framework codebase
- Understanding class hierarchies and relationships
- Finding where methods or classes are defined
- Checking what decorators are used in the codebase
@@ -74,7 +74,7 @@ After running tests (`make test-unit`, `make test`, `pytest`), **ALWAYS use `par
3. Only read the full file if you need implementation details

**Bad - Reading files directly:**
1. Read entire `src/tetra_rp/core/resources/serverless.py` (500+ tokens)
1. Read entire `src/runpod_flash/core/resources/serverless.py` (500+ tokens)
2. Search manually for ServerlessEndpoint

**Good - Parse test output:**
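
Purely for illustration, and not the skill's own example or the MCP tool's implementation, a minimal pytest-summary parser built on the standard library might condense test output like this (the function name, regexes, and return shape are assumptions):

```python
# Illustrative sketch only: roughly what a parse_test_output-style helper
# might extract from raw pytest output. The real MCP tool's behavior and
# return shape are not shown in this diff.
import re

SUMMARY = re.compile(r"=+ (?P<body>.*?) in (?P<secs>[\d.]+)s =+")
FAILURE = re.compile(r"^FAILED (?P<test>\S+)(?: - (?P<reason>.*))?$", re.MULTILINE)

def parse_test_output(raw: str) -> dict:
    """Condense raw pytest output into pass/fail counts and failed test ids."""
    counts = {}
    match = SUMMARY.search(raw)
    if match:
        # e.g. "1 failed, 12 passed, 2 skipped"
        for part in match.group("body").split(","):
            number, label = part.strip().split(" ", 1)
            counts[label] = int(number)
    failures = [
        {"test": m.group("test"), "reason": m.group("reason") or ""}
        for m in FAILURE.finditer(raw)
    ]
    return {"counts": counts, "failures": failures}

if __name__ == "__main__":
    sample = (
        "FAILED tests/unit/test_deploy.py::test_env - AssertionError\n"
        "=========== 1 failed, 12 passed in 0.42s ===========\n"
    )
    print(parse_test_output(sample))
```
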
2 changes: 1 addition & 1 deletion .github/workflows/release-please.yml
@@ -76,7 +76,7 @@ jobs:

environment:
name: pypi-production
url: https://pypi.org/project/tetra-rp/
url: https://pypi.org/project/runpod-flash/

steps:
- name: Checkout code
3 changes: 2 additions & 1 deletion .mcp.json
@@ -1,6 +1,6 @@
{
"mcpServers": {
"tetra-code-intel": {
"flash-code-intel": {
"command": "uv",
"args": [
"run",
@@ -11,3 +11,4 @@
}
}
}

2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "0.25.2"
".": "0.25.1"
}
353 changes: 169 additions & 184 deletions CHANGELOG.md

Large diffs are not rendered by default.

14 changes: 7 additions & 7 deletions CLAUDE.md
@@ -1,13 +1,13 @@
# tetra-rp Project Configuration
# runpod-flash Project Configuration

## Claude Code Configuration

When using Claude Code on this project, always prefer the tetra-code-intel MCP tools for code exploration instead of using Explore agents or generic search:
When using Claude Code on this project, always prefer the flash-code-intel MCP tools for code exploration instead of using Explore agents or generic search:

- `mcp__tetra-code-intel__find_symbol` - Search for classes, functions, methods by name
- `mcp__tetra-code-intel__get_class_interface` - Inspect class methods and properties
- `mcp__tetra-code-intel__list_file_symbols` - View file structure without reading full content
- `mcp__tetra-code-intel__list_classes` - Explore the class hierarchy
- `mcp__tetra-code-intel__find_by_decorator` - Find decorated items (e.g., `@property`, `@remote`)
- `mcp__flash-code-intel__find_symbol` - Search for classes, functions, methods by name
- `mcp__flash-code-intel__get_class_interface` - Inspect class methods and properties
- `mcp__flash-code-intel__list_file_symbols` - View file structure without reading full content
- `mcp__flash-code-intel__list_classes` - Explore the class hierarchy
- `mcp__flash-code-intel__find_by_decorator` - Find decorated items (e.g., `@property`, `@remote`)

**Do NOT** use the Task tool with the "Explore" subagent for codebase exploration. Use the MCP tools directly instead.
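
The tools above are backed by the AST-based index this PR introduces. As a hedged sketch of the kind of structure they surface (not the PR's actual indexer, tool signatures, or storage), Python's `ast` module alone can produce a symbol listing and a decorator filter; the file path in the usage example is illustrative:

```python
# Minimal sketch of the kind of AST walk that tools like list_file_symbols
# and find_by_decorator imply. This is not the PR's indexer; it only shows
# how ast can surface file structure without reading the whole file.
import ast
from pathlib import Path

def list_file_symbols(path: str) -> list[dict]:
    """Return classes and functions defined in a file, with line numbers."""
    tree = ast.parse(Path(path).read_text())
    symbols = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            symbols.append({
                "name": node.name,
                "kind": type(node).__name__,
                "line": node.lineno,
                "decorators": [ast.unparse(d) for d in node.decorator_list],
            })
    return symbols

def find_by_decorator(path: str, decorator: str) -> list[dict]:
    """Filter symbols whose decorator list mentions the given name, e.g. 'remote'."""
    return [
        s for s in list_file_symbols(path)
        if any(decorator in d for d in s["decorators"])
    ]

if __name__ == "__main__":
    # Example path taken from the docs in this PR; adjust to your checkout.
    for sym in find_by_decorator("runpod_flash/decorators.py", "remote"):
        print(sym["line"], sym["kind"], sym["name"])
```
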
16 changes: 8 additions & 8 deletions CONTRIBUTING.md
@@ -1,4 +1,4 @@
# Contributing to tetra-rp
# Contributing to runpod-flash

## Development Setup

@@ -12,8 +12,8 @@

```bash
# Clone the repository
git clone https://github.com/your-org/tetra-rp.git
cd tetra-rp
git clone https://github.com/your-org/runpod-flash.git
cd runpod-flash

# Install dependencies and package in editable mode
make dev
@@ -278,7 +278,7 @@ make quality-check

**Coverage threshold failures**
- Use `--no-cov` flag: `uv run pytest tests/unit/test_file.py -v --no-cov`
- Or test specific module: `uv run pytest --cov=src/tetra_rp/module`
- Or test specific module: `uv run pytest --cov=src/runpod_flash/module`

## Release Process

@@ -310,7 +310,7 @@ When reviewing code, consider:

## Getting Help

- Check existing [Issues](https://github.com/your-org/tetra-rp/issues)
- Check existing [Issues](https://github.com/your-org/runpod-flash/issues)
- Review [README.md](README.md) for usage examples
- See [TESTING.md](TESTING.md) for testing details
- See [RELEASE_SYSTEM.md](RELEASE_SYSTEM.md) for release process
@@ -354,7 +354,7 @@ uv run python scripts/code_intel.py interface LiveServerless
**List symbols in a file:**

```bash
uv run python scripts/code_intel.py file tetra_rp/decorators.py
uv run python scripts/code_intel.py file runpod_flash/decorators.py
```

**List all symbols:**
@@ -373,7 +373,7 @@ Claude Code automatically uses the MCP code intelligence server when exploring t
- **85% token reduction**: No need to read full files for structure queries
- **Instant results**: Direct database queries instead of file parsing

The MCP server is configured in `.mcp.json` and automatically activated when you open this project in Claude Code. Use the `/tetra-explorer` skill to get guidance on best exploration practices.
The MCP server is configured in `.mcp.json` and automatically activated when you open this project in Claude Code. Use the `/flash-explorer` skill to get guidance on best exploration practices.

Available MCP tools:
- `find_symbol` - Search for classes, functions, methods
@@ -405,7 +405,7 @@ uv run python scripts/code_intel.py interface <ClassName>
```bash
# Instead of reading full file (500+ tokens):
# Do this query first (50 tokens):
uv run python scripts/code_intel.py file tetra_rp/decorators.py
uv run python scripts/code_intel.py file runpod_flash/decorators.py

# Then only read full file if implementation details needed
```
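
CONTRIBUTING.md above points to the MCP server configured in `.mcp.json`. As a minimal sketch of how such a server could expose a `find_symbol` tool, assuming the official `mcp` Python SDK's FastMCP helper (the PSF-2.0-licensed dependency referenced in the commit list), with the arguments, return shape, and naive scan all being assumptions rather than the PR's real implementation:

```python
# Hedged sketch only: one way an MCP server could expose a find_symbol tool
# via the official `mcp` Python SDK (FastMCP). The PR's actual server, its
# tool signatures, and its index backend are not shown here.
import ast
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("flash-code-intel")

@mcp.tool()
def find_symbol(name: str, root: str = "src") -> list[dict]:
    """Naive scan: return class/function definitions whose name contains `name`."""
    hits = []
    for path in Path(root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text())
        except SyntaxError:
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
                if name in node.name:
                    hits.append({"file": str(path), "name": node.name, "line": node.lineno})
    return hits

if __name__ == "__main__":
    # stdio transport is what a .mcp.json "command" entry typically launches
    mcp.run()
```

Per the commit list, the actual server also covers smart re-indexing and test-output parsing, which this sketch omits.
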
12 changes: 6 additions & 6 deletions Makefile
@@ -73,11 +73,11 @@ test-integration-serial: # Run integration tests serially (for debugging)
uv run pytest tests/integration/ -v -m integration

test-coverage: # Run tests with coverage report (parallel by default)
uv run pytest tests/ -v -n auto -m "not serial" --cov=tetra_rp --cov-report=xml
uv run pytest tests/ -v -m "serial" --cov=tetra_rp --cov-append --cov-report=term-missing
uv run pytest tests/ -v -n auto -m "not serial" --cov=runpod_flash --cov-report=xml
uv run pytest tests/ -v -m "serial" --cov=runpod_flash --cov-append --cov-report=term-missing

test-coverage-serial: # Run tests with coverage report (serial execution)
uv run pytest tests/ -v --cov=tetra_rp --cov-report=term-missing
uv run pytest tests/ -v --cov=runpod_flash --cov-report=term-missing

test-fast: # Run tests with fast-fail mode and parallel execution
uv run pytest tests/ -v -x --tb=short -n auto
@@ -116,8 +116,8 @@ ci-quality-github: # Quality checks with GitHub Actions formatting (parallel by
uv run ruff check . --output-format=github
@echo "::endgroup::"
@echo "::group::Test suite with coverage"
uv run pytest tests/ --junitxml=pytest-results.xml -v -n auto -m "not serial" --cov=tetra_rp --cov-report=xml --cov-fail-under=0
uv run pytest tests/ --junitxml=pytest-results.xml -v -m "serial" --cov=tetra_rp --cov-append --cov-report=term-missing
uv run pytest tests/ --junitxml=pytest-results.xml -v -n auto -m "not serial" --cov=runpod_flash --cov-report=xml --cov-fail-under=0
uv run pytest tests/ --junitxml=pytest-results.xml -v -m "serial" --cov=runpod_flash --cov-append --cov-report=term-missing
@echo "::endgroup::"

ci-quality-github-serial: # Serial quality checks for GitHub Actions (for debugging)
@@ -128,5 +128,5 @@ ci-quality-github-serial: # Serial quality checks for GitHub Actions (for debugg
uv run ruff check . --output-format=github
@echo "::endgroup::"
@echo "::group::Test suite with coverage (serial)"
uv run pytest tests/ --junitxml=pytest-results.xml -v --cov=tetra_rp --cov-report=term-missing
uv run pytest tests/ --junitxml=pytest-results.xml -v --cov=runpod_flash --cov-report=term-missing
@echo "::endgroup::"
47 changes: 14 additions & 33 deletions README.md
@@ -1,16 +1,3 @@
# ⚠️ DEPRECATED: tetra-rp

> **This package is deprecated.** The final release of `tetra-rp` is version 0.25.2. All future development and releases will be under the new package name `runpod-flash`.
>
> **Migration Required:** To upgrade, install the new package and update your imports:
> ```bash
> pip uninstall tetra-rp
> pip install runpod-flash
> ```
> Then update your imports from `from tetra_rp import ...` to `from runpod_flash import ...`
>
> See [runpod/flash](https://github.com/runpod/flash) for the new repository.

# Flash: Serverless computing for AI workloads

Runpod Flash is a Python SDK that streamlines the development and deployment of AI workflows on Runpod's [Serverless infrastructure](http://docs.runpod.io/serverless/overview). Write Python functions locally, and Flash handles the infrastructure, provisioning GPUs and CPUs, managing dependencies, and transferring data, allowing you to focus on building AI applications.
@@ -56,14 +43,8 @@ Before you can use Flash, you'll need:

### Step 1: Install Flash

> **Note:** This documentation describes the deprecated `tetra-rp` package. For new projects, use `runpod-flash`:
> ```bash
> pip install runpod-flash
> ```

For the legacy package:
```bash
pip install tetra_rp
pip install runpod-flash
```

### Step 2: Set your API key
@@ -86,7 +67,7 @@ Add the following code to a new Python file:

```python
import asyncio
from tetra_rp import remote, LiveServerless
from runpod_flash import remote, LiveServerless
from dotenv import load_dotenv

# Uncomment if using a .env file
@@ -310,7 +291,7 @@ async def main():
Flash provides fine-grained control over hardware allocation through configuration objects:

```python
from tetra_rp import LiveServerless, GpuGroup, CpuInstanceType, PodTemplate
from runpod_flash import LiveServerless, GpuGroup, CpuInstanceType, PodTemplate

# GPU configuration
gpu_config = LiveServerless(
@@ -364,7 +345,7 @@ results = await asyncio.gather(
For API endpoints requiring low-latency HTTP access with direct routing, use load-balanced endpoints:

```python
from tetra_rp import LiveLoadBalancer, remote
from runpod_flash import LiveLoadBalancer, remote

api = LiveLoadBalancer(name="api-service")

@@ -413,7 +394,7 @@ Flash orchestrates workflow execution through a sophisticated multi-step process
`LiveServerless` resources use a fixed Docker image that's optimized for Flash runtime, and supports full remote code execution. For specialized environments that require a custom Docker image, use `ServerlessEndpoint` or `CpuServerlessEndpoint`:

```python
from tetra_rp import ServerlessEndpoint
from runpod_flash import ServerlessEndpoint

custom_gpu = ServerlessEndpoint(
name="custom-ml-env",
Expand Down Expand Up @@ -525,7 +506,7 @@ For information on load-balanced endpoints (required for Mothership and HTTP ser

RunPod serverless has a **500MB deployment limit**. Exceeding this limit will cause deployment failures.

Use `--exclude` to skip packages already in your worker-tetra Docker image:
Use `--exclude` to skip packages already in your worker-flash Docker image:

```bash
# For GPU deployments (PyTorch pre-installed)
@@ -539,7 +520,7 @@ flash build --exclude torch,torchvision,torchaudio
- **CPU resources** → Python slim images have NO ML frameworks pre-installed
- **Load-balanced** → Same as above, depends on GPU vs CPU variant

See [worker-tetra](https://github.com/runpod-workers/worker-tetra) for base image details.
See [worker-flash](https://github.com/runpod-workers/worker-flash) for base image details.

## Configuration

@@ -614,7 +595,7 @@ Some common GPU groups available through `GpuGroup`:

```python
import asyncio
from tetra_rp import remote, LiveServerless
from runpod_flash import remote, LiveServerless

# Simple GPU configuration
gpu_config = LiveServerless(name="example-gpu-server")
@@ -653,7 +634,7 @@ if __name__ == "__main__":

```python
import asyncio
from tetra_rp import remote, LiveServerless, GpuGroup, PodTemplate
from runpod_flash import remote, LiveServerless, GpuGroup, PodTemplate
import base64

# Advanced GPU configuration with consolidated template overrides
@@ -708,7 +689,7 @@ if __name__ == "__main__":

```python
import asyncio
from tetra_rp import remote, LiveServerless, CpuInstanceType
from runpod_flash import remote, LiveServerless, CpuInstanceType

# Simple CPU configuration
cpu_config = LiveServerless(
@@ -756,7 +737,7 @@ if __name__ == "__main__":
```python
import asyncio
import base64
from tetra_rp import remote, LiveServerless, CpuInstanceType, PodTemplate
from runpod_flash import remote, LiveServerless, CpuInstanceType, PodTemplate

# Advanced CPU configuration with template overrides
data_processing_config = LiveServerless(
@@ -833,7 +814,7 @@ if __name__ == "__main__":

```python
import asyncio
from tetra_rp import remote, LiveServerless, GpuGroup, CpuInstanceType, PodTemplate
from runpod_flash import remote, LiveServerless, GpuGroup, CpuInstanceType, PodTemplate

# GPU configuration for model inference
gpu_config = LiveServerless(
@@ -948,7 +929,7 @@ if __name__ == "__main__":
```python
import os
import asyncio
from tetra_rp import remote, LiveServerless
from runpod_flash import remote, LiveServerless

# Configure Runpod resources
runpod_config = LiveServerless(name="multi-stage-pipeline-server")
@@ -1107,6 +1088,6 @@ def fetch_data(url):
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

<p align="center">
<a href="https://github.com/runpod/tetra-rp">Flash</a> •
<a href="https://github.com/runpod/runpod-flash">Flash</a> •
<a href="https://runpod.io">Runpod</a>
</p>
2 changes: 1 addition & 1 deletion RELEASE_SYSTEM.md
@@ -2,7 +2,7 @@

## Overview

The tetra-rp project uses a simple, reliable release automation system built on **Release Please v4** with quality gates and automated PyPI publishing via OIDC trusted publishing.
The runpod-flash project uses a simple, reliable release automation system built on **Release Please v4** with quality gates and automated PyPI publishing via OIDC trusted publishing.

## Architecture

6 changes: 3 additions & 3 deletions TESTING.md
@@ -8,7 +8,7 @@ uv pip install -e .

# Development (complete - includes packaging validation)
# Build wheel and install it to test CLI from anywhere
cd /path/to/tetra-rp && uv build && pip install dist/tetra_rp-*.whl --force-reinstall
cd /path/to/runpod_flash && uv build && pip install dist/runpod_flash-*.whl --force-reinstall
cd /tmp && flash init test_project

# Unit tests
@@ -38,11 +38,11 @@ make validate-wheel

**Coverage threshold failures**
- Use `--no-cov` flag for focused testing: `uv run pytest tests/unit/test_skeleton.py -v --no-cov`
- Or test specific module: `uv run pytest --cov=src/tetra_rp/cli/utils/skeleton`
- Or test specific module: `uv run pytest --cov=src/runpod_flash/cli/utils/skeleton`

**Hidden files require explicit glob patterns**
- Pattern `**/.*` needed in pyproject.toml to include `.env`, `.gitignore`, `.flashignore`
- Verify with: `unzip -l dist/tetra_rp-*.whl | grep skeleton_template`
- Verify with: `unzip -l dist/runpod_flash-*.whl | grep skeleton_template`

## Pre-Release Checklist
