51 changes: 29 additions & 22 deletions .github/workflows/e2e_tests.yaml
```diff
@@ -93,44 +93,44 @@ jobs:
       - name: Select and configure run.yaml
         env:
           CONFIG_MODE: ${{ matrix.mode }}
+          CONFIG_ENVIRONMENT: ${{ matrix.environment || 'ci' }}
         run: |
           CONFIGS_DIR="tests/e2e/configs"
           MODE="$CONFIG_MODE"
+          ENVIRONMENT="$CONFIG_ENVIRONMENT"
 
           echo "Deployment mode: $MODE"
           echo "Looking for configurations in $CONFIGS_DIR/"
 
-          # Select config based on mode:
-          # - library mode: run-library.yaml (llama-stack 0.3.0 format)
-          # - server mode: run-ci.yaml (original format)
-          if [ "$MODE" == "library" ]; then
-            CONFIG_FILE="$CONFIGS_DIR/run-library.yaml"
+          # List available configurations
+          if [ -d "$CONFIGS_DIR" ]; then
+            echo "Available configurations:"
+            ls -la "$CONFIGS_DIR"/*.yaml 2>/dev/null || echo "No YAML files found in $CONFIGS_DIR/"
           else
-            CONFIG_FILE="$CONFIGS_DIR/run-ci.yaml"
+            echo "Configs directory '$CONFIGS_DIR' not found!"
+            exit 1
           fi
 
-          echo "Using configuration: $CONFIG_FILE"
+          # Determine which config file to use
+          CONFIG_FILE="$CONFIGS_DIR/run-$ENVIRONMENT.yaml"
+
+          echo "Looking for: $CONFIG_FILE"
 
-          if [ ! -f "$CONFIG_FILE" ]; then
-            echo "❌ Configuration not found: $CONFIG_FILE"
-            echo "Available configs:"
-            ls -la "$CONFIGS_DIR"/*.yaml
+          if [ -f "$CONFIG_FILE" ]; then
+            echo "✅ Found config for environment: $ENVIRONMENT"
+            cp "$CONFIG_FILE" run.yaml
+            echo "✅ Config copied to run.yaml"
+          else
+            echo "❌ Configuration file not found: $CONFIG_FILE"
+            echo "Available files in $CONFIGS_DIR:"
+            ls -la "$CONFIGS_DIR/"
             exit 1
           fi
```
Comment on lines +96 to 126
Contributor

⚠️ Potential issue | 🟡 Minor
Config selection is correct; fix hard‑coded source path in summary

The new CONFIG_ENVIRONMENT / run-$ENVIRONMENT.yaml selection logic is sound and will work for ci and azure environments.
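
For context, this matrix shape would exercise both code paths — a sketch only, since the matrix definition sits outside this hunk; `ci`/`azure` are the environments named here, and `server`/`library` are the modes the workflow branches on:

```yaml
# Illustrative matrix consistent with the new selection logic;
# the exact job matrix is not shown in this diff.
strategy:
  matrix:
    mode: [server, library]
    environment: [ci, azure]
```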

However, the summary step always prints:

```bash
echo "Source config: tests/e2e/configs/run-ci.yaml"
```

so azure runs will be mis‑reported.

Consider deriving this from the matrix value:

```diff
-echo "Source config: tests/e2e/configs/run-ci.yaml"
+echo "Source config: tests/e2e/configs/run-${{ matrix.environment }}.yaml"
```

This keeps logs accurate if more environments are added later.

Also applies to: 133-133

🤖 Prompt for AI Agents
In .github/workflows/e2e_tests.yaml around lines 96-126 (and also line 133), the
summary log prints a hard-coded path ("tests/e2e/configs/run-ci.yaml") which
misreports the actual chosen config for other environments; update the echo to
derive the path from the variables you already set — e.g. use CONFIGS_DIR and
CONFIG_ENVIRONMENT or the CONFIG_FILE variable (echo "Source config:
$CONFIG_FILE" or echo "Source config:
$CONFIGS_DIR/run-$CONFIG_ENVIRONMENT.yaml") so the log reflects the real
selected file for all matrix environments.
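
A fuller sketch of the corrected step, assuming the summary should apply the same `'ci'` fallback as `CONFIG_ENVIRONMENT` (otherwise an unset `matrix.environment` would print `run-.yaml`):

```yaml
# Hypothetical corrected summary step; derives the path from the matrix
# value with the same fallback the selection step uses.
- name: Show final configuration
  run: |
    echo "=== Configuration Summary ==="
    echo "Deployment mode: ${{ matrix.mode }}"
    echo "Environment: ${{ matrix.environment || 'ci' }}"
    echo "Source config: tests/e2e/configs/run-${{ matrix.environment || 'ci' }}.yaml"
```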


cp "$CONFIG_FILE" run.yaml
echo "✅ Configuration copied to run.yaml"

- name: Show final configuration
run: |
echo "=== Configuration Summary ==="
echo "Deployment mode: ${{ matrix.mode }}"
echo "Environment: ${{ matrix.environment }}"
if [ "${{ matrix.mode }}" == "library" ]; then
echo "Source config: tests/e2e/configs/run-library.yaml"
else
echo "Source config: tests/e2e/configs/run-ci.yaml"
fi
echo "Source config: tests/e2e/configs/run-ci.yaml"
echo ""
echo "=== Configuration Preview ==="
echo "Providers: $(grep -c "provider_id:" run.yaml)"
Expand All @@ -139,6 +139,13 @@ jobs:
echo "=== lightspeed-stack.yaml ==="
grep -A 3 "llama_stack:" lightspeed-stack.yaml

- name: Docker Login for quay access
env:
QUAY_ROBOT_USERNAME: ${{ secrets.QUAY_DOWNSTREAM_USERNAME }}
QUAY_ROBOT_TOKEN: ${{ secrets.QUAY_DOWNSTREAM_TOKEN }}
run: |
echo $QUAY_ROBOT_TOKEN | docker login quay.io -u=$QUAY_ROBOT_USERNAME --password-stdin

- name: Run services (Server Mode)
if: matrix.mode == 'server'
env:
Expand Down
7 changes: 7 additions & 0 deletions .github/workflows/e2e_tests_rhaiis.yaml
```diff
@@ -122,6 +122,13 @@ jobs:
         run: |
           curl -f ${RHAIIS_URL}:8000/v1/models -H "Authorization: Bearer ${RHAIIS_API_KEY}"
 
+      - name: Docker Login for quay access
+        env:
+          QUAY_ROBOT_USERNAME: ${{ secrets.QUAY_ROBOT_USERNAME }}
+          QUAY_ROBOT_TOKEN: ${{ secrets.QUAY_ROBOT_TOKEN }}
+        run: |
+          echo $QUAY_ROBOT_TOKEN | docker login quay.io -u=$QUAY_ROBOT_USERNAME --password-stdin
+
       - name: Run service manually
         run: |
           docker compose version
```
7 changes: 7 additions & 0 deletions .github/workflows/e2e_tests_rhelai.yaml
```diff
@@ -123,6 +123,13 @@ jobs:
           echo $RHEL_AI_MODEL
           curl -f ${RHEL_AI_URL}:${RHEL_AI_PORT}/v1/models -H "Authorization: Bearer ${RHEL_AI_API_KEY}"
 
+      - name: Docker Login for quay access
+        env:
+          QUAY_ROBOT_USERNAME: ${{ secrets.QUAY_ROBOT_USERNAME }}
+          QUAY_ROBOT_TOKEN: ${{ secrets.QUAY_ROBOT_TOKEN }}
+        run: |
+          echo $QUAY_ROBOT_TOKEN | docker login quay.io -u=$QUAY_ROBOT_USERNAME --password-stdin
+
       - name: Run service manually
         run: |
           docker compose version
```
2 changes: 1 addition & 1 deletion Containerfile
```diff
@@ -75,7 +75,7 @@ USER root
 RUN microdnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs jq patch
 
 # Create llama-stack directories for library mode
-RUN mkdir -p /opt/app-root/src/.llama/distributions/ollama /opt/app-root/src/.llama/providers.d && \
+RUN mkdir -p /opt/app-root/src/.llama/storage /opt/app-root/src/.llama/providers.d && \
     chown -R 1001:1001 /opt/app-root/src/.llama
```
Comment on lines +78 to 79
Contributor

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Search for any remaining references to the old storage path structure
rg -n "distributions/ollama" --type yaml --type py
```

Length of output: 7007


Storage path migration is incomplete and will cause runtime failures.

The change in the Containerfile creates the new .llama/storage directory, but 65+ references to the old .llama/distributions/ollama path remain active across configuration files (tests/, examples/). These files still expect databases and storage at the old location. The migration must include updating all db_path, storage_dir, and related environment variable defaults in YAML configuration files to use .llama/storage instead of .llama/distributions/ollama.

🤖 Prompt for AI Agents
In Containerfile around lines 78-79, the Dockerfile creates .llama/storage but
the project still references the old .llama/distributions/ollama path across
config and example files; update all YAML and env defaults (db_path,
storage_dir, STORAGE_DIR, DB_PATH, etc.) in tests/, examples/, configs/ and any
scripts to point to .llama/storage (or make path configurable via a single env
var), ensure relative paths and permissions remain correct, and run a
project-wide search/replace for ".llama/distributions/ollama" to replace with
".llama/storage", then update any README or docs referencing the old path and
add a compatibility note or symlink creation step if backward compatibility is
required.
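
One way to drive that sweep — a sketch that assumes GNU `sed` and that every remaining hit should move to the new path; review the dry-run listing before rewriting anything:

```bash
# Dry run: list files still pointing at the old location.
rg -l '\.llama/distributions/ollama' tests/ examples/

# Rewrite in place once the list looks right (GNU sed -i syntax assumed).
rg -l '\.llama/distributions/ollama' tests/ examples/ \
  | xargs sed -i 's|\.llama/distributions/ollama|.llama/storage|g'
```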


```diff
 # Add executables from .venv to system PATH
```
6 changes: 6 additions & 0 deletions README.md
```diff
@@ -623,6 +623,9 @@ The repository includes production-ready container configurations that support t
 
 ### Llama-Stack as Separate Service (Server Mode)
 
+> [!IMPORTANT]
+> To pull the downstream llama-stack image, you will need access to the `aipcc` organization in quay.io.
+
 When using llama-stack as a separate service, the existing `docker-compose.yaml` provides the complete setup. This builds two containers for lightspeed core and llama stack.
 
 **Configuration** (`lightspeed-stack.yaml`):
@@ -639,6 +642,9 @@ In the root of this project simply run:
 # Set your OpenAI API key
 export OPENAI_API_KEY="your-api-key-here"
 
+# Login to quay.io to access the downstream llama-stack image
+# podman login quay.io
+
 # Start both services
 podman compose up --build
 
```
3 changes: 3 additions & 0 deletions docker-compose.yaml
```diff
@@ -23,6 +23,9 @@ services:
       - RHEL_AI_PORT=${RHEL_AI_PORT}
       - RHEL_AI_API_KEY=${RHEL_AI_API_KEY}
       - RHEL_AI_MODEL=${RHEL_AI_MODEL}
+      # Enable debug logging if needed
+      - LLAMA_STACK_LOGGING=${LLAMA_STACK_LOGGING:-}
+
```
Comment on lines +26 to +28
Contributor

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

```bash
# Find references to LLAMA_STACK_LOGGING and locate the containerfile
rg "LLAMA_STACK_LOGGING" -i -A 3 -B 3
```

Length of output: 939

🏁 Script executed:

```bash
# Locate and examine the test.containerfile referenced in docker-compose.yaml
fd "test.containerfile" -x cat {}
```

Length of output: 574

🏁 Script executed:

```bash
# Check for any documentation or README about environment variables
fd -e "md" -e "txt" | head -20 | xargs grep -l "LLAMA_STACK_LOGGING" 2>/dev/null || echo "No docs found with LLAMA_STACK_LOGGING"
```

Length of output: 114

🏁 Script executed:

```bash
# Find and examine run.yaml
fd "run.yaml" -x cat {}
```

Length of output: 34362

🏁 Script executed:

```bash
# Check for any logging-related configuration or comments in the repo
rg "logging|LOGGING" -i -l | head -10 | xargs cat 2>/dev/null | grep -i "logging\|debug" | head -20
```

Length of output: 794

🏁 Script executed:

```bash
# Check README, docs, or scripts that might explain LLAMA_STACK_LOGGING
fd -e "md" -e "sh" -e "py" -e "txt" | xargs grep -l "LLAMA_STACK" 2>/dev/null | head -10
```

Length of output: 212

🏁 Script executed:

```bash
# Look for any CI/CD or entrypoint scripts that might use or document this
fd -e "sh" -e "py" | xargs grep -B2 -A2 "LLAMA_STACK_LOGGING" 2>/dev/null
```

Length of output: 60

🏁 Script executed:

```bash
# Check if there's a Makefile or setup guide
fd "Makefile\|makefile\|setup\|SETUP\|README" | head -10 | xargs cat 2>/dev/null | grep -i "llama\|logging" | head -20
```

Length of output: 58

🏁 Script executed:

```bash
# Read README to understand LLAMA_STACK_LOGGING context
cat README.md | grep -i "logging\|environment\|variable" -A 3 -B 3 | head -50
```

Length of output: 1228

🏁 Script executed:

```bash
# Check the tutorial script for LLAMA_STACK usage patterns
cat scripts/llama_stack_tutorial.sh | head -100
```

Length of output: 2830

🏁 Script executed:

```bash
# Check constants.py for any logging-related constants
cat src/constants.py
```

Length of output: 4784


Document expected values for LLAMA_STACK_LOGGING or remove if unused.

LLAMA_STACK_LOGGING is not documented in this repository. The variable is set to potentially empty, and the comment "Enable debug logging if needed" doesn't clarify what values are accepted or whether it's actually used by the llama-stack container. Either add documentation of supported values (e.g., debug|info|warn|error) or confirm this variable is intentionally optional and can be omitted entirely if not set in the host environment.

🤖 Prompt for AI Agents
In docker-compose.yaml around lines 26 to 28, the LLAMA_STACK_LOGGING env var is
undocumented and ambiguous; either document its accepted values or remove it if
unused — update the file to one of these: a) replace the placeholder comment
with a clear comment listing supported values (e.g.,
LLAMA_STACK_LOGGING=debug|info|warn|error) and add that the default is
empty/unspecified, or b) if the llama-stack image does not consume this
variable, remove the LLAMA_STACK_LOGGING line entirely from the service
definition and any related docs; ensure any chosen option is reflected in repo
README or an environment variable example file.
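
If option (a) is chosen, the service entry could read like this sketch — the value set is an assumption to verify against the llama-stack image, not a documented contract:

```yaml
environment:
  # Optional llama-stack log-level override; leave unset to keep the
  # image defaults. Assumed accepted values (unverified):
  # debug | info | warn | error
  - LLAMA_STACK_LOGGING=${LLAMA_STACK_LOGGING:-}
```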

```diff
     networks:
       - lightspeednet
     healthcheck:
```
4 changes: 2 additions & 2 deletions pyproject.toml
```diff
@@ -28,8 +28,8 @@ dependencies = [
     # Used by authentication/k8s integration
     "kubernetes>=30.1.0",
     # Used to call Llama Stack APIs
-    "llama-stack==0.3.0",
-    "llama-stack-client==0.3.0",
+    "llama-stack==0.3.4",
+    "llama-stack-client==0.3.4",
     # Used by Logger
     "rich>=14.0.0",
     # Used by JWK token auth handler
```
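
Since the two pins are meant to move in lockstep, a quick post-install check can confirm the bump took — a small sketch using only the Python standard library:

```bash
python - <<'EOF'
from importlib.metadata import version
# Both distributions should report 0.3.4 after this change.
print("llama-stack:", version("llama-stack"))
print("llama-stack-client:", version("llama-stack-client"))
EOF
```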