@are-ces are-ces commented Dec 16, 2025

Description

Bumped llama-stack to 0.3.5, as it has been updated downstream.

Added missing files needed for e2e tests to work properly (they were broken).

Added the --refresh flag to the Makefile targets for Konflux requirements generation.
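The Makefile change described above could look roughly like the following sketch. The target name konflux-requirements and the three uv pip compile invocations are taken from the review notes further down; the exact flags and file layout are assumptions, not the project's verbatim Makefile:

```make
# Sketch only: konflux-requirements target with the added --refresh flag,
# forcing uv to revalidate cached package metadata before resolving.
konflux-requirements:
	uv pip compile pyproject.toml -o requirements.x86_64.txt --generate-hashes --refresh
	uv pip compile pyproject.toml -o requirements.aarch64.txt --generate-hashes --refresh
	echo "torch==$(TORCH_VERSION)+cpu" | uv pip compile - -o requirements.torch.txt --refresh
```

The `-` argument in the torch line tells uv pip compile to read the input requirement from stdin, which matches the "stdin - convention" the review mentions.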

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement

Tools used to create PR

NA

Related Tickets & Documents

  • Related Issue #
  • Closes #

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

Check e2e tests in GH Actions

Summary by CodeRabbit

Release Notes

  • Chores

    • Updated core dependencies (llama-stack and llama-stack-client) to version 0.3.5 for improved stability and compatibility.
    • Enhanced build process with improved dependency resolution.
  • Tests

    • Added new test configurations for operational mode verification.
    • Updated test expectations to reflect latest version compatibility.


coderabbitai bot commented Dec 16, 2025

Walkthrough

Bumps llama-stack and llama-stack-client dependencies from 0.3.4 to 0.3.5, regenerates hermetic and torch requirements files with --refresh flag, updates the supported version constant, adds new e2e configuration files for library and server modes, and updates test assertions to reflect the new version.

Changes

  • Dependency Version Bump — pyproject.toml, src/constants.py, tests/e2e/features/info.feature: Updated llama-stack and llama-stack-client from 0.3.4 to 0.3.5; bumped the MAXIMAL_SUPPORTED_LLAMA_STACK_VERSION constant; updated the info endpoint test assertion.
  • Build Configuration — Makefile: Added the --refresh flag to pip compile commands for konflux-requirements and torch requirements generation; normalized spacing in the torch command.
  • Regenerated Requirement Files — requirements.aarch64.txt, requirements.x86_64.txt, requirements.torch.txt: Updated command headers to include the --refresh flag; the aarch64 and x86_64 variants bump llama-stack and llama-stack-client to 0.3.5 with updated hashes.
  • New E2E Test Configuration — tests/e2e/configuration/lightspeed-stack-library-mode.yaml, tests/e2e/configuration/lightspeed-stack-server-mode.yaml: Added configuration files for the Lightspeed Core Service in library-client and server modes, covering server settings, client connectivity, and user data collection.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

  • Verify llama-stack 0.3.5 compatibility and absence of breaking changes
  • Confirm all three regenerated requirement files (aarch64, x86_64, torch) have correct hashes and versions
  • Validate new YAML configuration files for correctness and proper structure
  • Ensure test assertion for info endpoint matches the expected server response

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The title 'Bumpup llama-stack 0.3.5' accurately describes the main change: upgrading llama-stack from 0.3.4 to 0.3.5, which is reflected across multiple files including dependencies, constants, and test configurations.
  • Docstring Coverage — ✅ Passed: No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.

📜 Recent review details

Configuration used: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5e66f50 and bc9deeb.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (9)
  • Makefile (1 hunks)
  • pyproject.toml (1 hunks)
  • requirements.aarch64.txt (2 hunks)
  • requirements.torch.txt (1 hunks)
  • requirements.x86_64.txt (2 hunks)
  • src/constants.py (1 hunks)
  • tests/e2e/configuration/lightspeed-stack-library-mode.yaml (1 hunks)
  • tests/e2e/configuration/lightspeed-stack-server-mode.yaml (1 hunks)
  • tests/e2e/features/info.feature (1 hunks)
🧰 Additional context used
📓 Path-based instructions (4)
tests/e2e/features/**/*.feature

📄 CodeRabbit inference engine (CLAUDE.md)

Use behave (BDD) framework with Gherkin feature files for end-to-end tests

Files:

  • tests/e2e/features/info.feature
pyproject.toml

📄 CodeRabbit inference engine (CLAUDE.md)

pyproject.toml: Configure pylint with source-roots = "src"
Exclude src/auth/k8s.py from pyright type checking

Files:

  • pyproject.toml
src/**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

src/**/*.py: Use absolute imports for internal modules in LCS project (e.g., from auth import get_auth_dependency)
All modules must start with descriptive docstrings explaining their purpose
Use logger = logging.getLogger(__name__) pattern for module logging
All functions must include complete type annotations for parameters and return types, using modern syntax (str | int) and Optional[Type] or Type | None
All functions must have docstrings with brief descriptions following Google Python docstring conventions
Function names must use snake_case with descriptive, action-oriented names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead of modifying input parameters
Use async def for I/O operations and external API calls
All classes must include descriptive docstrings explaining their purpose following Google Python docstring conventions
Class names must use PascalCase with descriptive names and standard suffixes: Configuration for config classes, Error/Exception for exceptions, Resolver for strategy patterns, Interface for abstract base classes
Abstract classes must use ABC with @abstractmethod decorators
Include complete type annotations for all class attributes in Python classes
Use import logging and module logger pattern with standard log levels: debug, info, warning, error

Files:

  • src/constants.py
src/**/constants.py

📄 CodeRabbit inference engine (CLAUDE.md)

Define shared constants in central constants.py file with descriptive comments

Files:

  • src/constants.py
🧠 Learnings (5)
📚 Learning: 2025-09-02T11:09:40.404Z
Learnt from: radofuchs
Repo: lightspeed-core/lightspeed-stack PR: 485
File: tests/e2e/features/environment.py:87-95
Timestamp: 2025-09-02T11:09:40.404Z
Learning: In the lightspeed-stack e2e tests, noop authentication tests use the default lightspeed-stack.yaml configuration, while noop-with-token tests use the Authorized tag to trigger a config swap to the specialized noop-with-token configuration file.

Applied to files:

  • tests/e2e/configuration/lightspeed-stack-library-mode.yaml
  • tests/e2e/configuration/lightspeed-stack-server-mode.yaml
📚 Learning: 2025-08-18T10:58:14.951Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:47-47
Timestamp: 2025-08-18T10:58:14.951Z
Learning: psycopg2-binary is required by some llama-stack providers in the lightspeed-stack project, so it cannot be replaced with psycopg v3 or moved to optional dependencies without breaking llama-stack functionality.

Applied to files:

  • requirements.x86_64.txt
  • requirements.aarch64.txt
📚 Learning: 2025-08-18T10:57:39.266Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:59-59
Timestamp: 2025-08-18T10:57:39.266Z
Learning: In the lightspeed-stack project, transitive dependencies like faiss-cpu are intentionally pinned as top-level dependencies to maintain better control over the dependency graph and avoid version conflicts when bundling ML/LLM tooling packages.

Applied to files:

  • requirements.x86_64.txt
  • requirements.aarch64.txt
📚 Learning: 2025-08-18T10:56:55.349Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.349Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.

Applied to files:

  • requirements.x86_64.txt
  • requirements.aarch64.txt
📚 Learning: 2025-08-18T11:45:59.961Z
Learnt from: matysek
Repo: lightspeed-core/lightspeed-stack PR: 292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.961Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.

Applied to files:

  • requirements.torch.txt
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
  • GitHub Check: build-pr
  • GitHub Check: E2E: server mode / ci
  • GitHub Check: E2E: library mode / ci
  • GitHub Check: E2E: server mode / azure
  • GitHub Check: E2E: library mode / azure
🔇 Additional comments (10)
tests/e2e/features/info.feature (1)

19-19: Info endpoint assertion matches llama-stack 0.3.5 bump

The new expectation aligns with the dependency and constants bump; just ensure the deployed llama-stack instance used in e2e actually reports 0.3.5 so this doesn’t become brittle.
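The kind of assertion being updated could look like the following Gherkin sketch. The scenario name and step wording are assumptions for illustration; the actual tests/e2e/features/info.feature steps may be phrased differently:

```gherkin
# Sketch only: an info-endpoint scenario asserting the bumped llama-stack version.
Feature: Info endpoint

  Scenario: Info endpoint reports the expected llama-stack version
    Given the service is running
    When I access the REST API endpoint "info"
    Then the response status code is 200
    And the body contains llama-stack version "0.3.5"
```

Because the expected version is hard-coded in the feature file, it must be kept in lockstep with the pinned dependency, which is exactly why this PR touches both together.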

pyproject.toml (1)

31-32: llama-stack(‑client) pins to 0.3.5 are consistent

The pins match the updated MAXIMAL_SUPPORTED_LLAMA_STACK_VERSION and the e2e expectations; this keeps central dependency metadata consistent. Please just confirm that all lock/requirements files and images were regenerated with this version and that e2e/server integration tests pass end‑to‑end.
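The pins in question would look roughly like this pyproject.toml fragment. The exact pin operator and surrounding entries are assumptions; only the two package names and the 0.3.5 version come from the PR itself:

```toml
# Sketch only: the bumped core dependency pins in pyproject.toml.
dependencies = [
    "llama-stack==0.3.5",
    "llama-stack-client==0.3.5",
]
```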

tests/e2e/configuration/lightspeed-stack-server-mode.yaml (1)

1-20: Server‑mode e2e config looks consistent with existing schema

The sections (service, llama_stack, user_data_collection, authentication) and values are sensible for a test server‑mode setup (noop auth, external llama-stack URL, test API key). Please verify that tests/e2e/features/environment.py (or equivalent) is wired to select this server‑mode config in the intended scenarios so it doesn’t conflict with the existing noop / noop‑with‑token config swap logic. Based on learnings, this wiring is tag‑driven.

src/constants.py (1)

5-5: MAXIMAL_SUPPORTED_LLAMA_STACK_VERSION correctly bumped to 0.3.5

This keeps the allowed range in sync with the project’s pinned llama-stack version and the e2e expectations. Please double‑check any version‑validation or error‑message paths that consume this constant so they’re still accurate for 0.3.5.

tests/e2e/configuration/lightspeed-stack-library-mode.yaml (1)

1-19: Library‑mode e2e config mirrors server‑mode appropriately

The library‑client settings (use_as_library_client: true with library_client_config_path: run.yaml) plus noop auth look correct for a library‑mode test profile. Please confirm that:

  • run.yaml is present where the service expects it during e2e runs, and
  • the environment/config‑swap logic selects this file for library‑mode tests alongside the existing noop/noop‑with‑token handling. Based on learnings, that selection is tag‑driven in the Behave environment.
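Based on the sections named in this review, a library-mode configuration could be sketched as below. The top-level keys follow the review's description (service, llama_stack, user_data_collection, authentication) and the two library-client settings quoted above; all concrete values besides use_as_library_client and library_client_config_path are assumptions:

```yaml
# Sketch only: a library-mode e2e profile for the Lightspeed Core Service.
name: Lightspeed Core Service (LCS)
service:
  host: 0.0.0.0
  port: 8080
llama_stack:
  use_as_library_client: true          # run llama-stack in-process
  library_client_config_path: run.yaml # must exist where the service runs
user_data_collection:
  feedback_enabled: false
authentication:
  module: noop
```

The server-mode counterpart would differ mainly in the llama_stack section, pointing at an external URL and API key instead of an in-process client.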
requirements.aarch64.txt (1)

2-2: aarch64 requirements regenerated with llama-stack 0.3.5 and --refresh

The uv header and the pinned llama-stack / llama-stack-client entries now reflect 0.3.5, matching pyproject and the constants. This is consistent with the project’s strategy of tightly pinning ML/LLM transitive deps to keep the stack reproducible. Based on learnings, this pinning approach is intentional; please just ensure the same --refresh regeneration was applied to the other platform/torch requirements so CI images remain in sync.

Also applies to: 1396-1402

requirements.x86_64.txt (2)

2-2: Header generation command now matches konflux target and refresh semantics

The added --refresh flag in the header matches the updated konflux-requirements target and accurately documents how this file should be regenerated. No changes requested.

If you want to double‑check, re-run make konflux-requirements locally and confirm the header is unchanged afterwards.


1396-1402: llama-stack / llama-stack-client pins are consistent with the version bump

Both core dependencies are correctly bumped to 0.3.5 with hashes in requirements.x86_64.txt (lines 1396-1402). Repo-wide verification confirms the version bump is complete: no lingering 0.3.4 references exist, and 0.3.5 is properly updated in pyproject.toml (dependencies), src/constants.py (version constant), and tests/e2e/features/info.feature (test expectations). No issues.

requirements.torch.txt (1)

2-2: Torch requirements header correctly documents refreshed generation command

The header now shows the uv pip compile - -o requirements.torch.txt ... --refresh invocation, matching the Makefile pipeline and preserving the stdin - convention. No further changes needed.

When you next bump TORCH_VERSION, re-run make konflux-requirements and ensure this header (and the torch==...+cpu pin) still line up with the Makefile.

Makefile (1)

112-113: konflux-requirements target and refresh semantics look consistent

Adding --refresh to all three uv pip compile invocations (x86_64, aarch64, and torch) makes the generation behavior explicit and matches the regenerated headers in the requirements files. The spacing fix around compile - -o is also correct for stdin input.

Please run make konflux-requirements once on this branch to ensure all three requirements files regenerate cleanly and that CI (especially e2e) uses the refreshed pins as expected.

Also applies to: 116-116



@are-ces changed the title from "Bumpup lls 0.3.5" to "Bumpup llama-stack 0.3.5" on Dec 16, 2025
@radofuchs radofuchs left a comment


LGTM

@radofuchs radofuchs merged commit e93f5d9 into lightspeed-core:main Dec 16, 2025
19 of 25 checks passed