LCORE-497, LCORE-498, LCORE-499: Bundle most of ansible, assist-installer, default run.yaml dependencies, with pytorch and libs that depends on it. #292
Conversation
Walkthrough: the builder stage switched to ubi9/python-312, with a ubi9/python-312-minimal runtime stage; the changes are summarized in the sequence diagram below.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CI as CI / Developer
    participant UV as uv (dependency manager)
    participant Builder as Builder (ubi9/python-312)
    participant Final as Runtime (ubi9/python-312-minimal)
    CI->>UV: uv sync --group llslibdev
    CI->>Builder: build -> install gcc, pip install uv==0.8.11, uv sync --group llslibdev
    Builder->>Final: COPY --from=builder --chown=1001:1001 /app-root /app-root
    Final->>Final: pip install uv==0.8.11, set PYTHON envs, update PATH, expose 8080, ENTRYPOINT
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
@ldjebran FYI. We currently have a dependency on […]. This PR contains a solution to include only the absolutely necessary dependencies. Perhaps you could try to include something similar in your refactoring?
@matysek I'm not sure adding Ansible LS's dependencies to your base image is a good idea. Other LS teams may need different ones, or we may need to update them in the future. FYI @ldjebran is going to fix my nasty hackery of copying our […]. Hopefully he'll fix the […] too.
@matysek Linking your other PR too, for completeness: ansible/ansible-chatbot-stack#71. We managed to fix the […].
Actionable comments posted: 4
🔭 Outside diff range comments (1)
pyproject.toml (1)
98-137: Consolidate dependency groups and align with base vs. optional stacks

You already use [dependency-groups] for dev and llslibdev. Consider:
- Moving ML/NLP/FAISS/NumPy/Transformers to an ml group.
- Moving telemetry libs to a telemetry group.
- Moving Postgres driver(s) to a db group.
Derived images can opt-in via `uv sync --group <name>`. This aligns with the comment thread about not forcing all LS dependencies into the base image.
I can open a follow-up PR reshaping groups and updating the Containerfile to sync only core vs. opt-in groups. Interested?
Also applies to: 139-143
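For concreteness, a minimal sketch of what such a grouping could look like under PEP 735 `[dependency-groups]`. The group names and memberships below are illustrative, not taken from the actual pyproject.toml:

```toml
# Hypothetical regrouping sketch; versions copied from this review's diffs.
[project]
dependencies = [
    # Keep only what the core service imports at runtime.
    "fastapi>=0.115.12",
    "requests>=2.32.4",
]

[dependency-groups]
ml = [
    "numpy>=2.2,<3.0",
    "transformers>=4.34.0,<5.0",
    "faiss-cpu>=1.11.0",
]
telemetry = [
    "opentelemetry-sdk>=1.34.1",
    "opentelemetry-exporter-otlp>=1.34.1",
]
db = [
    "psycopg[binary]>=3.2.3",
]
```

A derived image would then opt in with, e.g., `uv sync --group ml`, while the base image syncs only the core dependencies.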
🧹 Nitpick comments (3)
Containerfile (1)
25-26: Update stale comment about uv presence

The comment says “Final image without uv package manager,” but uv is now installed in the final stage. Please update the comment to avoid confusion.
pyproject.toml (2)
72-72: Be cautious pinning numpy to a single patch in the base runtime

numpy==2.2.6 increases image size and may constrain other libs’ resolvers. If it’s only needed by ML features, moving it to an “ml” group will keep the base lean and flexible.
38-41: Typo in comment

“dependecies” → “dependencies”.

Apply this diff:

```diff
-# To check llama-stack API provider dependecies:
+# To check llama-stack API provider dependencies:
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `Containerfile` (1 hunks)
- `pyproject.toml` (3 hunks)
🔇 Additional comments (3)
pyproject.toml (3)
25-25: FastAPI bump looks compatible with current stack

FastAPI >=0.115.12 with Starlette >=0.47.1 and Uvicorn >=0.34.3 looks consistent. No issues spotted here.
29-29: Keep llama-stack-client in base deps — it's used at runtime

Search found direct imports in runtime modules (and in tests), so removing it from base deps will break the application. Key locations:
- src/client.py
- src/utils/endpoints.py
- src/utils/types.py
- src/utils/common.py
- src/models/requests.py
- src/app/endpoints/models.py
- src/app/endpoints/query.py
- src/app/endpoints/conversations.py
- src/app/endpoints/streaming_query.py
- tests/unit/app/endpoints/* (multiple test imports)
- uv.lock (package is pinned)
Pyproject snippet (unchanged):

```toml
"llama-stack-client==0.2.17",
```
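The runtime-import claim above can be reproduced mechanically. A hedged sketch of such a scan — the helper below is illustrative and not part of the repository:

```python
import re
from pathlib import Path


def find_imports(root: str, package: str) -> list[str]:
    """Return paths of .py files under root that import the given package.

    Matches "import pkg", "import pkg.sub", and "from pkg import ..."
    at the start of a line (allowing leading whitespace).
    """
    pattern = re.compile(
        rf"^\s*(?:import|from)\s+{re.escape(package)}\b", re.MULTILINE
    )
    return sorted(
        str(p)
        for p in Path(root).rglob("*.py")
        if pattern.search(p.read_text(encoding="utf-8", errors="ignore"))
    )
```

Running it over `src/` with `package="llama_stack_client"` would list the files cited above; an empty result would support moving the dependency out of the base.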
55-55: No action required — openai==1.99.9 is already locked in uv.lock

pyproject.toml does pin "openai==1.99.9", but uv.lock also records that exact version and a repository search shows no direct imports of the openai SDK in source (only provider IDs, docs, tests, and examples). The strict pin is therefore captured by the lockfile and should not cause solver friction.
Relevant locations:
- pyproject.toml — dependency: "openai==1.99.9" (line ~55)
- uv.lock — specifier ==1.99.9 / version = "1.99.9" (locked)
- No occurrences of "import openai" / "from openai" found in src (openai appears as provider IDs, docs, tests, examples)
Snippet:

```toml
"openai==1.99.9",
```
Actionable comments posted: 1
♻️ Duplicate comments (3)
pyproject.toml (3)
43-74: Avoid bloating the base runtime: move heavy ML/viz/db/telemetry deps to optional groups or provider-specific images

This block adds large, optional stacks (matplotlib/pillow/pandas/scikit-learn, NLP libs, datasets/transformers, telemetry SDKs, FAISS, DB drivers, etc.) directly into the base runtime. It will increase image size, build time, and attack surface. Prior comments raised the same concern; recommending the same here with a narrowed scope: keep core runtime lean and move optional stacks to dependency groups (e.g., ml, nlp, viz, telemetry, db, datasets) or install them only where the corresponding provider/image needs them.
Would you like me to propose a concrete grouping plan (mapping each dependency to a group) based on your providers?
47-47: psycopg2-binary discouraged in production; prefer psycopg v3 or make optional

The psycopg maintainers advise against psycopg2-binary in production. Either switch to psycopg (v3) prebuilt wheel or move this into a dedicated optional "db" group if Postgres is not always required.
Apply one of these diffs:
```diff
-"psycopg2-binary>=2.9.10",
+"psycopg[binary]>=3.2.3",
```

Or remove it from top-level and add a db optional group:

```diff
- "psycopg2-binary>=2.9.10",
```

And under [dependency-groups] add:

```toml
[dependency-groups]
db = [
    "psycopg[binary]>=3.2.3",
]
```

59-59: faiss-cpu portability and duplication: make optional or gate by platform

faiss-cpu wheels are not universally available (e.g., non-x86_64) and it's duplicated in llslibdev. Keep it optional or gate by platform to avoid broken builds and reduce base image weight.

Minimal changes:

```diff
- "faiss-cpu>=1.11.0",
- "faiss-cpu>=1.11.0",
```

Then add a dedicated optional group:

```toml
[dependency-groups]
faiss = [
    "faiss-cpu>=1.11.0",
]
```

Alternatively, platform-gate:

```diff
- "faiss-cpu>=1.11.0",
+ "faiss-cpu>=1.11.0; platform_machine == 'x86_64'"
```

Also applies to: 130-130

🧹 Nitpick comments (3)

pyproject.toml (3)

120-137: Remove duplicated deps in llslibdev to avoid version drift with top-level

llslibdev repeats many top-level deps (fastapi, uvicorn, aiosqlite, datasets, sqlalchemy, mcp, opentelemetry SDK/exporter) with different version ranges. This is prone to conflicts/drift. Keep llslibdev additive (only deps not present at top-level), or align versions exactly.

Suggested minimal cleanup (keep group focused on dev-only libs):

```diff
 llslibdev = [
-    "fastapi>=0.115.12",
-    "opentelemetry-sdk>=1.34.0",
-    "opentelemetry-exporter-otlp>=1.34.0",
-    "aiosqlite>=0.21.0",
-    "litellm>=1.72.1",
-    "uvicorn>=0.34.3",
     "blobfile>=3.0.0",
-    "datasets>=3.6.0",
-    "sqlalchemy>=2.0.41",
-    "faiss-cpu>=1.11.0",
-    "mcp>=1.9.4",
     "autoevals>=0.0.129",
     "psutil>=7.0.0",
     "torch>=2.7.1",
     "peft>=0.15.2",
     "trl>=0.18.2",
 ]
```
115-116: Deduplicate build tooling: avoid having build/twine in both dev and build groups

build and twine appear in both dev and build groups (and with differing twine versions). Keep them in one place to prevent drift; typically the dedicated build group.
Minimal fix (remove from dev group):
```diff
 dev = [
     "black>=25.1.0",
     "pytest>=8.3.2",
     "pytest-cov>=5.0.0",
     "pytest-mock>=3.14.0",
     "pytest-asyncio>=1.0.0",
     "pyright>=1.1.401",
     "pylint>=3.3.7",
     "pydocstyle>=6.3.0",
     "mypy>=1.16.0",
     "types-PyYAML>=6.0.2",
     "types-requests>=2.28.0",
     "ruff>=0.11.13",
     "aiosqlite",
     "behave>=1.3.0",
     "types-cachetools>=6.1.0.20250717",
-    "build>=1.2.2.post1",
-    "twine>=6.1.0",
     "openapi-to-md>=0.1.0b2",
 ]
```

Optionally, standardize the twine version in the build group if needed after verifying available versions.
Also applies to: 139-143
68-69: Version inconsistency: opentelemetry SDK/exporter differ between top-level and llslibdev

Top-level has >=1.34.1, llslibdev has >=1.34.0. Either remove duplicates (preferred) or align versions to the same minimum to avoid confusing resolver behavior and subtle incompatibilities.
If keeping in llslibdev, align:
```diff
- "opentelemetry-sdk>=1.34.0",
- "opentelemetry-exporter-otlp>=1.34.0",
+ "opentelemetry-sdk>=1.34.1",
+ "opentelemetry-exporter-otlp>=1.34.1",
```

Also applies to: 121-122

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 84172c333d4bee3ba3388c32c827f848c7df0290 and fd0e2c21d66c5cf4a888f48a0355600568ddcea2.

⛔ Files ignored due to path filters (1)

- `uv.lock` is excluded by `!**/*.lock`

📒 Files selected for processing (1)

- `pyproject.toml` (3 hunks)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)

- GitHub Check: build-pr
- GitHub Check: e2e_tests

🔇 Additional comments (2)

pyproject.toml (2)

25-25: FastAPI/Starlette/Uvicorn compatibility check

Upgrading fastapi to >=0.115.12 alongside starlette>=0.47.1 and uvicorn>=0.34.3 is likely OK, but please confirm no breaking changes (e.g., lifespan/shutdown semantics, middleware signatures). If you rely on Starlette internals, run the API smoke tests.

Also applies to: 33-35

78-86: Pyright config addition looks good

Excluding the known-problematic k8s module and setting extraPaths to src is appropriate.
Actionable comments posted: 0
♻️ Duplicate comments (5)
pyproject.toml (5)
37-74: Avoid bloating the base image: move heavy/optional libs to dependency groups

This block adds large, rarely-needed packages (matplotlib/pillow/pandas/scikit-learn, nltk/langdetect/pythainlp/tree_sitter, datasets, opentelemetry SDKs, transformers, faiss-cpu, psycopg2-binary). It significantly increases image size and diverges from earlier concerns about keeping the base generic.
Propose moving these to groups keyed to run.yaml needs (ml, nlp, telemetry, vector, db, viz). Keep base limited to what the default service necessarily imports at runtime. Let derived images opt-in.
Example refactor (illustrative minimal diff):
```diff
 [project]
 dependencies = [
-    "fire>=0.7.0",
-    "matplotlib>=3.10.0",
-    "pillow>=11.1.0",
-    "pandas>=2.2.3",
-    "scikit-learn>=1.5.2",
-    "psycopg2-binary>=2.9.10",
-    "tree_sitter>=0.24.0",
-    "pythainlp>=3.0.10",
-    "langdetect>=1.0.9",
-    "emoji>=2.1.0",
-    "nltk>=3.8.1",
-    "openai==1.99.9",
-    "litellm>=1.75.5.post1",
-    "faiss-cpu>=1.11.0",
-    "requests>=2.32.4",
-    "sqlalchemy>=2.0.42",
-    "aiosqlite>=0.21.0",
-    "datasets>=3.6.0",
-    "opentelemetry-sdk>=1.34.1",
-    "opentelemetry-exporter-otlp>=1.34.1",
-    "transformers>=4.34.0",
-    "numpy==2.2.6",
-    "mcp>=1.9.4",
+    "fire>=0.7.0",
+    "requests>=2.32.4",
+    "sqlalchemy>=2.0.42",
+    "aiosqlite>=0.21.0",
+    "mcp>=1.9.4",
 ]

 [dependency-groups]
+ml = [
+    "numpy>=2.2,<3.0",
+    "scikit-learn>=1.5.2,<2.0",
+    "pandas>=2.2.3,<3.0",
+    "datasets>=3.6.0,<4.0",
+    "transformers>=4.34.0,<5.0",
+]
+nlp = [
+    "nltk>=3.8.1,<4.0",
+    "pythainlp>=3.0.10,<4.0",
+    "langdetect>=1.0.9,<2.0",
+    "emoji>=2.1.0,<3.0",
+    "tree_sitter>=0.24.0,<1.0.0",
+]
+viz = [
+    "matplotlib>=3.10.0,<4.0",
+    "pillow>=11.1.0,<12.0",
+]
+telemetry = [
+    "opentelemetry-sdk>=1.34.1,<2.0",
+    "opentelemetry-exporter-otlp>=1.34.1,<2.0",
+]
+clients = [
+    "openai>=1.99.9,<2.0",
+    "litellm>=1.75.5.post1,<2.0",
+]
+vector = [
+    "faiss-cpu>=1.11.0",
+]
+db = [
+    "psycopg[binary]>=3.2.3",
+]
```

Note: tune groups to match the default run.yaml and assisted installer needs.
47-47: psycopg2-binary is discouraged for production; switch to psycopg v3 or make it optional

Maintainers discourage psycopg2-binary in production. Prefer psycopg (v3) or move Postgres support to an optional group.
Suggested change:
```diff
-"psycopg2-binary>=2.9.10",
+"psycopg[binary]>=3.2.3",
```

Or remove it from [project.dependencies] and keep it only under a db dependency group.
59-59: faiss-cpu should be optional or platform-gated; dedupe across groups

Top-level includes faiss-cpu and llslibdev also includes it (Line 130). Wheels are not universal across architectures; make it optional and install only where needed. Also avoid duplication to reduce drift.
Option A: platform-gate at top-level
```diff
-"faiss-cpu>=1.11.0",
+"faiss-cpu>=1.11.0; platform_machine == 'x86_64'"
```

Option B: move to a dedicated group and remove both top-level and llslibdev entries:

```diff
- "faiss-cpu>=1.11.0",
 ...
- "faiss-cpu>=1.11.0",
```

and add under [dependency-groups.vector] as shown in a previous comment.
Also applies to: 120-137
111-111: Relax exact pins and unify constraints (numpy, twine, etc.)

Exact pin for numpy (Line 72) and conflicting twine bounds can cause resolution churn and unnecessary rebuilds. Prefer compatible ranges in pyproject and rely on uv.lock for reproducibility.
Example:
- Replace numpy==2.2.6 with numpy>=2.2,<3.0
- Use a single twine range across all groups: >=6.1.0,<7.0
Also applies to: 141-141
55-58: Relax exact pins and prefer bounded ranges; rely on uv.lock

OpenAI exact pin and several single-sided bounds can be relaxed; use compatible ranges and rely on uv.lock for reproducibility. This reduces churn and resolution conflicts across groups.
Examples:
```diff
-"openai==1.99.9",
+"openai>=1.99.9,<2.0.0",
-"datasets>=3.6.0",
+"datasets>=3.6.0,<4.0",
-"opentelemetry-sdk>=1.34.1",
-"opentelemetry-exporter-otlp>=1.34.1",
+"opentelemetry-sdk>=1.34.1,<2.0",
+"opentelemetry-exporter-otlp>=1.34.1,<2.0",
-"transformers>=4.34.0",
+"transformers>=4.34.0,<5.0",
-"numpy==2.2.6",
+"numpy>=2.2,<3.0",
```

Also applies to: 61-66, 68-74
🧹 Nitpick comments (4)
pyproject.toml (4)
139-142: Unify duplicate “build” tools and version bounds (twine/build duplicated and conflicting)

build/twine appear in both dev and build groups with different versions (twine >=6.1.0 vs >=5.1.1). Keep a single source of truth and align versions.
Minimal fix to align on the newer constraint:
```diff
 [dependency-groups]
 dev = [
 @@
-    "build>=1.2.2.post1",
-    "twine>=6.1.0",
+    "build>=1.2.2.post1",
+    "twine>=6.1.0",
 @@
 ]
 build = [
-    "build>=1.2.2.post1",
-    "twine>=5.1.1",
+    "build>=1.2.2.post1",
+    "twine>=6.1.0",
 ]
```

Alternatively, drop the separate build group and keep these only in dev.
Also applies to: 98-118
112-112: Duplicate aiosqlite declaration; keep it in one place with consistent bounds

Top-level has aiosqlite>=0.21.0; dev group lists unpinned aiosqlite. Keep only top-level (or pin dev to same).
Suggested cleanup:
```diff
- "aiosqlite",
```

Also applies to: 64-64
121-129: Version drift between top-level and llslibdev group (otel, litellm, sqlalchemy)

llslibdev’s versions are older/smaller bounds than top-level:
- otel sdk/exporter: 1.34.0 vs 1.34.1
- litellm: 1.72.1 vs 1.75.5.post1
- sqlalchemy: 2.0.41 vs 2.0.42
Unify to avoid subtle env differences during local/dev workflows.
Example changes:
```diff
- "opentelemetry-sdk>=1.34.0",
- "opentelemetry-exporter-otlp>=1.34.0",
+ "opentelemetry-sdk>=1.34.1",
+ "opentelemetry-exporter-otlp>=1.34.1",
- "litellm>=1.72.1",
+ "litellm>=1.75.5.post1",
- "sqlalchemy>=2.0.41",
+ "sqlalchemy>=2.0.42",
```

Also applies to: 68-69, 125-125, 129-129
163-171: UV PyTorch CPU index: consider moving to derived images/docs to avoid mixed signals

The base project states “no PyTorch and libs that depend on it”. While this UV config doesn’t install torch by itself, it introduces PyTorch-specific resolution into the base. Recommend moving this to derived images that opt into PyTorch/torchvision, or clearly comment it as for downstream images only.
If you prefer to keep the base pristine:
```diff
-[[tool.uv.index]]
-name = "pytorch-cpu"
-url = "https://download.pytorch.org/whl/cpu"
-explicit = true
-
-[tool.uv.sources]
-torchvision = [{ index = "pytorch-cpu" }]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
pyproject.toml(3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (3)
pyproject.toml (3)
25-25: FastAPI bump looks good

No breaking changes known for this range; aligns with recent Starlette/uvicorn ranges already declared.
71-74: No implicit backend imports detected

Ran searches across the codebase for any imports or usage of
`torch`, `tensorflow`, `flax`, and direct `transformers` calls — no matches were found. There are currently no runtime code paths that would trigger a PyTorch/TF/Flax backend import:
- No
`import torch` / `from torch` / `AutoModel` / `pipeline` / `Trainer` usages
- No
`import tensorflow` or `import flax`
`import transformers` or other Transformers API calls

It’s safe to keep
`transformers>=4.34.0` as a normal dependency without moving it to an optional extras group.
98-138: No “llslibdev” in base image build
Verified all Containerfile/Dockerfile layers and build steps — only `pip install uv` is present. There are no `uv sync --group llslibdev`, `pdm install -G llslibdev`, or similar commands in any image build script. The only place “llslibdev” appears is in the documentation example (docs/deployment_guide.md). No action is needed to keep the base image free of PyTorch and related packages.
Actionable comments posted: 3
♻️ Duplicate comments (2)
pyproject.toml (2)
59-59: Make faiss-cpu optional or platform-gated (portability/build-time risk)

faiss-cpu wheels aren’t universal across architectures; keeping it at top-level increases base size and can break on non-x86_64. Move to an optional group or gate by platform.
Option A (preferred: make optional):
```diff
- "faiss-cpu>=1.11.0",
```

And add:
```toml
[dependency-groups.faiss]
faiss-cpu = ">=1.11.0"
```

Option B (platform gate in base):
```diff
- "faiss-cpu>=1.11.0",
+ "faiss-cpu>=1.11.0; platform_machine == 'x86_64'",
```

Also consider removing it from llslibdev if not essential there, or gate it similarly:
```diff
- "faiss-cpu>=1.11.0",
+ "faiss-cpu>=1.11.0; platform_machine == 'x86_64'",
```

Also applies to: 144-144
47-47: psycopg2-binary is not recommended for production; switch to psycopg v3 or make optional

Maintainers discourage psycopg2-binary in production. Prefer psycopg (v3) or move Postgres support to an optional group.
Apply one of:
```diff
- "psycopg2-binary>=2.9.10",
+ "psycopg[binary]>=3.2.3",
```

Or remove it from base and add a db group:
```diff
- "psycopg2-binary>=2.9.10",
```

```toml
[dependency-groups.db]
psycopg = ">=3.2.3"
```
🧹 Nitpick comments (4)
pyproject.toml (4)
70-73: Consider moving Transformers and relaxing the exact numpy pin
- Transformers typically drags in heavyweight inference backends; move it out of the base to keep the image lean unless it’s truly required at runtime.
- Avoid exact pins for core libs in pyproject; rely on the lockfile for reproducibility and keep a compatible range here.
Apply:
```diff
- "transformers>=4.34.0",
- "numpy==2.2.6",
+ # transformers is heavy; keep optional unless strictly required at runtime
+ # "transformers>=4.34.0,<5",
+ "numpy>=2.2,<3",
```

Optionally add transformers to the same [dependency-groups.ml] suggested earlier.
152-155: Unify and deduplicate build tooling constraints (twine/build duplicated with conflicting constraints)

Build tooling appears both in dev and build groups with different version bounds for twine. This is confusing and can lead to accidental divergence.
Proposed cleanup:
- Keep build tools only in one group (dev), and remove the separate build group.
- If you keep the build group, align versions with dev.
Example (remove the build group):
```diff
-[dependency-groups]
-dev = [
+[dependency-groups]
+dev = [
 @@
-    "build>=1.2.2.post1",
-    "twine>=6.1.0",
+    "build>=1.2.2.post1",
+    "twine>=6.1.0",
 @@
-]
-llslibdev = [
+]
+llslibdev = [
 @@
-]
-
-build = [
-    "build>=1.2.2.post1",
-    "twine>=5.1.1",
-]
+]
```

If you need a separate “build” group for CI, keep it but match versions with dev (>=6.1.0) to avoid conflicts.
Also applies to: 112-132
38-38: Typo: “dependecies” → “dependencies”

Minor comment spelling fix.

```diff
- # To check llama-stack API provider dependecies:
+ # To check llama-stack API provider dependencies:
```
133-150: llslibdev group: sanity check heavy deps and versions

The dev group includes torch, peft, trl, faiss-cpu, datasets, etc. That’s fine if strictly for local dev of providers. Just ensure:
- torch pin is valid (see previous verification).
- Consider gating faiss-cpu by platform here too.
- If this group is used in CI for non-ML jobs, split into finer groups (e.g., providers, vector, training).
I can propose concrete groupings (providers/viz/db/vector/training/telemetry) if you confirm which CI jobs/images need which subsets.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (1)
`pyproject.toml` (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (2)
pyproject.toml (2)
25-31: LGTM: FastAPI bump and core service deps look reasonable

The FastAPI bump to >=0.115.12 and keeping uvicorn/starlette aligned is good. Core telemetry/cache/HTTP deps also look fine.
24-80: No runtime ML imports detected in `src/`; safe to move ML dependencies to optional groups

The scan showed zero occurrences of `torch`, `transformers`, `peft`, `trl`, or `faiss` imports in your production code (`src/**/*.py`). You can proceed with removing these from the base dependencies without breaking the core service.
Actionable comments posted: 0
♻️ Duplicate comments (1)
Containerfile (1)
50-54: Pin is good; also target the app venv and mirror UV_* env in the final image

You addressed the pinning; two follow-ups remain from the earlier review: guide derived images to install into the app’s venv and mirror the UV_* env to avoid interpreter downloads. Optionally, skip re-installing uv by copying the binary from the builder.
Apply this diff to improve guidance, reuse the version ARG, and mirror runtime env vars:
```diff
-# Add uv to final image for derived images to add additional dependencies
-# with command:
-# $ uv pip install <dependency>
-RUN pip3.12 install "uv==0.8.11"
+# Add uv to final image for derived images to add additional dependencies.
+# Prefer installing into the app's virtualenv to avoid split-brain installs:
+# $ uv pip install --python /app-root/.venv/bin/python <dependency>
+RUN pip3.12 install "uv==${UV_VERSION}"
+# Mirror uv runtime env to avoid interpreter downloads in derived images.
+ENV UV_COMPILE_BYTECODE=0 \
+    UV_LINK_MODE=copy \
+    UV_PYTHON_DOWNLOADS=0
```
- Remove the pip install of uv above and instead copy the uv binary from the builder:
```dockerfile
COPY --from=builder /usr/local/bin/uv /usr/local/bin/uv
```

Verification to consider:
- Confirm pip is available in `ubi9/python-312-minimal`. If it’s not, prefer the “copy uv from builder” approach or use `python3.12 -m ensurepip` before installing uv.
- Validate that derived images can successfully run:

```shell
uv pip install --python /app-root/.venv/bin/python <dependency>
```
🧹 Nitpick comments (4)
Containerfile (4)
2-2: Deduplicate uv version across stages with an ARG

You install uv twice. Define a single version knob and reuse it to avoid drift.
Apply this diff to introduce the ARG in the builder stage:
```diff
 FROM registry.access.redhat.com/ubi9/python-312 AS builder
+ARG UV_VERSION=0.8.11
```
14-18: Clean dnf caches after gcc install to reduce image size

The builder layer will retain dnf metadata unless explicitly cleaned. Add cleanup to keep layers lean.
```diff
 USER root
-RUN dnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs gcc
+RUN dnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs gcc \
+    && dnf -y clean all \
+    && rm -rf /var/cache/dnf /var/cache/yum
```
20-20: Use the version ARG when installing uv in the builder stage

Keeps the version centralized and prevents accidental divergence with the final stage.
```diff
-RUN pip3.12 install "uv==0.8.11"
+RUN pip3.12 install "uv==${UV_VERSION}"
```
31-31: Mirror the uv version ARG in the final stage

ARGs don’t persist across stages. Re-declare it in the final image to reuse the same version pin.
```diff
 FROM registry.access.redhat.com/ubi9/python-312-minimal
+ARG UV_VERSION=0.8.11
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
Containerfile(3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (1)
Containerfile (1)
27-27: uv sync with lock is solid

Using `uv sync --locked --no-dev` ensures reproducible installs. Looks good.
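Pulling the Containerfile suggestions from this review thread together, a rough sketch of the two-stage layout. This is illustrative only: the ARG name and paths follow the diffs in this thread, and the real Containerfile's ENTRYPOINT and remaining env setup are omitted:

```dockerfile
# Illustrative sketch of the reviewed two-stage layout; not the actual Containerfile.
FROM registry.access.redhat.com/ubi9/python-312 AS builder
ARG UV_VERSION=0.8.11

USER root
RUN dnf install -y --nodocs --setopt=keepcache=0 gcc && dnf clean all
RUN pip3.12 install "uv==${UV_VERSION}"

WORKDIR /app-root
COPY pyproject.toml uv.lock ./
# Reproducible install from the lockfile, without dev groups.
RUN uv sync --locked --no-dev

FROM registry.access.redhat.com/ubi9/python-312-minimal
# ARGs do not cross stages; re-declare to reuse the same pin.
ARG UV_VERSION=0.8.11

COPY --from=builder --chown=1001:1001 /app-root /app-root
# Keep uv available so derived images can add dependencies.
RUN pip3.12 install "uv==${UV_VERSION}"

ENV PATH="/app-root/.venv/bin:${PATH}"
EXPOSE 8080
# ENTRYPOINT omitted; the real Containerfile defines it.
```

The single `UV_VERSION` knob is the main point: both stages install the same uv, and bumping it is a one-line change.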
tisnik
left a comment
we need to have these dependencies in the llslibdev section (only). The service itself does not (AFAIK) have this dependency need.
Actionable comments posted: 1
♻️ Duplicate comments (1)
pyproject.toml (1)
92-138: Split out PyTorch and dependent training libs from llslibdev

To satisfy “no pytorch and libs that depends on it” while still providing broad library coverage, split training/finetuning packages into a dedicated group not installed in the base image. Specifically move:
- torch==2.7.1
- peft>=0.15.2
- trl>=0.18.2
Optionally keep transformers here (it can be used without torch), or move it too if you want to avoid accidental heavy installs downstream.
Apply this diff to llslibdev and add a new ml-train group:
```diff
 llslibdev = [
 @@
-    "torch==2.7.1",
-    "peft>=0.15.2",
-    "trl>=0.18.2",
 ]
+[dependency-groups.ml-train]
+# Training/finetuning stack (opt-in)
+torch = "==2.7.1"
+peft = ">=0.15.2"
+trl = ">=0.18.2"
```

Then in the Containerfile, avoid installing `ml-train` (and llslibdev if it continues to include any PyTorch-dependent libs).
🧹 Nitpick comments (5)
Containerfile (2)
14-18: Tighten DNF install to reduce layer size and avoid weak deps

Minor hardening/size improvement for the builder layer:
- Disable weak deps to avoid pulling recommended packages.
- Clean metadata after install (even with keepcache=0 this removes repo metadata).
Apply:
```diff
-USER root
-RUN dnf install -y --nodocs --setopt=keepcache=0 --setopt=tsflags=nodocs gcc
+USER root
+RUN dnf install -y --nodocs \
+    --setopt=keepcache=0 \
+    --setopt=tsflags=nodocs \
+    --setopt=install_weak_deps=False \
+    gcc \
+    && dnf clean all
```
50-54: Clarify uv usage to target the app venv; optionally mirror UV_* env in final

- The comment suggests `uv pip install <dependency>`, which by default may target the system interpreter. Since PATH puts `/app-root/.venv/bin` first, it often works, but being explicit avoids split-brain installs in derived images.
- Consider mirroring UV_* envs (UV_PYTHON_DOWNLOADS=0, etc.) in the final stage to avoid interpreter downloads in derived images.
Apply to update guidance:
```diff
-# Add uv to final image for derived images to add additional dependencies
-# with command:
-# $ uv pip install <dependency>
+# Add uv to final image for derived images to add additional dependencies.
+# Prefer installing into the app virtualenv explicitly:
+# $ uv pip install --python /app-root/.venv/bin/python <dependency>
 RUN pip3.12 install "uv==0.8.11"
```

Additionally, consider adding these envs in the final stage (near other ENV lines) to reduce surprises for downstream users:
```dockerfile
ENV UV_COMPILE_BYTECODE=0 \
    UV_LINK_MODE=copy \
    UV_PYTHON_DOWNLOADS=0
```

pyproject.toml (3)
61-69: PyTorch CPU index: keep only if the base image installs torch; otherwise remove or gate
If we proceed with excluding PyTorch from the base (per PR objective), keeping a global uv index/sources block for PyTorch can surprise consumers and alter resolution behavior unnecessarily.
- If torch is removed from the base install, move this block to the derived image that opts into PyTorch, or document why it must remain here.
Option to remove from base:
```diff
-[[tool.uv.index]]
-name = "pytorch-cpu"
-url = "https://download.pytorch.org/whl/cpu"
-explicit = true
-[tool.uv.sources]
-torch = [{ index = "pytorch-cpu" }]
```
72-90: Unify twine requirement across groups
There's a mismatch between `twine>=6.1.0` here and `twine>=5.1.1` in the build group below. Aligning on one constraint reduces confusion. Given you already use >=6.1.0 here, suggest matching that in the build group.
No change needed here; see suggested change in lines 140-143.
140-143: Align twine constraint with dev group
To avoid conflicting constraints across groups, propose upgrading the build group to match dev:

```diff
 build = [
     "build>=1.2.2.post1",
-    "twine>=5.1.1",
+    "twine>=6.1.0",
 ]
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `Containerfile` (3 hunks)
- `pyproject.toml` (4 hunks)
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2025-08-18T10:56:55.302Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.302Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:58:14.914Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:47-47
Timestamp: 2025-08-18T10:58:14.914Z
Learning: psycopg2-binary is required by some llama-stack providers in the lightspeed-stack project, so it cannot be replaced with psycopg v3 or moved to optional dependencies without breaking llama-stack functionality.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:55:18.873Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:43-45
Timestamp: 2025-08-18T10:55:18.873Z
Learning: The lightspeed-stack project updates dependencies every sprint as part of their regular maintenance cycle, which explains their preference for exact dependency pins rather than version ranges.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:57:39.258Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:59-59
Timestamp: 2025-08-18T10:57:39.258Z
Learning: In the lightspeed-stack project, transitive dependencies like faiss-cpu are intentionally pinned as top-level dependencies to maintain better control over the dependency graph and avoid version conflicts when bundling ML/LLM tooling packages.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T11:45:59.915Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.915Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (3)
Containerfile (2)
2-2: Builder base switch to ubi9/python-312 looks good
Using the non-minimal builder base is reasonable given native builds (gcc) and uv operations; final stage remains minimal, so no runtime bloat.
20-20: uv pin is good; keep this aligned across stages
Pinning uv to 0.8.11 in the builder improves reproducibility. Matches final stage pin.
pyproject.toml (1)
25-25: FastAPI bump LGTM
Upgrading to fastapi>=0.115.12 aligns with recent Starlette/Pydantic stacks and is consistent with the project's regular update cadence.
```
# Bundle additional dependencies for library mode.
RUN uv sync --locked --no-dev --group llslibdev
```
Contradiction with PR objective: this installs PyTorch into the image
uv sync --locked --no-dev --group llslibdev pulls the entire llslibdev group, which currently includes:
- torch==2.7.1
- peft>=0.15.2
- trl>=0.18.2
This contradicts the stated PR goal: “no pytorch and libs that depends on it”.
Please either:
- Remove `torch`, `peft`, `trl` from `llslibdev` and move them to a new optional group (e.g., `ml-train`) that is not installed here, or
- Stop installing the `llslibdev` group in this base image and create a separate image or build target for the ML training stack.
Option A (keep llslibdev, exclude ML training libs from base):
```diff
-RUN uv sync --locked --no-dev --group llslibdev
+RUN uv sync --locked --no-dev
```

Then in pyproject.toml (see separate comment), split the group: keep non-PyTorch deps in `llslibdev`, move `torch`, `peft`, `trl` into a new group (e.g., `ml-train`) installed only in derived images.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 # Bundle additional dependencies for library mode.
-RUN uv sync --locked --no-dev --group llslibdev
+RUN uv sync --locked --no-dev
```
🤖 Prompt for AI Agents
In Containerfile around lines 27-28, running "RUN uv sync --locked --no-dev
--group llslibdev" pulls the llslibdev group which currently includes torch,
peft and trl and therefore installs PyTorch contrary to the PR goal; either (A)
modify pyproject.toml to remove torch, peft and trl from the llslibdev group and
add them to a new optional group (e.g., ml-train) so the base image keeps only
non-ML deps and continue to sync llslibdev here, or (B) stop syncing the
llslibdev group in this base Containerfile and instead create a separate build
stage or derived image that runs uv sync for an ML training image which installs
an ml-train group; update the Containerfile accordingly (remove or change the uv
sync line in the base image and add a separate target/stage or image that
installs the ML group) and update pyproject.toml to reflect the new ml-train
group if you choose option A.
Actionable comments posted: 3
🧹 Nitpick comments (2)
test.containerfile (1)
19-19: Pin uv version for reproducible builds; verify need for llslibdev in test image
- Pin uv to the same version used elsewhere (0.8.11 per PR context) to avoid drift between images.
- If tests don't require the `llslibdev` stack, drop it to speed up builds and avoid pulling in torch; if required, keep as-is.

```diff
-RUN curl -LsSf https://astral.sh/uv/install.sh | sh
+RUN curl -LsSf https://astral.sh/uv/install.sh | sh -s -- -v 0.8.11
```

If `llslibdev` isn't needed for this container's tests, consider:

```diff
-RUN uv sync --locked --no-install-project --group dev --group llslibdev
+RUN uv sync --locked --no-install-project --group dev
```

Also applies to: 26-26
pyproject.toml (1)
70-90: Unify twine version across groups
The `dev` group pins `twine>=6.1.0` while `build` uses `twine>=5.1.1`. Use a single constraint to avoid resolver inconsistencies in different contexts. Suggest bumping `build` to match `dev`.

```diff
 build = [
     "build>=1.2.2.post1",
-    "twine>=5.1.1",
+    "twine>=6.1.0",
 ]
```

Also applies to: 140-143
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- `.github/workflows/pylint.yaml` (1 hunks)
- `.github/workflows/pyright.yaml` (1 hunks)
- `.github/workflows/unit_tests.yaml` (1 hunks)
- `Containerfile` (3 hunks)
- `pyproject.toml` (4 hunks)
- `test.containerfile` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- Containerfile
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2025-08-18T10:56:55.302Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.302Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:58:14.914Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:47-47
Timestamp: 2025-08-18T10:58:14.914Z
Learning: psycopg2-binary is required by some llama-stack providers in the lightspeed-stack project, so it cannot be replaced with psycopg v3 or moved to optional dependencies without breaking llama-stack functionality.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:55:18.873Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:43-45
Timestamp: 2025-08-18T10:55:18.873Z
Learning: The lightspeed-stack project updates dependencies every sprint as part of their regular maintenance cycle, which explains their preference for exact dependency pins rather than version ranges.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:57:39.258Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:59-59
Timestamp: 2025-08-18T10:57:39.258Z
Learning: In the lightspeed-stack project, transitive dependencies like faiss-cpu are intentionally pinned as top-level dependencies to maintain better control over the dependency graph and avoid version conflicts when bundling ML/LLM tooling packages.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T11:45:59.915Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.915Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: build-pr
- GitHub Check: e2e_tests
🔇 Additional comments (3)
pyproject.toml (3)
25-25: Base deps look lean and aligned with the goal (no torch in core runtime)
Keeping the base dependency set minimal and torch-free matches the PR objective and helps keep the core runtime reasonable.
61-68: PyTorch CPU index config is fine; ensure it doesn't pull torch into the base image
Good to keep the CPU index mapped to torch for opt-in groups. Just double-check your runtime image build doesn't install any group that includes torch.
91-138: No llslibdev in final runtime image
llslibdev is only installed in the builder stage and not in the final image:

- Builder stage (`FROM … AS builder`): `RUN uv sync --locked --no-dev --group llslibdev`
- Final stage (`FROM …-minimal`) has no `uv sync --group llslibdev`

✔️ Final runtime image does not include torch, peft, or trl.
```diff
       python-version: '3.12'
     - name: Install dependencies
-      run: uv sync
+      run: uv sync --group llslibdev
```
Fix: install dev tools (pylint) — use dev group, not llslibdev
llslibdev doesn’t include pylint; this step will fail to find pylint for uv run pylint. Install the dev group (which contains pylint). Also avoids pulling heavy ML deps unnecessarily into the linter job.
```diff
-      run: uv sync --group llslibdev
+      run: uv sync --group dev
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-      run: uv sync --group llslibdev
+      run: uv sync --group dev
```
🤖 Prompt for AI Agents
In .github/workflows/pylint.yaml around line 21, the workflow syncs the wrong
poetry group (llslibdev) which doesn't include pylint; change the command to
install the dev group instead by replacing the uv sync --group llslibdev
invocation with uv sync --group dev so pylint (and other dev tools) are
installed for the linter job and heavy ML deps from the llslibdev group are not
pulled into this job.
```diff
       python-version: '3.12'
     - name: Install dependencies
-      run: uv sync
+      run: uv sync --group llslibdev
```
Fix: install dev tools (pyright) — use dev group, not llslibdev
pyright is in the dev group; installing only llslibdev will not provide the pyright CLI and pulls in heavy ML deps unnecessarily.
```diff
-      run: uv sync --group llslibdev
+      run: uv sync --group dev
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-      run: uv sync --group llslibdev
+      run: uv sync --group dev
```
🤖 Prompt for AI Agents
.github/workflows/pyright.yaml around line 21: the workflow installs the wrong
dependency group ("llslibdev") which does not provide the pyright CLI and pulls
in heavy ML deps; change the group to "dev" so the pyright CLI is installed
(i.e., run the uv sync command with the dev group) and remove reliance on the
llslibdev group to avoid unnecessary dependencies.
```diff
       run: uv --version
     - name: Install dependencies
-      run: uv pip install --python ${{ matrix.python-version }} -e .
+      run: uv pip install --python ${{ matrix.python-version }} --group llslibdev .
```
Broken install command: uv pip install --group isn’t a valid flag — use uv sync with groups
uv pip install doesn’t support dependency groups; this will error. Use uv sync to install base + selected groups and the project (default behavior).
```diff
-      run: uv pip install --python ${{ matrix.python-version }} --group llslibdev .
+      run: uv sync --group dev --group llslibdev
```

Note: If your unit tests don't need the llslibdev stack, prefer `--group dev` only to keep CI lean.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-      run: uv pip install --python ${{ matrix.python-version }} --group llslibdev .
+      run: uv sync --group dev --group llslibdev
```
🤖 Prompt for AI Agents
.github/workflows/unit_tests.yaml around line 27: the workflow uses an invalid
command flag "uv pip install --group" which will fail because uv's "pip install"
subcommand doesn't accept groups; replace this with "uv sync" to install the
project plus dependency groups (e.g., uv sync --python ${{ matrix.python-version
}} --group llslibdev .), or if tests only need dev deps use uv sync --group dev
to keep CI lean.
tisnik
left a comment
LGTM
Actionable comments posted: 0
♻️ Duplicate comments (1)
.github/workflows/pylint.yaml (1)
21-21: Fix linter install group: use dev (contains pylint), not llslibdev
The linter job will fail to find pylint when syncing llslibdev. Install the dev group to avoid pulling heavy ML deps and ensure pylint is available.

```diff
-      run: uv sync --group llslibdev
+      run: uv sync --group dev
```
🧹 Nitpick comments (3)
pyproject.toml (3)
89-89: Unify twine requirement across groups to avoid resolver inconsistencies
Two different lower bounds are specified (>=6.1.0 vs >=5.1.1). Pick one and use it everywhere; suggest aligning on the newer dev constraint.

```diff
 [dependency-groups]
 dev = [
     @@
-    "twine>=6.1.0",
+    "twine>=6.1.0",
 ]
 @@
 build = [
     "build>=1.2.2.post1",
-    "twine>=5.1.1",
+    "twine>=6.1.0",
 ]
```

Also applies to: 140-140
93-96: Typo in comment: "dependecies" → "dependencies"
Minor spelling fix for clarity in documentation/comments.

```diff
-    # To check llama-stack API provider dependecies:
+    # To check llama-stack API provider dependencies:
```
102-103: Document why psycopg2-binary is required (context for future maintainers)
Add a short note that it's needed by some llama-stack providers, matching prior discussion. Helps avoid repeated "why not psycopg v3?" threads.

```diff
-    "psycopg2-binary>=2.9.10",
+    # Required by some llama-stack providers; keep psycopg2-binary for compatibility
+    "psycopg2-binary>=2.9.10",
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (6)
- `.github/workflows/pylint.yaml` (1 hunks)
- `.github/workflows/pyright.yaml` (1 hunks)
- `.github/workflows/unit_tests.yaml` (1 hunks)
- `Containerfile` (3 hunks)
- `pyproject.toml` (4 hunks)
- `test.containerfile` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (4)
- .github/workflows/unit_tests.yaml
- test.containerfile
- .github/workflows/pyright.yaml
- Containerfile
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2025-08-18T10:56:55.302Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:0-0
Timestamp: 2025-08-18T10:56:55.302Z
Learning: The lightspeed-stack project intentionally uses a "generic image" approach, bundling many dependencies directly in the base runtime image to work for everyone, rather than using lean base images with optional dependency groups.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:58:14.914Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:47-47
Timestamp: 2025-08-18T10:58:14.914Z
Learning: psycopg2-binary is required by some llama-stack providers in the lightspeed-stack project, so it cannot be replaced with psycopg v3 or moved to optional dependencies without breaking llama-stack functionality.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:55:18.873Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:43-45
Timestamp: 2025-08-18T10:55:18.873Z
Learning: The lightspeed-stack project updates dependencies every sprint as part of their regular maintenance cycle, which explains their preference for exact dependency pins rather than version ranges.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T10:57:39.258Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:59-59
Timestamp: 2025-08-18T10:57:39.258Z
Learning: In the lightspeed-stack project, transitive dependencies like faiss-cpu are intentionally pinned as top-level dependencies to maintain better control over the dependency graph and avoid version conflicts when bundling ML/LLM tooling packages.
Applied to files:
pyproject.toml
📚 Learning: 2025-08-18T11:45:59.915Z
Learnt from: matysek
PR: lightspeed-core/lightspeed-stack#292
File: pyproject.toml:77-77
Timestamp: 2025-08-18T11:45:59.915Z
Learning: torch==2.7.1 is available on PyPI and is a valid version that can be used in dependency specifications.
Applied to files:
pyproject.toml
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-pr
🔇 Additional comments (1)
pyproject.toml (1)
62-69: PyTorch remains bundled via llslibdev — please clarify PR intent
It looks like the `llslibdev` dependency-group still includes `torch==2.7.1` (and its peers `peft`/`trl`), and your containerfiles/workflows all install that group. The CPU-only index simply switches GPU→CPU wheels, but does not exclude PyTorch itself.

Key locations:
- pyproject.toml Lines 92–131: dependency-group `llslibdev` contains `torch`, `peft`, `trl`
- Containerfile Line 28: `RUN uv sync --locked --no-dev --group llslibdev`
- test.containerfile Line 26: `RUN uv sync … --group dev --group llslibdev`
- .github/workflows/pyright.yaml & pylint.yaml: `uv sync --group llslibdev`

If you intended to exclude PyTorch (and its dependent libs), please remove them from `llslibdev` or omit that group where appropriate. Otherwise, update the PR description to reflect that CPU-only PyTorch will still be bundled.
Description
LCORE-497, LCORE-498, LCORE-499: Bundle most of ansible, assist-installer, default run.yaml dependencies, no pytorch and libs that depends on it.
Type of change
Related Tickets & Documents
Checklist before requesting a review
Testing
Summary by CodeRabbit
Chores
Dependencies
CI / Workflows
Documentation