
Conversation

@github-actions
Contributor

This is an automated pull request to release the candidate branch into production, which will trigger a deployment.
It was created by the [Production PR] action.

@comp-ai-code-review

comp-ai-code-review bot commented Nov 20, 2025

Comp AI - Code Vulnerability Scan

Analysis in progress...

Reviewing 13 file(s). This may take a few moments.


Powered by Comp AI - AI that handles compliance for you | Reviewed Nov 20, 2025, 05:54 PM

@vercel

vercel bot commented Nov 20, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| app (staging) | Ready | Preview | Comment | Nov 20, 2025 7:42pm |
| portal (staging) | Ready | Preview | Comment | Nov 20, 2025 7:42pm |

@CLAassistant

CLAassistant commented Nov 20, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
2 out of 3 committers have signed the CLA.

✅ Marfuen
✅ Itsnotaka
❌ github-actions[bot]
You have signed the CLA already but the status is still pending? Let us recheck it.

Co-authored-by: mintlify[bot] <109931778+mintlify[bot]@users.noreply.github.com>
Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
@vercel vercel bot temporarily deployed to staging – portal November 20, 2025 17:32 Inactive
@vercel vercel bot temporarily deployed to staging – app November 20, 2025 17:32 Inactive
* fix(portal): fix downloading device agent on safari

* fix(download-agent): align filenames and add logging

---------

Signed-off-by: Mariano Fuentes <marfuen98@gmail.com>
Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
@comp-ai-code-review

comp-ai-code-review bot commented Nov 20, 2025

Comp AI - Code Vulnerability Scan

Analysis in progress...

Reviewing 13 file(s). This may take a few moments.


Powered by Comp AI - AI that handles compliance for you | Reviewed Nov 20, 2025, 06:37 PM

* fix(portal): fix downloading device agent on safari

* fix(download-agent): align filenames and add logging

* fix(download-agent): remove temporary logging

---------

Signed-off-by: Mariano Fuentes <marfuen98@gmail.com>
Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
* fix(portal): fix downloading device agent on safari

* fix(download-agent): align filenames and add logging

* fix(download-agent): remove temporary logging

* fix(download-agent): remove logging for invalid download token

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
@comp-ai-code-review

comp-ai-code-review bot commented Nov 20, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

OSV scan: 2 high npm CVEs in xlsx (prototype pollution and ReDoS), 1 low in ai (filetype-whitelist bypass), and code-level injection issues (command execution and SQL/IDOR risks) in several files.


📦 Dependency Vulnerabilities

🟠 NPM Packages (HIGH)

Risk Score: 8/10 | Summary: 2 high, 1 low CVEs found

| Package | Version | CVE | Severity | CVSS | Summary | Fixed In |
| --- | --- | --- | --- | --- | --- | --- |
| xlsx | 0.18.5 | GHSA-4r6h-8v6p-xvw6 | HIGH | N/A | Prototype Pollution in sheetJS | No fix yet |
| xlsx | 0.18.5 | GHSA-5pgg-2g8v-p4x9 | HIGH | N/A | SheetJS Regular Expression Denial of Service (ReDoS) | No fix yet |
| ai | 5.0.0 | GHSA-rwvc-j5jr-mgvh | LOW | N/A | Vercel’s AI SDK's filetype whitelists can be bypassed when uploading files | 5.0.52 |

🛡️ Code Security Analysis

View 10 file(s) with issues

🔴 .github/workflows/trigger-tasks-deploy-main.yml (HIGH Risk)

# Issue Risk Level
1 Secrets exposed to deploy command (env vars) may be exfiltrated HIGH
2 Executing remote package via bunx (trigger.dev@4.0.6) with secrets HIGH
3 Third-party action oven-sh/setup-bun@v2 not pinned to commit SHA HIGH
4 Official actions pinned to major versions, not commit SHAs HIGH
5 Custom/self-hosted runner 'warp-ubuntu-latest-arm64-4x' may be untrusted HIGH
6 bun install may execute lifecycle scripts during install HIGH
7 No integrity verification for fetched packages or actions HIGH

Recommendations:

  1. Avoid exposing long-lived secrets to any step that runs untrusted code. Use GitHub OIDC (workload identity) or short-lived tokens for deployments instead of repository secrets where possible.
  2. Do not run remote/one-off packages with access to secrets. Preinstall the tool/vendor the CLI (or pin and verify its checksum) and run the local binary, or ensure the package is run in a least-privileged context with no secrets.
  3. Pin GitHub Actions to full commit SHAs (not just major/minor tags) so the exact code is auditable and immutable: e.g., uses: oven-sh/setup-bun@<full-commit-sha>.
  4. Treat self-hosted runners as high-trust components. If using self-hosted runners, restrict which repos can use them, run them in isolated environments, enforce image hardening, and monitor for compromise. Prefer GitHub-hosted runners if you cannot fully trust/harden the self-hosted runner.
  5. When installing packages in CI, avoid running package lifecycle scripts with access to secrets. Use flags that disable scripts where supported or audit lifecycle/script contents prior to allowing them to run. For bun, prefer --ignore-scripts if you cannot audit scripts, and ensure lockfile integrity is enforced.
  6. Implement integrity/attestation controls: pin action SHAs, verify package checksums or use lockfiles generated by reproducible package managers, enable Dependabot or similar for supply-chain alerts, and consider Sigstore/rekor for signing artifacts.
  7. Limit the scope of secrets passed to jobs: use environment protection rules, required reviewers for environment secrets, and only export the minimal tokens needed for a job. Mask/remove secrets from debug logs (avoid --log-level debug when secrets are available).

🔴 apps/api/buildspec.yml (HIGH Risk)

# Issue Risk Level
1 curl bash installer execution (supply-chain, remote script execution) HIGH
2 Dependency install can run package lifecycle scripts (supply-chain risk) HIGH
3 Shell commands use unvalidated env vars (potential injection/misconfig) HIGH
4 Missing validation for AWS/ECR/ECS env vars (AWS_ACCOUNT_ID, ECR, cluster) HIGH
5 Logging and ls output may leak secrets or sensitive file names to logs HIGH
6 Copying prisma and node_modules into image can embed secrets in image HIGH
7 Using 'latest' tag fallback can cause deploys of wrong or stale images HIGH
8 Fallback install steps use --ignore-scripts, unpredictably altering installs HIGH

Recommendations:

  1. Avoid piping remote scripts to a shell. Download the bun installer, verify its integrity (checksum or signature) and run it from a controlled environment (or pin to a vendor-provided archive).
  2. Install dependencies from a verified lockfile and disable lifecycle scripts where possible. Prefer flags that enforce reproducible installs (e.g., --frozen-lockfile) and explicitly disallow running package lifecycle scripts in CI unless required.
  3. Validate and fail-fast on all critical CI env vars prior to use. Add checks for AWS_ACCOUNT_ID, ECR_REPOSITORY_URI, ECS_CLUSTER_NAME, ECS_SERVICE_NAME, and APP_NAME (the script already checks some vars but several critical AWS/ECS vars are missing).
  4. Avoid printing directory listings and sensitive envs in CI logs. Remove or redact ls/echo commands that might expose filenames or secrets (or restrict log retention/visibility).
  5. Avoid copying entire node_modules/.prisma or other build artifacts that might contain secrets into the image. Use reproducible multi-stage Docker builds that install only production dependencies in the final image and avoid bundling workspace-local credential files.
  6. Do not rely on the 'latest' fallback for IMAGE_TAG. Fail the build if COMMIT_HASH is not available, or use a deterministic fallback (e.g., CI build ID) and ensure deploys reference immutable tags.
  7. If ignoring lifecycle scripts is required as a fallback, treat it as a signal to fail the build or make the behavior explicit; document differences and ensure CI reproducibility across fallback branches.

🔴 apps/api/src/attachments/attachments.service.ts (HIGH Risk)

# Issue Risk Level
1 Missing auth checks/IDOR: orgId/entityId used without verifying caller HIGH
2 Direct use of user IDs in DB queries (possible ORM/SQL abuse) HIGH
3 Unvalidated filename used in S3 key — key/path injection risk HIGH
4 Client-controlled ContentType accepted without validation (content-type spoofing) HIGH
5 Original filename stored in S3 metadata — header injection risk HIGH
6 Generates signed URLs for all attachments — mass leakage/DoS risk HIGH
7 No malware/virus scanning or content inspection before storing files HIGH
8 No cleanup if DB write fails after S3 upload — orphan objects HIGH

Recommendations:

  1. Enforce authorization checks in the service/controller layer: verify the authenticated caller has access to the provided organizationId/entityId before any DB or S3 operations to prevent IDORs.
  2. Tie DB queries to authenticated identity and assert organization membership/roles server-side; never trust caller-supplied organizationId/entityId without authorization checks.
  3. Sanitize and validate all path components used in S3 keys (organizationId, entityId, entityType) — restrict allowed characters/lengths and reject or canonicalize unexpected input. Although fileName is sanitized in code, other path components are not.
  4. Validate content-type beyond the client-provided MIME type: check file signatures (magic bytes) and/or whitelist allowed extensions and map to expected MIME types before storing and before returning any signed URL.
  5. Sanitize metadata values (already implemented for fileName) and enforce strict checks on any user-controlled metadata. Continue stripping control characters and non-ASCII; also enforce max lengths. Consider avoiding storing sensitive user-provided values in object metadata.
  6. Avoid generating signed URLs in bulk. Generate per-request signed URLs after authorization checks. If bulk generation is required, paginate and rate-limit responses to reduce mass leakage/DoS risk.
  7. Integrate malware/AV scanning and content inspection (e.g., ClamAV, commercial scanning services) prior to committing objects to long-term storage or before returning any download URL. Consider quarantining files that fail scanning.
  8. Make S3 upload + DB create effectively transactional: if DB persist fails after successful S3 PutObject, delete the S3 object (best-effort retry) to avoid orphaned objects. Conversely, consider storing DB record first (with a state/state machine) then moving object to a permanent location once DB write confirmed.
  9. Ensure S3 objects are stored private (no public ACL) and rely on presigned URLs for access. Restrict AWS credentials and S3 permissions to the minimum required (principle of least privilege).
  10. Add rate limiting, logging, and monitoring for signed URL generation endpoints to detect abuse and reduce DoS/exfiltration risk.
  11. Consider bounding resource usage (e.g., max number of attachments returned, pagination) when listing attachments and generating URLs.
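
A minimal sketch of recommendations 3 and 8 above, assuming the AWS SDK v3 S3 client and a Prisma model named attachment (the actual model, field, and bucket names in the service may differ):

```typescript
import { S3Client, PutObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});
const SAFE_SEGMENT = /^[A-Za-z0-9_-]{1,64}$/; // allowlist for every key path component

function assertSafeSegment(value: string, name: string): string {
  if (!SAFE_SEGMENT.test(value)) throw new Error(`Invalid ${name}`);
  return value;
}

// Hypothetical signature: the real service likely injects the Prisma client and bucket name.
export async function uploadAttachment(
  db: { attachment: { create(args: { data: Record<string, unknown> }): Promise<unknown> } },
  bucket: string,
  organizationId: string,
  entityType: string,
  entityId: string,
  sanitizedFileName: string,
  body: Buffer,
  contentType: string,
) {
  // Validate every component that ends up in the S3 key, not only the file name.
  const key = [
    assertSafeSegment(organizationId, 'organizationId'),
    assertSafeSegment(entityType, 'entityType'),
    assertSafeSegment(entityId, 'entityId'),
    `${Date.now()}-${sanitizedFileName}`,
  ].join('/');

  await s3.send(new PutObjectCommand({ Bucket: bucket, Key: key, Body: body, ContentType: contentType }));

  try {
    // Persist the record only after the object exists.
    return await db.attachment.create({ data: { organizationId, entityType, entityId, s3Key: key } });
  } catch (err) {
    // Best-effort cleanup so a failed DB write does not leave an orphaned object behind.
    await s3.send(new DeleteObjectCommand({ Bucket: bucket, Key: key })).catch(() => undefined);
    throw err;
  }
}
```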

🟡 apps/app/customPrismaExtension.ts (MEDIUM Risk)

# Issue Risk Level
1 Spawning prismaBinary may execute attacker-provided executable MEDIUM
2 Copies schema from resolved path without validating or checking symlinks MEDIUM
3 Schema path used in prisma generate can be attacker-controlled MEDIUM
4 Child process inherits full process.env, potentially exposing secrets MEDIUM
5 Dynamic env var name from options reads arbitrary host env vars MEDIUM

Recommendations:

  1. Validate and whitelist prisma binary absolute path before spawn: resolve the binary path, verify it points to a regular file (not a symlink) owned by a trusted user, check file mode to ensure it's executable, and restrict to known node_modules locations or an allowlist of safe paths.
  2. When copying the published schema, detect and reject symlinks and ensure the resolved realpath is inside an expected package directory (e.g., node_modules/@trycompai/db/dist). Use fs.lstat to detect symlinks and fs.realpath to compare canonical paths. Fail the build if validation fails.
  3. Normalize and restrict schema source candidates: only accept schema files from within the package's dist directory (or other explicit, allowlisted locations). Avoid copying files from arbitrary resolved paths without provenance checks.
  4. Do not pass ...process.env to spawned processes. Build an env object containing only the variables required for the generation (e.g., DATABASE_URL, DIRECT_URL/DIRECT_DATABASE_URL, PRISMA_HIDE_UPDATE_MESSAGE), and explicitly omit secrets that are not needed. If additional vars are necessary, explicitly enumerate them in code or configuration.
  5. Sanitize and whitelist directUrlEnvVarName: if an external option provides the name of an env var, validate it against a predefined allowlist (e.g., ["DIRECT_URL","DIRECT_DATABASE_URL","DATABASE_URL"]) or match a strict pattern, and document the allowed names. Avoid reading arbitrary host env vars by name without validation.
  6. Run child processes with least privilege and isolation where possible: use spawn with options to limit uid/gid where supported, set a minimal PATH, and, if applicable, run generation inside an isolated environment or container during build.
  7. Log and fail loudly on unexpected conditions: if schema resolution yields a path outside expected locations or if any validation step fails (symlink detection, ownership checks, binary verification), stop the build rather than silently proceeding.
  8. Prefer execFile/spawn with argument arrays and avoid shell execution to reduce shell-injection risks. The code already uses spawn with an argv array for prisma, which is good — keep that pattern and ensure prismaBinary is an absolute path validated as above.
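
A minimal sketch of recommendations 1, 4, and 8 above: resolve the prisma binary to a real path inside the project's node_modules, then spawn it with an argv array and a trimmed environment. The paths and env-var names are illustrative, not taken from the actual extension:

```typescript
import { spawn } from 'node:child_process';
import { lstatSync, realpathSync } from 'node:fs';
import path from 'node:path';

function resolveTrustedPrismaBinary(projectRoot: string): string {
  const candidate = path.join(projectRoot, 'node_modules', '.bin', 'prisma');
  const real = realpathSync(candidate); // follows the .bin symlink to the actual file
  const trustedRoot = path.join(realpathSync(projectRoot), 'node_modules') + path.sep;
  if (!real.startsWith(trustedRoot) || !lstatSync(real).isFile()) {
    throw new Error(`Refusing to execute prisma binary outside node_modules: ${real}`);
  }
  return real;
}

export function generateClient(projectRoot: string, schemaPath: string): Promise<void> {
  const prismaBinary = resolveTrustedPrismaBinary(projectRoot);
  // Forward only what prisma generate needs; never spread process.env into the child.
  const env: NodeJS.ProcessEnv = {
    PATH: process.env.PATH,
    DATABASE_URL: process.env.DATABASE_URL,
    DIRECT_URL: process.env.DIRECT_URL, // assumed name; check it against an allowlist if configurable
    PRISMA_HIDE_UPDATE_MESSAGE: '1',
  };
  return new Promise((resolve, reject) => {
    const child = spawn(prismaBinary, ['generate', '--schema', schemaPath], { env, stdio: 'pipe' });
    child.on('error', reject);
    child.on('close', (code) =>
      code === 0 ? resolve() : reject(new Error(`prisma generate exited with code ${code}`)),
    );
  });
}
```

Note that in a hoisted monorepo the allowlisted root may need to be the workspace root rather than the package root.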

🟡 apps/portal/src/app/(app)/(home)/[orgId]/components/tasks/DeviceAgentAccordionItem.tsx (MEDIUM Risk)

# Issue Risk Level
1 Download token placed in URL query string (referer/logs leakage) MEDIUM
2 Download endpoint uses GET with token (CSRF/referrer exposure) MEDIUM
3 Token issuance may lack authentication/authorization checks MEDIUM
4 No validation of token response before using it MEDIUM
5 Org/employee IDs sent to API without client-side validation MEDIUM
6 Console.error may leak sensitive error details in browser logs MEDIUM

Recommendations:

  1. Avoid placing auth tokens in URL query strings. Instead: have the download endpoint require an Authorization header (Bearer token) or return the file bytes directly from the token-issuance endpoint (POST) and stream them to the client (fetch -> blob -> createObjectURL) so tokens are not exposed in server logs, referer headers, or browser history.
  2. Do not serve the downloadable file via a GET endpoint that accepts tokens in query strings. Use POST or an authenticated endpoint that validates the caller (session cookie or Authorization header). If you must use a tokenized GET, ensure tokens are single-use, extremely short-lived, tied to the requesting principal, and revoked after first use.
  3. Enforce strict server-side authentication and authorization when issuing tokens. The server must verify the requestor's session/identity and that the orgId/employeeId in the request matches their privileges. Never rely on client-side checks.
  4. Validate token response before using it: check response.ok, verify the JSON shape, assert token exists and matches expected format, and handle missing/malformed tokens gracefully (do not proceed to construct a download URL).
  5. Minimize sensitive information sent from the client. While server-side validation is authoritative, avoid sending unnecessary identifiers from the client where possible. If identifiers must be sent, validate types/values client-side to reduce malformed requests and ensure CSRF protections are in place server-side.
  6. Avoid logging raw error objects to the console in production. Replace console.error(error) with sanitized error messages or log to a secure server-side logging system. Present end users with generic, non-sensitive error messages.
  7. Consider serving downloads with Content-Disposition: attachment from an authenticated endpoint and using short-lived, scoped tokens stored server-side (or in an HttpOnly cookie) to reduce exposure in logs and referer headers.
  8. If security-sensitive actions are performed during download issuance, protect endpoints with CSRF tokens or require same-site cookies/Authorization headers and verify the Origin/Referer server-side.
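
As a client-side illustration of recommendations 1 and 4 above, the component could request the token via POST, validate the response shape, and then fetch the file with the token in an Authorization header so it never appears in a URL. This sketch assumes the existing /api/download-agent routes are adapted to accept a bearer token; the filename is a placeholder:

```typescript
async function downloadAgent(): Promise<void> {
  // Org/employee context should be derived from the session server-side, not sent from the client.
  const tokenRes = await fetch('/api/download-agent/token', { method: 'POST' });
  if (!tokenRes.ok) throw new Error('Failed to prepare download');

  const payload: unknown = await tokenRes.json();
  const token = (payload as { token?: unknown }).token;
  if (typeof token !== 'string' || token.length < 32) throw new Error('Malformed token response');

  // The token travels in a header, so it never reaches logs, Referer headers, or browser history.
  const fileRes = await fetch('/api/download-agent', {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!fileRes.ok) throw new Error('Download failed');

  const blob = await fileRes.blob();
  const url = URL.createObjectURL(blob);
  const anchor = document.createElement('a');
  anchor.href = url;
  anchor.download = 'device-agent.pkg'; // placeholder filename
  anchor.click();
  URL.revokeObjectURL(url);
}
```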

🟡 apps/portal/src/app/api/download-agent/route.ts (MEDIUM Risk)

# Issue Risk Level
1 Download token sent in URL may leak via referer or server logs MEDIUM
2 Error logger records token, exposing sensitive token in logs MEDIUM
3 HEAD endpoint doesn't consume/delete token, enabling token probing MEDIUM
4 No rate limiting on download endpoints allows brute-force token guessing MEDIUM
5 No explicit validation of token expiry/createdAt before use MEDIUM

Recommendations:

  1. Accept the download token in an Authorization header or request body instead of as a URL query parameter to avoid referer and proxy leakage. If query params must be used, ensure strict short TTLs and single-use semantics.
  2. Avoid logging raw tokens. Redact or hash tokens before logging (e.g., log only a truncated hash or token fingerprint). Remove token from structured error logs in production.
  3. Make tokens single-use for both GET and HEAD. Either consume/delete the token on HEAD or disallow HEAD for token-protected downloads. At minimum, treat HEAD the same as GET in terms of single-use semantics to prevent probing.
  4. Implement rate limiting and anomaly detection on the download endpoints (per IP and per token prefix), and add exponential backoff / temporary blocking after repeated invalid attempts.
  5. Validate token metadata server-side: check createdAt against an explicit expiry, verify any intended usage flags, and rely on KV TTLs as an additional safeguard. Consider storing only hashed tokens in KV and comparing hashes to avoid logging/storage of raw tokens.
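
Recommendations 2, 3, and 5 above amount to never handling or persisting the raw token after issuance. A sketch, with the KV interface as a stand-in for whatever store the route actually uses:

```typescript
import { createHash, randomBytes } from 'node:crypto';

interface KV {
  set(key: string, value: string, opts: { ttlSeconds: number }): Promise<void>;
  get(key: string): Promise<string | null>;
  del(key: string): Promise<void>;
}

const hashToken = (token: string): string => createHash('sha256').update(token).digest('hex');

// Issuance: hand the raw token to the caller, persist only its hash with a short TTL.
export async function issueDownloadToken(kv: KV, payload: string): Promise<string> {
  const token = randomBytes(32).toString('hex');
  await kv.set(`download:${hashToken(token)}`, payload, { ttlSeconds: 300 });
  return token;
}

// Redemption: look up by hash, consume on first use (GET and HEAD alike), log only a fingerprint.
export async function redeemDownloadToken(kv: KV, token: string): Promise<string | null> {
  const key = `download:${hashToken(token)}`;
  const payload = await kv.get(key);
  if (!payload) {
    console.warn('invalid download token', { fingerprint: hashToken(token).slice(0, 8) });
    return null;
  }
  await kv.del(key);
  return payload;
}
```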

🟡 apps/portal/src/app/api/download-agent/token/route.ts (MEDIUM Risk)

# Issue Risk Level
1 Potential command injection via createFleetLabel with unsanitized inputs MEDIUM
2 Missing strict validation for orgId and employeeId fields MEDIUM
3 Client-supplied 'os' can override detection and be spoofed MEDIUM
4 Sensitive identifiers logged (userId, memberId, orgId, employeeId) MEDIUM
5 Error stack logged may leak secrets or internal details MEDIUM
6 Download token not bound to user/IP or single-use MEDIUM

Recommendations:

  1. Inspect createFleetLabel implementation: if it invokes shell commands (exec/spawn) or writes scripts, ensure all inputs (employeeId, memberId, os, paths) are strictly validated/escaped or, better, use safe APIs that avoid shelling out. If createFleetLabel is purely DB/file metadata operations, confirm there is no exec/eval usage and document it.
  2. Enforce strict schemas for orgId and employeeId (e.g., UUID or fixed regex) and validate types/lengths. Use a runtime validator (zod/joi/ts-schema) to reject unexpected values rather than only checking truthiness.
  3. Do not allow unattended spoofing by relying solely on a client-supplied 'os'. Either require server-side detection only or require that the client-supplied os be authorized (e.g., via an additional attestation), and log when overrides occur. Consider rejecting an 'os' override for sensitive operations.
  4. Redact or hash identifiers in logs in production (e.g., log truncated IDs or salted hashes) and keep full IDs only in secure audit logs with restricted access. Add log-level controls so production logs do not emit sensitive fields by default.
  5. Avoid logging full error stacks in production. Log a safe error identifier/message and store full stack traces in a secure diagnostics system accessible only to authorized engineers. Sanitize any error data before logging.
  6. Bind short-lived tokens to context: consider binding to the requesting userId and optionally IP range or user agent; mark tokens single-use by deleting the KV entry on first use; keep TTL short (5 minutes is OK) and add strict usage checks in download handler.
  7. Add rate limiting (per user/org/IP) and monitoring on this endpoint to reduce abuse and token brute-force attempts.
  8. Add input size/type checks for request body and headers and fail fast on malformed JSON or unexpected fields.
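
Recommendations 2 and 8 above can be handled with a runtime schema at the top of the handler. A sketch using zod; the ID pattern and os values are assumptions and should be tightened to the project's real formats:

```typescript
import { z } from 'zod';

// Illustrative format; replace with the project's real ID shape (UUID, CUID, prefixed IDs, ...).
const idSchema = z.string().regex(/^[A-Za-z0-9_-]{10,64}$/);

const tokenRequestSchema = z
  .object({
    orgId: idSchema,
    employeeId: idSchema,
    os: z.enum(['macos', 'windows']).optional(), // reject anything outside the supported set
  })
  .strict(); // fail fast on unexpected fields

export function parseTokenRequest(body: unknown) {
  const result = tokenRequestSchema.safeParse(body);
  if (!result.success) {
    // Generic failure: do not echo the raw payload back to the client or into logs.
    return { ok: false as const, error: 'Invalid request' };
  }
  return { ok: true as const, data: result.data };
}
```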

🟡 apps/portal/src/app/api/download-agent/utils.ts (MEDIUM Risk)

# Issue Risk Level
1 Unsanitized DB params (userId/orgId) may enable SQL injection MEDIUM
2 Distinct logs/responses allow user/org enumeration MEDIUM
3 Logging raw userId/orgId can leak sensitive identifiers MEDIUM

Recommendations:

  1. Validate and sanitize userId and orgId before using them in DB calls (e.g., ensure expected format/length, UUID checks, reject unexpected characters).
  2. Use the ORM's parameterized API (as in findFirst/findUnique) and avoid raw query strings; if raw SQL is ever used, convert to parameterized/prepared statements or use the ORM query builder.
  3. Unify error responses for lookup failures (do not reveal whether member or organization is missing). Return a generic authorization/lookup error and appropriate HTTP status code.
  4. Redact or hash identifiers in logs (e.g., only log a truncated/hash value or a safe correlation id) and avoid logging raw user/org IDs in production.
  5. Apply input validation and limits to userAgent (max length, sanitize control characters) before parsing to avoid resource exhaustion or unusual parsing edge cases.
  6. Add rate limiting and monitoring around these endpoints to reduce the effectiveness of enumeration attacks.
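
A sketch of recommendations 1-3 above; the Prisma model and field names are assumed rather than copied from the actual utils.ts:

```typescript
const ID_PATTERN = /^[A-Za-z0-9_-]{10,64}$/; // illustrative; match the project's real ID format

type MemberLookup =
  | { ok: true; memberId: string }
  | { ok: false; status: 403; error: 'Not authorized' };

export async function validateMemberAndOrg(
  db: { member: { findFirst(args: object): Promise<{ id: string } | null> } },
  userId: string,
  orgId: string,
): Promise<MemberLookup> {
  // Reject malformed identifiers before they reach the database.
  if (!ID_PATTERN.test(userId) || !ID_PATTERN.test(orgId)) {
    return { ok: false, status: 403, error: 'Not authorized' };
  }

  // Parameterized ORM call: client input selects rows, it never becomes query text.
  const member = await db.member.findFirst({
    where: { userId, organizationId: orgId },
    select: { id: true },
  });

  // One generic response for every failure mode, so callers cannot enumerate users or orgs.
  if (!member) return { ok: false, status: 403, error: 'Not authorized' };
  return { ok: true, memberId: member.id };
}
```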

🔴 packages/db/src/postinstall.ts (HIGH Risk)

# Issue Risk Level
1 Executes workspace prisma binary (node_modules/.bin), enabling attacker RCE HIGH
2 Copies schema from package paths without validation, enabling supply-chain tamper HIGH
3 Prisma generate can invoke schema generators, enabling arbitrary code execution HIGH
4 TOCTOU: existence checks before exec allow binary swap/race attacks HIGH
5 Passes full process.env and inherits stdio; child may access or exfiltrate secrets HIGH

Recommendations:

  1. Avoid executing filesystem-resolved .bin files from the workspace. Prefer a vetted Prisma CLI binary installed via the package manager with integrity verification (e.g., lockfile + package manager install), or use an official programmatic API or a pinned release binary with checksum verification instead of walking node_modules/.bin.
  2. Validate schema.prisma before copying/using: verify a checksum or signature produced at build-time, reject or sandbox schemas that include unexpected generator blocks, or only copy from a single explicit trusted path rather than searching upward through workspaces/INIT_CWD.
  3. Mitigate generator abuse: parse the schema and deny/strip non-official or external generator providers, or run prisma generate inside a restricted environment (unprivileged user, container, chroot) that limits the impact of arbitrary generator execution.
  4. Harden exec and TOCTOU defenses: re-validate the resolved binary immediately before exec (compare inode/device/mtime/hash), open the file securely (use fs.open with no-follow where available) and execute via a verified file descriptor or use OS-level protections. Avoid separate exists checks followed by exec where possible.
  5. Harden the child process environment: don't pass the full process.env blindly. Construct a minimal env with only necessary variables, remove secrets, and avoid stdio: 'inherit' — capture output (pipe) and sanitize logs before exposing them.
  6. Lock down filesystem permissions and ownership: check file ownership and permission bits before copying/executing (reject world-writable files or files not owned by expected user), and ensure copied schema lives in a directory with restrictive permissions.
  7. Add stronger logging and fail-fast policies: when generate is required for deploy, fail fast with clear errors; when running on CI or deploy agents, prefer pre-generated clients to avoid runtime generation.
  8. Consider signing/attestation: if distributing schema or binaries across packages in a monorepo, provide signatures or use an artifact server that provides integrity guarantees rather than implicit file resolution.
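
A sketch of the schema-copy hardening from recommendations 2 and 6 above; the package path echoes the one mentioned earlier but is still an assumption, and a monorepo layout may require a different trusted directory:

```typescript
import { copyFileSync, lstatSync, realpathSync, type Stats } from 'node:fs';
import path from 'node:path';

// Copy schema.prisma only from one explicit, trusted package location, refusing symlinks.
export function copyTrustedSchema(projectRoot: string, destination: string): void {
  const trustedDir = realpathSync(
    path.join(projectRoot, 'node_modules', '@trycompai', 'db', 'dist'), // assumed layout
  );
  const source = path.join(trustedDir, 'schema.prisma');

  const stats = lstatSync(source);
  if (stats.isSymbolicLink() || !stats.isFile()) {
    throw new Error(`Refusing to copy schema from a non-regular file: ${source}`);
  }
  if (!realpathSync(source).startsWith(trustedDir + path.sep)) {
    throw new Error('Schema source escapes the trusted package directory');
  }

  // Refuse to write through a symlinked destination.
  let destStats: Stats | undefined;
  try {
    destStats = lstatSync(destination);
  } catch (err) {
    if ((err as NodeJS.ErrnoException).code !== 'ENOENT') throw err;
  }
  if (destStats?.isSymbolicLink()) {
    throw new Error(`Refusing to overwrite a symlinked destination: ${destination}`);
  }

  copyFileSync(source, destination);
}
```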

🔴 packages/docs/cloud-tests/gcp.mdx (HIGH Risk)

# Issue Risk Level
1 Prerequisite asks for Owner/Editor access (overly broad) HIGH
2 Creates and requires pasting long-lived service account JSON key HIGH
3 No guidance on secure storage or rotation of service account keys HIGH
4 Instructions grant roles at organization level (over-privileged) HIGH
5 Documentation lists inconsistent/contradictory IAM roles (misconfig risk) HIGH
6 No recommendation to use Workload Identity Federation or short-lived creds HIGH
7 No instruction to encrypt service account key in transit or at rest HIGH
8 No guidance on audit logging, key expiration, or retention policies HIGH

Recommendations:

  1. Remove Owner/Editor prerequisite and document the exact minimal permissions required. Replace broad Owner/Editor asks with a least-privilege role matrix that states which roles are required and at which resource scope (project vs org).
  2. Avoid asking users to paste long-lived service account JSON keys into the UI. Support Workload Identity Federation, OAuth token exchange, or service account impersonation so no permanent JSON key is required.
  3. If a key must be used, require/implement short-lived credentials or ephemeral tokens and enforce automated rotation and expiration for any service account keys.
  4. Do not grant roles at the organization level unless necessary. Where org-level access is required, document why and limit to the minimal specific role(s) and add IAM Conditions to constrain scope where possible.
  5. Reconcile and publish a single authoritative list of required IAM roles and the exact resource level (project/org). Fix contradictions between the step-by-step creation guidance and the Permissions section.
  6. Encrypt any uploaded credentials at rest using a strong key management solution (e.g., Cloud KMS) and store secrets in a managed secret store (e.g., Secret Manager) rather than plaintext in the database or UI. Ensure TLS for in-transit protection.
  7. Implement and document audit logging and monitoring for credential use (who uploaded/used keys, when, and from where). Alert on anomalous usage and provide an easy way to revoke or rotate keys via the UI.
  8. Provide explicit guidance in the docs on key lifecycle management: rotation schedule, expiration, storage retention, revocation process, and least-privilege review cadence.
  9. Offer implementation alternatives and examples: using Workload Identity Federation, user-managed short-lived tokens, and service account impersonation. Provide instructions for admins who must use service account keys describing mitigations (store in Secret Manager, limit ACLs, rotate frequently).

💡 Recommendations

View 3 recommendation(s)
  1. Upgrade vulnerable npm packages: bump ai to a fixed release (>= 5.0.52 as reported) and upgrade or replace xlsx to a release that addresses GHSA-4r6h-8v6p-xvw6 and GHSA-5pgg-2g8v-p4x9 (or remove usage). Update package-lock/bun.lock and test functionality after upgrade.
  2. Eliminate command-exec injection in runtime/generation code: in packages/db/src/postinstall.ts and apps/app/customPrismaExtension.ts, stop executing workspace .bin files or unvalidated prismaBinary paths. Resolve to a pinned, validated binary path (deny symlinks), use execFile/spawn with a fixed argv array, and do not interpolate user-controlled input into shell commands. Re-validate the file immediately before exec.
  3. Prevent SQL/IDOR injection by validating and parameterizing inputs: in apps/portal/src/app/api/download-agent/utils.ts and apps/api/src/attachments/attachments.service.ts, enforce strict formats (e.g., UUID regex) for userId/orgId/entityId, never concatenate inputs into raw queries, and use ORM parameterized APIs (where: { id: userId }) so queries cannot be controlled by client input.

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 20, 2025

* refactor(prisma): improve schema resolution logic and add candidates search

* chore(prisma): add script to generate Prisma client after installation

* refactor(prisma): enhance schema resolution and update related logic

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
* refactor(prisma): improve schema resolution logic and add candidates search

* chore(prisma): add script to generate Prisma client after installation

* refactor(prisma): enhance schema resolution and update related logic

* refactor(prisma): update Prisma client generation script and remove old script

* chore(prisma): update postinstall script for Prisma client generation

* chore(prisma): remove postinstall script for Prisma client generation
* chore(prisma): add script to copy schema and generate client in deploy

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
* refactor(prisma): improve schema resolution logic and add candidates search

* chore(prisma): add script to generate Prisma client after installation

* refactor(prisma): enhance schema resolution and update related logic

* refactor(prisma): update Prisma client generation script and remove old script

* chore(prisma): update postinstall script for Prisma client generation

* chore(prisma): remove postinstall script for Prisma client generation

* chore(prisma): add script to copy schema and generate client in deploy

* chore(workflow): update DB package build step in deployment workflow
@comp-ai-code-review

comp-ai-code-review bot commented Nov 20, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

OSV scan found 2 high vulnerabilities in xlsx@0.18.5 (GHSA-4r6h-8v6p-xvw6 prototype pollution; GHSA-5pgg-2g8v-p4x9 ReDoS) and 1 low in ai@5.0.0 (filetype whitelist bypass, fixed in 5.0.52).


📦 Dependency Vulnerabilities

🟠 NPM Packages (HIGH)

Risk Score: 8/10 | Summary: 2 high, 1 low CVEs found

| Package | Version | CVE | Severity | CVSS | Summary | Fixed In |
| --- | --- | --- | --- | --- | --- | --- |
| xlsx | 0.18.5 | GHSA-4r6h-8v6p-xvw6 | HIGH | N/A | Prototype Pollution in sheetJS | No fix yet |
| xlsx | 0.18.5 | GHSA-5pgg-2g8v-p4x9 | HIGH | N/A | SheetJS Regular Expression Denial of Service (ReDoS) | No fix yet |
| ai | 5.0.0 | GHSA-rwvc-j5jr-mgvh | LOW | N/A | Vercel’s AI SDK's filetype whitelists can be bypassed when uploading files | 5.0.52 |

🛡️ Code Security Analysis

View 10 file(s) with issues

🟡 .github/workflows/trigger-tasks-deploy-main.yml (MEDIUM Risk)

# Issue Risk Level
1 Secrets in env may be exposed in job logs or by subprocesses MEDIUM
2 Debug log-level (--log-level debug) can leak secrets and tokens MEDIUM
3 Mutable action tags (e.g., @v4, @v2) risk supply-chain tampering MEDIUM
4 Custom runner 'warp-ubuntu-latest-arm64-4x' may expose secrets if self-hosted MEDIUM
5 Build/install steps execute repo code with secrets in env (exfiltration risk) MEDIUM
6 actions/checkout may fetch full repo history containing past secrets MEDIUM

Recommendations:

  1. Remove or avoid --log-level debug in CI runs that use real secrets; use info or omit debug. Only enable debug in isolated, ephemeral runs.
  2. Pin actions to specific commit SHAs (e.g., actions/checkout@<full-commit-sha>) rather than mutable tags (v2, v4).
  3. If using self-hosted runners, ensure they are dedicated/isolated for CI, fully patched, have minimal network access, and use ephemeral runners where possible. Prefer GitHub-hosted runners when feasible.
  4. Limit secret exposure to only the step(s) that need them. Do not set deploy secrets at the job level if earlier build/install steps don't require them; instead set env on the specific deploy step.
  5. Set actions/checkout fetch-depth: 1 to avoid checking out full history and reduce exposure of historical secrets.
  6. Use least-privilege and ephemeral tokens for deploys (short-lived tokens, scoped to only required APIs). Rotate tokens regularly.
  7. Avoid running arbitrary install/build scripts with sensitive env present. If build requires secrets, consider using a separate job with restricted permissions and isolated runner, or use build-time secret injection features that don't expose them to other steps.
  8. Ensure tools invoked (e.g., bunx, prisma, trigger.dev CLI) do not write secrets to logs; if they do, update invocation or wrap to prevent printing secrets. Consider sanitizing outputs or using redaction/masking.
  9. Audit and pin third-party action sources and runner images to reduce supply-chain risk; use Dependabot or similar for updates and monitor advisory feeds.

🔴 apps/api/buildspec.yml (HIGH Risk)

# Issue Risk Level
1 curl bash installer (https://bun.sh) allows remote code execution HIGH
2 Build logs may expose secrets by echoing env vars or paths HIGH
3 Docker build context copies node_modules and prisma — may leak secrets HIGH
4 Including host node_modules in image can introduce malicious modules HIGH
5 Fallback to non-frozen bun install may install unvetted dependencies HIGH
6 Unvalidated env vars used in docker tag/name can alter commands or tags HIGH
7 Running build as root increases impact if malicious code runs HIGH
8 No secret or malware scanning of artifacts/images before push to ECR HIGH

Recommendations:

  1. Do not pipe remote install scripts into a shell. Instead, download the installer, verify its integrity (checksum/signature) and inspect it before execution. Prefer installing bun from a vetted package manager or pinned binary.
  2. Avoid printing environment variables or potentially sensitive paths/filenames to build logs. Mask secrets in CI (use secret masking features) and remove unnecessary echo/ls statements that reveal repository structure or filenames.
  3. Reduce Docker build context. Add a .dockerignore to exclude node_modules, local .env, and other sensitive files. Copy only the build artifacts required for runtime (use multi-stage builds to build in one stage and copy only dist/ and package manifests into the final image).
  4. Do not copy host node_modules into the image. Rebuild dependencies inside a clean build stage (or install a production-only node_modules in the image) to avoid including unreviewed or tampered modules from the build host.
  5. Require frozen lockfiles; fail the build if the lockfile cannot be used. Remove fallback install paths that ignore the lockfile or ignore install scripts. If you must allow fallbacks, make them explicit and audited.
  6. Treat environment variables used in command arguments as untrusted: validate and sanitize values used in docker tags, repository names or any shell-invoked commands. Use safe argument passing and avoid constructing complex shell commands with unvalidated variables.
  7. Run the build as a non-root user inside containers/CI where possible to reduce blast radius if malicious code executes. In Dockerfiles, set a non-root USER for runtime images and use least-privilege for build processes.
  8. Add automated scanning of built artifacts and container images before pushing: secret detection (git-secrets, truffleHog), vulnerability scanners and container scanners (Trivy, Clair, etc.). Enforce scanning as a gating step in CI.
  9. Use signed, immutable images and repositories. Restrict who/what can push to the ECR repo via IAM. Enforce image provenance and consider image signing (e.g., cosign) for production deployments.

🔴 apps/api/src/attachments/attachments.service.ts (HIGH Risk)

# Issue Risk Level
1 Missing authorization checks on upload/get/delete endpoints HIGH
2 Client-supplied organizationId allows data access spoofing HIGH
3 Insufficient validation of file content/type allows malicious files (SVG/XSS) HIGH
4 UserId can be spoofed if set from client input HIGH
5 Signed URLs generated for all attachments may overexpose objects HIGH
6 Detailed error logs may leak sensitive info (AWS, stacktraces) HIGH

Recommendations:

  1. Enforce authorization at the API boundary and within service calls: verify the caller's identity and that they belong to the provided organizationId and have rights to the target entity before performing upload/get/delete.
  2. Do not trust client-supplied organizationId or userId. Derive organizationId/userId from the authenticated context (JWT/session) or validate them against an authoritative source before using them in DB queries or S3 keys.
  3. Validate uploaded files beyond Content-Type header: check magic bytes, restrict/whitelist allowed MIME types and extensions, explicitly block SVG/HTML-like payloads if they can be rendered in a browser, and integrate malware/AV scanning for uploads.
  4. Limit exposure of S3 objects: keep bucket objects private, generate signed URLs only on-demand, use the shortest practical expiry, and consider additional access controls (token-based checks, proxying downloads through the app, or generating per-request authorization).
  5. Avoid logging raw error objects and sensitive values. Log structured, non-sensitive error codes/messages and capture full stack/exception details in a secure error-tracking system with restricted access.
  6. Harden S3 usage: use least-privilege IAM roles/policies, enforce server-side encryption, set appropriate Content-Disposition/Content-Security headers when serving files, and ensure metadata keys/values follow allowed character sets.
  7. Ensure user-supplied fields used in DB operations are validated and constrained (e.g., check length / allowed characters) even if sanitized for filenames/headers.
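
Recommendation 3 above (trusting file signatures rather than the Content-Type header) could start roughly like this; the allowlist is illustrative and should be replaced with the types the product actually accepts:

```typescript
// Allowed MIME types mapped to their expected leading bytes (magic numbers).
const MAGIC_BYTES: Record<string, number[][]> = {
  'application/pdf': [[0x25, 0x50, 0x44, 0x46]], // %PDF
  'image/png': [[0x89, 0x50, 0x4e, 0x47]],
  'image/jpeg': [[0xff, 0xd8, 0xff]],
};

export function contentMatchesDeclaredType(buffer: Buffer, declaredType: string): boolean {
  const signatures = MAGIC_BYTES[declaredType];
  if (!signatures) return false; // unknown or browser-renderable types (SVG/HTML) are rejected outright
  return signatures.some((sig) => sig.every((byte, i) => buffer[i] === byte));
}

// Usage in the upload path (illustrative):
// if (!contentMatchesDeclaredType(body, contentType)) throw new BadRequestException('Unsupported file type');
```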

🔴 apps/app/customPrismaExtension.ts (HIGH Risk)

# Issue Risk Level
1 Shell injection via manifest.runtime in layer command string HIGH
2 Executing potentially untrusted binaries from workspace/node_modules HIGH
3 Untrusted dependency versions used when adding layer dependencies HIGH
4 Copying schema file without validating path or symlinks HIGH
5 Child process inherits full process.env exposing secrets to child HIGH

Recommendations:

  1. Avoid building shell command strings. Pass executable and args as arrays (no shell interpretation) or strictly validate/whitelist manifest.runtime and the result of binaryForRuntime() before interpolating into a command string.
  2. Do not execute binaries directly from untrusted node_modules without verification. Resolve the prisma binary to a trusted, canonical location, validate it (e.g., checksum/signature), and prefer invoking the project-local CLI via a controlled wrapper or a known, pinned runtime.
  3. Do not trust dependency versions coming from manifests or options without validation. Use pinned versions or an allowlist, validate semver ranges, or fetch versions from a controlled source. Consider locking dependencies in a lockfile and refusing dynamic version injection at build time.
  4. Validate schema paths before copying: resolve real paths (fs.realpath), disallow symlinks that escape the intended package root, ensure the resolved path is within an expected directory, and verify ownership/permissions. Avoid blindly copying files from arbitrary workspace locations.
  5. Avoid passing the full process.env to child processes. Construct a minimal env object containing only necessary variables (e.g., DATABASE_URL or DIRECT_URL) and explicitly omit secrets that are not needed. If you must forward secrets, document and restrict which ones are forwarded and consider redaction/logging controls.
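
For recommendations 1 and 5 above, the layer command can be expressed as an executable plus an argument array with a minimal environment, rather than an interpolated shell string. A sketch; the runtime allowlist and env-var names are assumptions:

```typescript
import { execFile } from 'node:child_process';

const ALLOWED_RUNTIMES = new Set(['node', 'bun']); // illustrative allowlist for manifest.runtime

export function runPrismaGenerate(runtime: string, prismaCliEntry: string, schemaPath: string): Promise<void> {
  if (!ALLOWED_RUNTIMES.has(runtime)) {
    throw new Error(`Unsupported runtime: ${runtime}`);
  }
  return new Promise((resolve, reject) => {
    // execFile never spawns a shell, so neither the runtime nor the paths can inject extra commands.
    execFile(
      runtime,
      [prismaCliEntry, 'generate', '--schema', schemaPath],
      {
        env: {
          PATH: process.env.PATH,
          DATABASE_URL: process.env.DATABASE_URL, // forward only what generate needs
        },
      },
      (error) => (error ? reject(error) : resolve()),
    );
  });
}
```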

🟡 apps/portal/src/app/(app)/(home)/[orgId]/components/tasks/DeviceAgentAccordionItem.tsx (MEDIUM Risk)

# Issue Risk Level
1 Download token exposed in URL query string (referrer/logs leak) MEDIUM
2 No CSRF protection for POST /api/download-agent/token MEDIUM
3 OrgId/employeeId sent from client can be tampered locally MEDIUM
4 Raw server error text is shown to users via toast MEDIUM

Recommendations:

  1. Avoid placing sensitive tokens in URL query strings. Instead: return the file from a POST response, use a short-lived one-time download endpoint that consumes the token server-side and then serves the file (no token in client-visible URL), or perform the download via fetch() (with the token sent in an Authorization header or in the POST body) and stream the blob to createObjectURL for download so the token is not embedded in a navigable URL.
  2. If the API relies on cookies/session auth, add CSRF protection for state-changing endpoints: use SameSite=strict cookies, validate Origin/Referer headers, and/or implement CSRF tokens (double-submit cookie or hidden token) and verify them server-side. If the API requires an auth header (Bearer token) for this flow, require that instead of cookie-based auth for the token issuance endpoint.
  3. Never trust orgId/employeeId or other authorization parameters from the client. Enforce server-side authentication and authorization: derive org/user context from the authenticated session (or bearer token) and ignore client-supplied orgId/employeeId. Validate that the authenticated user is authorized to request the token for that organization/device.
  4. Do not show raw server error bodies to end users. Log detailed errors server-side and display generic, user-friendly messages to clients (e.g., "Failed to prepare download. Please try again or contact support."). If returning error details is necessary for troubleshooting in certain environments, only include sanitized, non-sensitive diagnostics or restrict to internal debug modes.
  5. Make the download tokens short-lived, single-use, and bound to context (user ID, session ID, IP range and/or user-agent fingerprint) to reduce risk if a token leaks. Invalidate tokens after first use.
  6. If you must include tokens in a URL (legacy constraints), ensure the endpoint that serves downloads does not leak the token in logs or referer: set proper cache-control headers, avoid redirects that expose the URL, and consider serving downloads from the same origin so external Referer leakage is minimized. Still prefer preventing tokens in URLs where possible.

🔴 apps/portal/src/app/api/download-agent/route.ts (HIGH Risk)

# Issue Risk Level
1 Download token value logged on error (sensitive data exposure) HIGH
2 HEAD requests do not invalidate token (token replay possible) HIGH
3 No rate limiting or brute-force protection on token endpoint HIGH
4 Token query param lacks format validation before use HIGH
5 Error logs may contain full error objects exposing internal info HIGH
6 No throttling for large file downloads (bandwidth DoS risk) HIGH

Recommendations:

  1. Never log raw tokens. Redact or hash the token before logging (e.g., log a truncated hash or token ID instead of the full token).
  2. Either consume/invalidate tokens on HEAD requests as well or disable/deny HEAD for this endpoint. Ensure single-use tokens are deleted atomically before streaming if intended to be one-time use.
  3. Add rate limiting and brute-force protections keyed by IP and/or token (e.g., per-token attempt counters, IP backoff, CAPTCHA for repeated failures).
  4. Validate the token format and length before using it (reject obviously malformed values). Enforce strong token entropy and length server-side.
  5. Avoid logging full error objects. Sanitize errors before logging (log only safe fields: an error code, short message, and the redacted token id). Consider structured logs that omit stack traces or sensitive fields in production.
  6. Avoid proxying large file downloads through unthrottled application instances. Use presigned S3 URLs for direct client downloads, implement bandwidth shaping/throttling, streaming limits, connection limits per IP, or an edge caching/CDN to mitigate bandwidth DoS.
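
Recommendation 3 above could begin with a fixed-window counter keyed by client IP. This single-process sketch is only illustrative; a deployment spread across serverless instances would need a shared store such as Redis for the counters:

```typescript
// Naive in-memory fixed-window limiter; a sketch only, not suitable across multiple instances.
const WINDOW_MS = 60_000;
const MAX_ATTEMPTS = 10;
const attempts = new Map<string, { count: number; windowStart: number }>();

export function allowDownloadAttempt(clientIp: string): boolean {
  const now = Date.now();
  const entry = attempts.get(clientIp);
  if (!entry || now - entry.windowStart > WINDOW_MS) {
    attempts.set(clientIp, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}

// In the route handler (illustrative):
// const ip = request.headers.get('x-forwarded-for') ?? 'unknown';
// if (!allowDownloadAttempt(ip)) return new Response('Too many requests', { status: 429 });
```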

🟡 apps/portal/src/app/api/download-agent/token/route.ts (MEDIUM Risk)

# Issue Risk Level
1 Insufficient authorization for createFleetLabel MEDIUM
2 employeeId and orgId not validated or sanitized MEDIUM
3 No check that employeeId belongs to the organization/member MEDIUM
4 Potential command/path injection via createFleetLabel inputs MEDIUM
5 Sensitive PII and error stacks are logged MEDIUM
6 No rate limiting on token generation endpoint MEDIUM
7 Potential CSRF if session uses cookies MEDIUM

Recommendations:

  1. Enforce role-based authorization before creating fleet labels
  2. Validate and sanitize orgId and employeeId formats
  3. Verify employeeId belongs to the org/member before use
  4. Avoid logging stacks/PII; redact sensitive fields
  5. Add rate limiting and CSRF protections on this endpoint
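
Recommendations 1-3 above reduce to a single authorization query before any side effects such as createFleetLabel. A sketch with assumed Prisma model and field names:

```typescript
// Verify, in one parameterized query, that the employee (member) belongs to both the
// authenticated user and the organization named in the request before doing anything else.
export async function assertEmployeeInOrg(
  db: { member: { findFirst(args: object): Promise<{ id: string; role: string } | null> } },
  authenticatedUserId: string,
  orgId: string,
  employeeId: string,
): Promise<{ id: string; role: string }> {
  const member = await db.member.findFirst({
    where: { id: employeeId, organizationId: orgId, userId: authenticatedUserId },
    select: { id: true, role: true },
  });
  if (!member) {
    // Generic error: do not reveal whether the org, the member, or the link between them failed.
    throw new Error('Not authorized');
  }
  return member;
}
```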

🟡 apps/portal/src/app/api/download-agent/utils.ts (MEDIUM Risk)

# Issue Risk Level
1 Direct use of userId/orgId in DB queries without validation MEDIUM
2 Possible SQL injection depending on ORM and input handling MEDIUM
3 Logging userId/orgId may expose sensitive identifiers MEDIUM

Recommendations:

  1. Validate and sanitize userId and orgId at the API boundary: enforce type, length, and format (e.g., UUID regex if applicable). Reject or normalize invalid values before calling validateMemberAndOrg.
  2. Confirm the ORM (db) uses parameterized queries/escaped parameters. If using raw queries anywhere, switch to parameterized/raw-binding APIs. Prefer ORM query objects (no string concatenation) and avoid constructing SQL with user input.
  3. Enforce authorization checks: ensure the caller is authorized to query the specified org/member (e.g., compare authenticated user id / session against member.userId or organization membership).
  4. Avoid logging raw identifiers. Mask or redact userId/orgId in logs (e.g., log only truncated/hashed IDs or an internal correlation id).
  5. Return minimal error information to callers: avoid distinguishing 'member not found' vs 'organization not found' in ways that enable enumeration. Use generic error codes or 404 without revealing existence of resources.

🔴 packages/db/src/postinstall.ts (HIGH Risk)

# Issue Risk Level
1 Executes potentially attacker-controlled 'prisma' binary from node_modules/ancestor dirs HIGH
2 Copies schema.prisma without validation, enabling arbitrary overwrite via symlinks HIGH
3 Passes full process.env to spawned prisma, risking leakage of secrets HIGH

Recommendations:

  1. Limit binary resolution to the project's own node_modules (do not walk up ancestor directories). Only consider resolve(projectRoot, 'node_modules/.bin', executableName) or verify that the resolved binary is inside a trusted package directory within the same workspace.
  2. Verify Prisma binary integrity before execution (e.g., pinned Prisma dependency and/or checksum/signature verification) or invoke the Prisma CLI via the installed package (require('prisma')/node API or npx) rather than executing an arbitrary file found on disk.
  3. Reject or handle symlinks for the schema source and destination: use fs.lstat to detect and refuse to copy from symlinked sources, create schema files in a secure temporary path and atomically move into place, or ensure destination path is not a symlink (unlink or fail if it is).
  4. Validate the schema file contents or the source package identity before copying (only accept schema.prisma from the package's own directory inside node_modules/@trycompai/db or from an explicitly configured, trusted path).
  5. Do not pass the whole process.env into child processes. Construct a minimal env whitelist (NODE_ENV, PATH as needed) and explicitly remove secrets (e.g., TRIGGER_SECRET_KEY, other sensitive vars) before spawn. Consider using a sanitized copy of process.env.
  6. Require an explicit opt-in (e.g., env var PRISMA_GENERATE_ON_INSTALL=1 or a --force flag from a trusted operator) before running generate in postinstall hooks to avoid automatic execution during install in shared/CI/monorepo environments.
  7. Add logging and failure modes that do not leak secrets (avoid printing full env). When running during CI/automated flows, ensure secrets are not passed to spawned CLIs or logged to stdio.
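
Recommendations 5 and 6 above might look like this at the top of the postinstall script; the whitelist contents are illustrative:

```typescript
// Require an explicit opt-in before doing anything at install time.
if (process.env.PRISMA_GENERATE_ON_INSTALL !== '1') {
  console.log('Skipping prisma generate (set PRISMA_GENERATE_ON_INSTALL=1 to enable).');
  process.exit(0);
}

// Forward only a whitelisted environment to the spawned CLI; everything else (e.g.
// TRIGGER_SECRET_KEY and other secrets) is dropped rather than inherited.
const ENV_WHITELIST = ['PATH', 'NODE_ENV', 'DATABASE_URL', 'DIRECT_URL'] as const;
const childEnv: NodeJS.ProcessEnv = Object.fromEntries(
  ENV_WHITELIST.filter((name) => process.env[name] !== undefined).map((name) => [name, process.env[name]]),
);
```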

🔴 packages/docs/cloud-tests/gcp.mdx (HIGH Risk)

# Issue Risk Level
1 Prerequisite demands Owner/Editor project access HIGH
2 Instructions to paste full service account JSON into form HIGH
3 Service account added at organization level HIGH
4 Documentation lists inconsistent roles (potential overprivilege) HIGH
5 No guidance on key rotation or expiry HIGH
6 No mention of short-lived credentials or Workload Identity HIGH
7 No guidance on secure storage/encryption of service keys HIGH

Recommendations:

  1. Remove Owner/Editor as a prerequisite. Document minimal required IAM permissions and prefer narrowly scoped roles at the project or resource level. Provide an explicit list of least-privilege permissions required per feature.
  2. Avoid instructing users to paste raw service account JSON. Instead: a) offer secure upload to a backend that validates and stores the key into a secrets manager (e.g., Google Secret Manager) with access controls, or b) support Workload Identity Federation or OAuth flows so no long-lived JSON key is shared.
  3. Do not require granting roles at the organization level unless absolutely necessary. If org-level access is required, document why and restrict the role with IAM Conditions and tight auditing. Prefer granting roles at the project(s) where scanning runs.
  4. Resolve role inconsistencies and minimize privileges. Clearly document exact roles the integration needs (and at what scope). Replace broad roles (roles/viewer, Owner/Editor) with narrowly scoped roles or custom roles containing only necessary permissions.
  5. Require and document key rotation and TTL for service account keys. Enforce regular rotation and provide guidance or automation (e.g., using Secret Manager versions, alerts, or an automated rotation job).
  6. Support and recommend short-lived credentials (Workload Identity Federation, OAuth tokens, or signed JWT flows) instead of long-lived JSON keys. Document how to configure Workload Identity as a best practice.
  7. Mandate secure storage practices: store keys in a managed secret store, encrypt at rest, limit access via IAM, enable audit logging of secret access, and provide guidance for secure deletion of keys after onboarding.
  8. Add monitoring and alerting guidance: instruct customers to enable and review Cloud Audit Logs for service account key creation/use, enable Security Command Center findings for suspicious usage, and consider automated alerts for unusual key activity.
  9. Provide an explicit security section in the docs covering how keys are transmitted, stored, accessed, rotated, and deleted by the Comp AI application (so users can assess risk before uploading keys).

💡 Recommendations

View 3 recommendation(s)
  1. Upgrade ai to 5.0.52 or later (the reported fixed version). Update package.json to that version, reinstall dependencies and run tests, then confirm the advisory is cleared.
  2. Replace xlsx@0.18.5 with a patched xlsx release that addresses GHSA-4r6h-8v6p-xvw6 and GHSA-5pgg-2g8v-p4x9. Update package.json, reinstall deps and run unit/integration tests to ensure no behavioral regressions.
  3. After upgrading, re-run your dependency vulnerability scan (npm/yarn audit or OSV) to verify the GHSA entries are resolved; if a patch is unavailable for xlsx, remove or sandbox usage of the package (avoid parsing untrusted files) until a fix is applied.

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 20, 2025

@Marfuen Marfuen merged commit 7b3ec13 into release Nov 20, 2025
10 of 11 checks passed
@claudfuen
Contributor

🎉 This PR is included in version 1.60.1 🎉

The release is available as a GitHub release

Your semantic-release bot 📦🚀
