
Conversation

@github-actions
Contributor

This is an automated pull request to release the candidate branch into production, which will trigger a deployment.
It was created by the [Production PR] action.

github-actions bot and others added 2 commits November 13, 2025 21:30
* feat(onboarding): make completed items clickable in tracker

- Add clickable links for completed policies, vendors, and risks
- Links navigate to respective detail pages when items are completed
- Extract orgId from pathname for navigation
- Add hover effects to indicate clickability

* feat(onboarding): improve link styling and restore dropdown icons

- Add underline hover effect and pointer cursor for clickable links
- Restore dropdown icons for vendors and risks sections
- Add pointer-events-none to inner content for proper cursor behavior
- Hide expand button in minimized view when completed

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
@comp-ai-code-review

Comp AI - Code Vulnerability Scan

Analysis in progress...

Reviewing 1 file(s). This may take a few moments.


Powered by Comp AI - AI that handles compliance for you | Reviewed Nov 13, 2025, 09:42 PM

@vercel

vercel bot commented Nov 13, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| app (staging) | Building | Building | | Nov 13, 2025 9:42pm |

1 Skipped Deployment

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| portal (staging) | Skipped | Skipped | | Nov 13, 2025 9:42pm |

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@Marfuen Marfuen merged commit 6aa0ae6 into release Nov 13, 2025
9 of 11 checks passed

@comp-ai-code-review

comp-ai-code-review bot commented Nov 13, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

OSV scan found one low-severity npm CVE: GHSA-rwvc-j5jr-mgvh in package "ai" v5.0.0 (fixed in 5.0.52).


📦 Dependency Vulnerabilities

🟢 NPM Packages (LOW)

Risk Score: 2/10 | Summary: 1 low CVE found

| Package | Version | CVE | Severity | CVSS | Summary | Fixed In |
| --- | --- | --- | --- | --- | --- | --- |
| ai | 5.0.0 | GHSA-rwvc-j5jr-mgvh | LOW | N/A | Vercel's AI SDK's filetype whitelists can be bypassed when uploading files | 5.0.52 |

🛡️ Code Security Analysis

View 1 file(s) with issues

🔴 apps/app/src/app/(app)/[orgId]/components/OnboardingTracker.tsx (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Prototype pollution via vendor/risk/policy IDs used as object keys | HIGH |
| 2 | No auth check: subscribes to triggerJobId without verifying org/job ownership | HIGH |
| 3 | Unvalidated run.metadata used directly; may expose sensitive fields | HIGH |
| 4 | Meta fields cast without runtime checks (type confusion) | HIGH |

Recommendations:

  1. Prevent prototype pollution: never use untrusted strings directly as object keys on plain objects. Use Map or Object.create(null) for maps (e.g., const vendorsStatus = Object.create(null) or new Map()). Explicitly disallow dangerous keys such as '__proto__', 'constructor', and 'prototype' before assignment (see the first sketch after this list).
  2. When you must use plain objects, set keys safely (e.g., map.set(id, value) with Map) or validate id with a whitelist/regex (e.g., /^[a-zA-Z0-9_-]+$/) and reject suspicious values.
  3. Enforce server-side authorization: ensure the triggerJobId data stream is only provided when the authenticated user is authorized for that org/job. Client-side checks are insufficient; scope the subscription token on the server so users cannot subscribe to another org's triggerJobId (a route-handler sketch follows this list).
  4. Sanitize/filter run.metadata on the server before exposing it to clients. Remove secrets and any fields not intended for UI consumption. Treat metadata as untrusted when rendering and only render whitelisted fields.
  5. Apply runtime type validation before using metadata fields. Use Array.isArray(...) for arrays and typeof/Number checks for numbers, or a schema validator (zod/io-ts/Joi) to coerce/validate types rather than blind casts like (meta.vendorsTotal as number) || 0 (see the metadata-validation sketch after this list).
  6. Avoid implicit truthiness for numeric fields. Use safe parsing: let vendorsTotal = Number(meta.vendorsTotal); if (!Number.isFinite(vendorsTotal)) vendorsTotal = 0;
  7. When showing metadata-derived values in links or DOM, escape/encode or otherwise ensure rendered strings cannot break UI assumptions. Prefer to derive route params from server-validated IDs.
  8. Add logging/monitoring to detect unusual metadata keys (e.g., '__proto__') so you can detect attempted attacks.
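
As a rough illustration of recommendations 1-2, the sketch below tracks completion status keyed by untrusted IDs without touching the prototype chain. The identifier names (vendorsStatus, risksStatus, markVendorComplete) are hypothetical and do not come from the PR diff.

```typescript
// Sketch: pollution-safe status maps keyed by untrusted vendor/risk IDs.
// Names are illustrative; the real component's structure may differ.

const SAFE_ID = /^[a-zA-Z0-9_-]+$/;
const FORBIDDEN_KEYS = new Set(['__proto__', 'constructor', 'prototype']);

function isSafeId(id: string): boolean {
  return SAFE_ID.test(id) && !FORBIDDEN_KEYS.has(id);
}

// Option 1: Map entries never touch the prototype chain, so hostile keys are inert.
const vendorsStatus = new Map<string, boolean>();

function markVendorComplete(id: string): void {
  if (!isSafeId(id)) {
    console.warn('Rejected suspicious vendor id:', id); // detection hook (recommendation 8)
    return;
  }
  vendorsStatus.set(id, true);
}

// Option 2: if a plain object is required, start from a null prototype so a
// "__proto__" key cannot reach Object.prototype.
const risksStatus: Record<string, boolean> = Object.create(null);

function markRiskComplete(id: string): void {
  if (!isSafeId(id)) return;
  risksStatus[id] = true;
}
```

Of the two options, the Map variant is usually easier to reason about, because lookups like vendorsStatus.has(id) never consult inherited properties.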
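For recommendation 3, one way to keep the subscription server-scoped is to mint the realtime token only after verifying org membership. The sketch below is a hypothetical Next.js route handler; getSession, isMemberOfOrg, and createScopedRealtimeToken stand in for whatever auth and job-token utilities the app actually uses.

```typescript
import { NextResponse } from 'next/server';

// Hypothetical helpers; replace with the app's real session and token utilities.
declare function getSession(): Promise<{ userId: string } | null>;
declare function isMemberOfOrg(userId: string, orgId: string): Promise<boolean>;
declare function createScopedRealtimeToken(jobId: string): Promise<string>;

export async function GET(
  _req: Request,
  { params }: { params: { orgId: string; jobId: string } },
) {
  const session = await getSession();
  if (!session) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
  }

  // Server-side ownership check: the client never decides which jobs it may watch.
  const allowed = await isMemberOfOrg(session.userId, params.orgId);
  if (!allowed) {
    return NextResponse.json({ error: 'Forbidden' }, { status: 403 });
  }

  // The token is scoped to this single run, so a guessed triggerJobId from
  // another org is useless to the caller.
  const token = await createScopedRealtimeToken(params.jobId);
  return NextResponse.json({ token });
}
```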
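And for recommendations 4-6, a schema can replace the blind casts on run.metadata. The field names below (vendorsTotal, vendorsCompleted, completedVendorIds) echo the ones mentioned above, but the exact shape is an assumption.

```typescript
import { z } from 'zod';

// Coerce and bound each field; .catch() falls back to a safe default instead of throwing.
const OnboardingMetaSchema = z.object({
  vendorsTotal: z.coerce.number().int().nonnegative().catch(0),
  vendorsCompleted: z.coerce.number().int().nonnegative().catch(0),
  completedVendorIds: z.array(z.string().regex(/^[a-zA-Z0-9_-]+$/)).catch([]),
});

type OnboardingMeta = z.infer<typeof OnboardingMetaSchema>;

function parseMeta(raw: unknown): OnboardingMeta {
  // safeParse never throws; malformed metadata degrades to zeroed counts.
  const result = OnboardingMetaSchema.safeParse(raw);
  return result.success
    ? result.data
    : { vendorsTotal: 0, vendorsCompleted: 0, completedVendorIds: [] };
}

// Recommendation 6 without a schema library: explicit, finite, non-negative parsing.
function toCount(value: unknown): number {
  const n = Number(value);
  return Number.isFinite(n) && n >= 0 ? Math.trunc(n) : 0;
}
```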

💡 Recommendations

View 3 recommendation(s)
  1. Upgrade the vulnerable dependency: bump package "ai" to >=5.0.52 in package.json, update the lockfile, reinstall, and run tests to confirm no regressions.
  2. As an immediate mitigation in code that accepts uploads, enforce server-side file validation (check MIME type and file magic bytes, validate/normalize filenames and extensions) before passing files to the SDK; a minimal guard is sketched after this list.
  3. If you cannot upgrade immediately, add a targeted runtime guard where the SDK is invoked: reject or sanitize uploaded files that deviate from your allowed content types and log attempts for review.
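
As a sketch of items 2-3, a thin server-side guard can sit in front of the SDK until the dependency bump lands. The allow-list, the assertSafeUpload name, and the minimal magic-byte table are illustrative assumptions rather than part of the "ai" package.

```typescript
// Illustrative allow-list; adjust to the file types the product actually accepts.
const ALLOWED_MIME = new Set(['image/png', 'image/jpeg', 'application/pdf']);

// Magic-byte prefixes for the allowed types, used to verify content matches the claim.
const MAGIC_BYTES: Array<{ mime: string; prefix: number[] }> = [
  { mime: 'image/png', prefix: [0x89, 0x50, 0x4e, 0x47] },
  { mime: 'image/jpeg', prefix: [0xff, 0xd8, 0xff] },
  { mime: 'application/pdf', prefix: [0x25, 0x50, 0x44, 0x46] }, // "%PDF"
];

function sniffMime(bytes: Uint8Array): string | null {
  for (const { mime, prefix } of MAGIC_BYTES) {
    if (prefix.every((byte, i) => bytes[i] === byte)) return mime;
  }
  return null;
}

export function assertSafeUpload(file: { name: string; type: string; bytes: Uint8Array }): void {
  const sniffed = sniffMime(file.bytes);
  // Reject when the declared type is not allowed or disagrees with the actual content.
  if (!ALLOWED_MIME.has(file.type) || sniffed !== file.type) {
    console.warn('Rejected upload:', { name: file.name, declared: file.type, sniffed });
    throw new Error('Unsupported or mismatched file type');
  }
}
```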

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 13, 2025

@claudfuen
Contributor

🎉 This PR is included in version 1.59.0 🎉

The release is available on the GitHub releases page.

Your semantic-release bot 📦🚀
