
Conversation

@github-actions
Contributor

This is an automated pull request to release the candidate branch into production, which will trigger a deployment.
It was created by the [Production PR] action.

* feat(api): add AI chat endpoint for policy editing assistance, initial draft for ai policy edits

* fix: type error

* feat(policy-editor): integrate AI-assisted policy editing with markdown support

* refactor(api): streamline POST function and enhance markdown guidelines

* refactor(policy-editor): improve policy details layout and diff viewer integration

* refactor(policy-editor): simplify policy details component and enhance AI assistant integration

* refactor(policy-editor): remove unused AI assistant logic and simplify component structure

* feat(ui): add new components to package.json for diff viewer and AI elements

* chore: update lockfile

* refactor(tsconfig): reorganize compiler options and update paths

---------

Co-authored-by: Daniel Fu <itsnotaka@gmail.com>
Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>

comp-ai-code-review bot commented Nov 25, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

OSV scan found 2 HIGH CVEs in xlsx@0.18.5 and 1 LOW CVE in ai@5.0.0; multiple code locations show unvalidated DB inputs (SQL injection risk) for policyId/orgId params.


📦 Dependency Vulnerabilities

🟠 NPM Packages (HIGH)

Risk Score: 8/10 | Summary: 2 high, 1 low CVEs found

| Package | Version | CVE | Severity | CVSS | Summary | Fixed In |
| --- | --- | --- | --- | --- | --- | --- |
| xlsx | 0.18.5 | GHSA-4r6h-8v6p-xvw6 | HIGH | N/A | Prototype Pollution in sheetJS | No fix yet |
| xlsx | 0.18.5 | GHSA-5pgg-2g8v-p4x9 | HIGH | N/A | SheetJS Regular Expression Denial of Service (ReDoS) | No fix yet |
| ai | 5.0.0 | GHSA-rwvc-j5jr-mgvh | LOW | N/A | Vercel’s AI SDK's filetype whitelists can be bypassed when uploading files | 5.0.52 |

🛡️ Code Security Analysis

14 file(s) with issues:

🟡 apps/api/src/policies/dto/ai-suggest-policy.dto.ts (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Missing nested validation/type transformation for chatHistory items | MEDIUM |
| 2 | Role enum not validated at runtime | MEDIUM |
| 3 | No max length limits on instructions or chat content | MEDIUM |
| 4 | No input sanitization; stored/rendered content could XSS | MEDIUM |

Recommendations:

  1. Define a Message DTO (e.g., MessageDto with @IsIn(['user', 'assistant']) for role and @IsString() for content) and apply @ValidateNested({ each: true }) and @Type(() => MessageDto) to chatHistory, as shown in the sketch below.
  2. Add @IsEnum() (or @IsIn(['user', 'assistant'])) to validate role values at runtime.
  3. Apply length and size limits: @MaxLength/@MinLength on strings (instructions, content) and @ArrayMaxSize/@ArrayMinSize on chatHistory to limit array size.
  4. Do not rely on DTOs for sanitization. Sanitize or escape user-provided content before storing or rendering (e.g., use an HTML sanitizer library when rendering to clients).
  5. Enable Nest's global ValidationPipe with options { whitelist: true, transform: true, forbidNonWhitelisted: true } to enforce DTO validation and type transformation.
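
A minimal sketch of recommendations 1-3 using class-validator and class-transformer; MessageDto and the field names/limits here are illustrative assumptions, not the repo's actual DTO:

```ts
import { Type } from 'class-transformer';
import {
  ArrayMaxSize,
  IsIn,
  IsString,
  MaxLength,
  ValidateNested,
} from 'class-validator';

// Illustrative nested message DTO; names and limits are assumptions.
class MessageDto {
  @IsIn(['user', 'assistant'])
  role!: 'user' | 'assistant';

  @IsString()
  @MaxLength(10_000)
  content!: string;
}

export class AiSuggestPolicyDto {
  @IsString()
  @MaxLength(5_000)
  instructions!: string;

  // Validates each array element as a MessageDto and caps the array size.
  @ValidateNested({ each: true })
  @Type(() => MessageDto)
  @ArrayMaxSize(50)
  chatHistory!: MessageDto[];
}
```

Pair this with the global ValidationPipe from recommendation 5 ({ whitelist: true, transform: true, forbidNonWhitelisted: true }); without it, the decorators are never enforced at runtime.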

🔴 apps/api/src/policies/policies.controller.ts (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | No validation on route params (id) before service calls | HIGH |
| 2 | Request bodies passed to service without explicit sanitization | HIGH |
| 3 | Potential SQL injection if policiesService uses raw queries | HIGH |
| 4 | OrganizationId header is optional; org-scoping could be bypassed | HIGH |
| 5 | Exposes authenticated user email in API responses (PII leak) | HIGH |
| 6 | Policy content sent to external AI service may leak sensitive data | HIGH |
| 7 | OPENAI_API_KEY only checked for presence, no scope or usage checks | HIGH |
| 8 | No rate limiting on AI streaming endpoint; abuse risk | HIGH |

Recommendations:

  1. Validate and sanitize all params and request bodies
  2. Use parameterized queries/ORM and avoid string SQL concatenation
  3. Enforce organization ownership checks in the service layer
  4. Redact or obtain consent before sending policy content to external AI
  5. Add rate limiting and monitoring on the AI streaming endpoint
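
A hedged NestJS sketch of recommendations 1 and 3: ParseUUIDPipe assumes UUID-shaped ids (swap it if the app uses a different id format), and findOneScoped is a hypothetical service method that scopes the lookup to the organization:

```ts
import {
  Controller,
  ForbiddenException,
  Get,
  Headers,
  NotFoundException,
  Param,
  ParseUUIDPipe,
} from '@nestjs/common';

// Hypothetical service contract so this fragment stands alone; the real
// PoliciesService and its query logic live elsewhere in the API package.
export abstract class PoliciesService {
  abstract findOneScoped(id: string, organizationId: string): Promise<unknown | null>;
}

@Controller('policies')
export class PoliciesController {
  constructor(private readonly policiesService: PoliciesService) {}

  @Get(':id')
  async findOne(
    @Param('id', new ParseUUIDPipe()) id: string, // reject malformed ids before any DB call
    @Headers('x-organization-id') organizationId?: string,
  ) {
    // Make the organization header mandatory instead of optional (issue 4).
    if (!organizationId) {
      throw new ForbiddenException('Missing organization context');
    }
    // The service should filter by BOTH id and organizationId through
    // parameterized ORM queries, so cross-tenant reads fail even with a valid id.
    const policy = await this.policiesService.findOneScoped(id, organizationId);
    if (!policy) throw new NotFoundException();
    return policy;
  }
}
```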

🟡 apps/app/src/app/(app)/[orgId]/policies/[policyId]/editor/components/PolicyDetails.tsx (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Unvalidated AI markdown sent to updatePolicy (persistent XSS) | MEDIUM |
| 2 | Potential XSS when rendering diff/editor from unescaped content | MEDIUM |
| 3 | No client-side authorization check before calling updatePolicy | MEDIUM |
| 4 | No input size/complexity limits for AI-proposed content | MEDIUM |
| 5 | convertContentToMarkdown bug may throw and cause client DoS | MEDIUM |

Recommendations:

  1. Sanitize and validate AI-proposed markdown before converting/storing. For example: run the markdown -> TipTap JSON conversion through the same validateAndFixTipTapContent pipeline (or an equivalent sanitizer) before calling updatePolicy, and explicitly remove or normalize any HTML/unsafe node types or marks.
  2. Enforce server-side validation and authorization in updatePolicy. The client should not be relied upon for access control — ensure the server checks the caller's permissions and sanitizes/validates the incoming TipTap JSON payload before persisting.
  3. Escape or sanitize content when rendering diffs and editors. Ensure DiffViewer and PolicyEditor render only safe text (or use a secured HTML sanitizer) so stored content cannot inject scripts when diffed or displayed.
  4. Add size and complexity limits for AI-proposed markdown and TipTap JSON (max characters, max nodes depth/number of nodes). Reject or truncate overly large proposals client-side and server-side to avoid CPU/memory exhaustion or DoS.
  5. Harden convertContentToMarkdown: defensively check node shapes (null/undefined), verify node.content is an array before mapping, and ensure text extraction always returns a string. Wrap the conversion with try/catch and fallback to a safe plain-text representation to prevent client crash in presence of malformed documents.
  6. Apply the same sanitization for content coming from the AI assistant as for user-edited content — do not assume AI output is safe. Log and alert on inputs that require automatic fixes so you can monitor potential abuse.
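
A defensive sketch for recommendation 5; the TipTapNode shape is an assumption about the editor's JSON, and safeConvertContentToMarkdown is a hypothetical wrapper rather than the existing convertContentToMarkdown:

```ts
// Assumed minimal shape of a TipTap-style node; real documents may carry more fields.
type TipTapNode = {
  type?: string;
  text?: string;
  content?: TipTapNode[];
};

function extractPlainText(node: TipTapNode | null | undefined): string {
  if (!node || typeof node !== 'object') return '';
  if (typeof node.text === 'string') return node.text;
  // Verify content is actually an array before mapping over it.
  if (!Array.isArray(node.content)) return '';
  return node.content.map(extractPlainText).join('');
}

export function safeConvertContentToMarkdown(doc: unknown): string {
  try {
    return extractPlainText(doc as TipTapNode);
  } catch {
    // Fall back to an empty string rather than crashing the editor on malformed documents.
    return '';
  }
}
```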

🟡 apps/app/src/app/(app)/[orgId]/policies/[policyId]/editor/components/ai/policy-ai-assistant.tsx (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | No input validation before sendMessage(user input) | MEDIUM |
| 2 | Unvalidated policyId used directly in API path | MEDIUM |
| 3 | Backend error.message shown to users (information leak) | MEDIUM |
| 4 | Tool output passed to onProposedPolicyChange without sanitization | MEDIUM |
| 5 | API calls rely on same-origin cookies; possible CSRF risk | MEDIUM |

Recommendations:

  1. Validate and constrain user input on the client where appropriate (length, allowed characters) and — critically — validate and sanitize all input server-side before use. Treat all client-provided chat text as untrusted.
  2. Treat policyId as untrusted input: validate format (e.g., UUID or permitted slug pattern) and enforce authorization checks on the server for every request to /api/policies/:policyId/*. Do not rely on client-side checks.
  3. Do not display raw backend error.message to end users. Log detailed errors server-side; show a generic, user-friendly error message in the UI. If you must show details for debugging, gate them behind an authenticated admin/debug mode.
  4. Sanitize tool outputs before using them in contexts that could execute or render HTML (e.g., innerHTML, markdown rendering). If you only render plain text in React (no dangerouslySetInnerHTML), React will escape values, but still validate/canonicalize before applying to the policy editor. Use a vetted sanitizer (e.g., DOMPurify) when rendering HTML/markdown.
  5. Mitigate CSRF on the server: require explicit anti-CSRF tokens for state-changing endpoints or use same-site cookies (SameSite=strict/lax) combined with requiring a CSRF header or bearer token. Prefer authenticated bearer tokens (Authorization header) for APIs used by SPA backends.
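
A small client-side sketch of recommendations 2 and 3; the id pattern and error wording are assumptions, and the backend remains the authority for validation, authorization, and error handling:

```ts
// Assumed id format; swap the pattern for whatever the backend actually issues.
const POLICY_ID_PATTERN = /^[a-zA-Z0-9_-]{1,64}$/;

export function buildChatUrl(policyId: string): string {
  if (!POLICY_ID_PATTERN.test(policyId)) {
    throw new Error('Invalid policy id');
  }
  return `/api/policies/${encodeURIComponent(policyId)}/chat`;
}

export function toUserFacingError(err: unknown): string {
  // Log the raw error for operators; never surface backend error.message in the UI.
  console.error('AI assistant request failed', err);
  return 'Something went wrong while contacting the AI assistant. Please try again.';
}
```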

🔴 apps/app/src/app/(app)/[orgId]/risk/(overview)/actions/get-risks-action.ts (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Missing runtime validation: orgId/searchParams forwarded raw to getRisks | HIGH |
| 2 | Potential SQL injection via orgId/searchParams into data layer | HIGH |
| 3 | Missing authorization: no access check for provided orgId | HIGH |

Recommendations:

  1. Add runtime validation in getRisksAction (or before calling getRisks): parse and validate orgId (e.g., UUID or known org id format) and searchParams using the existing zod/nuqs schemas instead of relying on TypeScript types alone.
  2. Whitelist and validate dynamic query fields before building queries: ensure sort.id and filter.id are allowed model fields; constrain joinOperator to 'AND'|'OR' (you already parse this in validations — enforce at runtime here as well).
  3. Enforce authorization: verify the caller's session/user has access to the provided orgId before returning data.
  4. Harden the data layer: although Prisma uses parameterized queries, audit getRisks for any raw queries or usages of Prisma.$queryRaw; avoid interpolating user input into raw SQL. Also validate/limit paging (perPage) to prevent abuse.
  5. Add explicit bounds and type coercions for numeric params (page, perPage) and length limits for strings (title) to prevent resource exhaustion or unexpected behavior.
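
A sketch of recommendations 1, 2, and 5 with zod; the field names, bounds, and orgId pattern are assumptions standing in for the existing validation schemas:

```ts
import { z } from 'zod';

// Assumed shapes; align these with the real zod/nuqs schemas in data/validations.ts.
const getRisksInputSchema = z.object({
  orgId: z.string().regex(/^[a-zA-Z0-9_-]{1,64}$/),
  page: z.coerce.number().int().min(1).max(10_000).default(1),
  perPage: z.coerce.number().int().min(1).max(100).default(25),
  title: z.string().max(255).optional(),
  joinOperator: z.enum(['AND', 'OR']).default('AND'),
});

export type GetRisksInput = z.infer<typeof getRisksInputSchema>;

export function parseGetRisksInput(raw: unknown): GetRisksInput {
  // Throws a ZodError on malformed input; the action should map that to a validation error.
  return getRisksInputSchema.parse(raw);
}
```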

🔴 apps/app/src/app/(app)/[orgId]/risk/(overview)/data/getRisks.ts (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Missing auth check: orgId not authorized server-side | HIGH |
| 2 | Full assignee.user returned, may leak sensitive fields (password) | HIGH |
| 3 | Unvalidated pagination (page/perPage) allows 0/negative/huge values | HIGH |
| 4 | Unvalidated sort/filter fields allow arbitrary query keys | HIGH |
| 5 | filter.value used directly can embed Prisma operators | HIGH |
| 6 | joinOperator used directly; may allow invalid keys or prototype pollution | HIGH |

Recommendations:

  1. Validate caller authorization for orgId access
  2. Whitelist allowed sort and filter field names and operators
  3. Clamp page/perPage to safe min/max and reject zero/negative
  4. Sanitize filter values and restrict allowed Prisma operator shapes
  5. Omit sensitive user fields when returning assignee (select allowed fields)
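
A sketch of recommendations 2 and 5, assuming a Prisma data layer; the allowed column names and the assignee relation shape are assumptions about the Risk model:

```ts
// Whitelist of sortable columns; anything else is rejected before a query is built.
const ALLOWED_SORT_FIELDS = new Set(['title', 'status', 'createdAt', 'updatedAt']);

export function buildOrderBy(sortId: string, desc: boolean): Record<string, 'asc' | 'desc'> {
  if (!ALLOWED_SORT_FIELDS.has(sortId)) {
    throw new Error(`Unsupported sort field: ${sortId}`);
  }
  return { [sortId]: desc ? 'desc' : 'asc' };
}

// Only select the fields the table needs; never return the full user row (issue 2).
export const assigneeSelect = {
  assignee: {
    select: {
      user: { select: { id: true, name: true, image: true } },
    },
  },
} as const;
```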

🔴 apps/app/src/app/(app)/[orgId]/risk/(overview)/page.tsx (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | No authorization check before DB queries; may allow unauthorized data access | HIGH |
| 2 | orgId used directly in DB queries without validation | HIGH |
| 3 | Potential SQL injection if downstream code uses raw SQL with user inputs | HIGH |
| 4 | onboarding.triggerJobId exposed to client (info disclosure) | HIGH |
| 5 | getAssignees cache may serve stale or cross-tenant data if misused | HIGH |
| 6 | Search params passed to getRisks; ensure parsing prevents injection | HIGH |

Recommendations:

  1. Enforce authorization/ACL checks scoped to the orgId before any DB access. Verify the current user is a member of the organization and has the required role/permissions.
  2. Validate and normalize orgId (e.g., ensure it matches expected format like a UUID) before using it in queries. Fail fast on invalid values.
  3. Avoid raw SQL that concatenates user input. If any downstream function (e.g., getRisks) uses raw queries, convert to ORM parameterized queries or use proper parameter binding and input validation.
  4. Treat internal job identifiers as sensitive. Do not serialize or expose onboarding.triggerJobId to client-side code unless explicitly required and authorized. Consider returning a boolean (onboarding active) or a redacted/opaque token instead.
  5. Scope server-side caches to tenant (orgId) and consider TTL/invalidation. Ensure the caching mechanism's key includes orgId and that stale/cross-tenant responses are impossible in multi-tenant deployments.
  6. Ensure searchParamsCache.parse (or equivalent) strictly validates and normalizes all search params. Whitelist allowed fields, enforce types/lengths, and sanitize values before passing them into any DB queries.
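
A sketch of recommendations 1 and 2 for the server component, assuming a Prisma member table (the review above references db.member.findFirst); the db import path and id pattern are assumptions:

```ts
import { notFound } from 'next/navigation';
import { db } from '@/lib/db'; // hypothetical import path for the Prisma client

const ORG_ID_PATTERN = /^[a-zA-Z0-9_-]{1,64}$/; // assumed id format

export async function assertOrgAccess(orgId: string, userId: string) {
  if (!ORG_ID_PATTERN.test(orgId)) {
    notFound(); // fail fast on malformed ids
  }
  const membership = await db.member.findFirst({
    where: { organizationId: orgId, userId },
    select: { id: true, role: true },
  });
  if (!membership) {
    // Treat missing membership like a missing page to avoid leaking which orgs exist.
    notFound();
  }
  return membership;
}
```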

🟡 apps/app/src/app/(app)/[orgId]/vendors/(overview)/actions/get-vendors-action.ts (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Missing input validation for orgId and searchParams | MEDIUM |
| 2 | No authentication/authorization check before querying data | MEDIUM |

Recommendations:

  1. Validate and sanitize orgId at runtime (e.g., ensure correct UUID or org id format) before calling getVendors.
  2. Parse and validate searchParams using the existing vendorsSearchParamsCache.parse() (apps/.../data/validations.ts) or an equivalent runtime schema check before passing them to getVendors.
  3. Enforce authentication and org-level authorization in getVendorsAction (or earlier middleware). Verify the requester belongs to orgId and has permission to view vendor data.
  4. Audit the filters parsing logic (whereClause assembly) before enabling complex filters. If you later implement any raw queries, use parameterized queries only.
  5. Add tests that exercise malformed/edge inputs for orgId and searchParams and ensure requests are rejected or normalized.

🟡 apps/app/src/app/(app)/[orgId]/vendors/(overview)/components/VendorsTable.tsx (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Unvalidated query params sent to server action (SQL injection risk) | MEDIUM |
| 2 | Potential XSS: vendor/item names may be rendered unsanitized | MEDIUM |
| 3 | 1s polling when onboarding active may enable DoS/resource exhaustion | MEDIUM |

Recommendations:

  1. Validate and sanitize searchParams server-side before DB use.
  2. Use parameterized queries or ORM to prevent SQL injection.
  3. Escape or sanitize user-provided strings before rendering.
  4. Rate-limit polling, increase interval, or use server push.
  5. Enforce strict input schemas and runtime validation on actions.
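
A sketch of recommendation 4 only; usePolitePolling and its 10-second default are assumptions, shown to illustrate backing off from 1-second polling and pausing while the tab is hidden:

```ts
import { useEffect } from 'react';

// Hypothetical hook: poll at a gentler interval and skip ticks while the tab is hidden.
// refresh() stands in for whatever refetches the vendors table data.
export function usePolitePolling(refresh: () => void, active: boolean, intervalMs = 10_000) {
  useEffect(() => {
    if (!active) return;
    const id = setInterval(() => {
      if (document.visibilityState === 'visible') refresh();
    }, intervalMs);
    return () => clearInterval(id);
  }, [refresh, active, intervalMs]);
}
```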

🟡 apps/app/src/app/(app)/[orgId]/vendors/(overview)/page.tsx (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Unvalidated orgId used in DB queries (getVendors/getAssignees/db.onboarding) | MEDIUM |
| 2 | Possible SQL injection if query helpers use raw params from parsedSearchParams | MEDIUM |
| 3 | Potential stored XSS from vendorsResult.data rendered in VendorsTable | MEDIUM |
| 4 | No explicit authorization check that orgId belongs to requester | MEDIUM |

Recommendations:

  1. Validate orgId (e.g., UUID check) before using in DB queries
  2. Use ORM parameterization/prepared statements; avoid raw SQL with concatenation
  3. Ensure getVendors/getAssignees validate and sanitize incoming params
  4. Escape or encode vendor fields on render to prevent stored XSS
  5. Enforce authorization: verify requester may access the given orgId

🔴 apps/app/src/app/api/policies/[policyId]/chat/route.ts (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Unvalidated policyId used in DB query (SQL injection risk) | HIGH |
| 2 | No validation of POST message body before processing | HIGH |
| 3 | Missing role/permission checks for policy edits (insufficient authz) | HIGH |
| 4 | No CSRF protection on POST endpoint (session-based auth) | HIGH |
| 5 | No request size or rate limiting for model input (DoS/resource risk) | HIGH |

Recommendations:

  1. Validate and type-check policyId before using in DB queries. Use a schema validator (e.g., zod) to assert format (UUID or internal ID shape). Even when using an ORM, validate inputs early and avoid constructing raw queries with user input. Example: z.string().uuid().parse(policyId) or deny if it fails.
  2. Strictly validate and sanitize the POST body (messages). Define a schema for messages (max count, max message length, required fields) and enforce it before calling convertToModelMessages/streamText. Reject or truncate oversized inputs and log attempts.
  3. Enforce fine-grained authorization: check the member's role/permissions (e.g., owner/admin/editor) before allowing editing operations. Currently code only verifies membership (db.member.findFirst) but does not check edit rights. Add role checks or RBAC lookup and return 403 if unauthorized.
  4. Mitigate CSRF for session-based endpoints: require a same-site/secure cookie configuration and/or implement anti-CSRF tokens or require a custom header (e.g., X-Requested-With or an Authorization Bearer token). Also validate Origin/Referer for state-changing POST requests where appropriate.
  5. Add request size limits and rate limiting: enforce max payload size (e.g., body parser limit or middleware), limit number of messages and characters per message, and apply per-user and per-organization rate limits (token bucket, Redis-backed limiter). Also enforce model token limits and timeouts (maxDuration is present but ensure it's enforced server-side).
  6. When interacting with the database, prefer parameterized ORM queries (which most ORMs do) and avoid building raw SQL with concatenated inputs. If raw queries are necessary, use parameter binding.
  7. Log and monitor anomalous inputs and auth failures. Add alerts for repeated large requests or repeated authz failures to detect abuse early.
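
A sketch of recommendations 1-3 with zod; the message shape, limits, id pattern, and role names are assumptions to be aligned with the route's real types and RBAC model:

```ts
import { z } from 'zod';

// Assumed shapes and limits; adjust to the AI SDK's actual message type.
const policyIdSchema = z.string().regex(/^[a-zA-Z0-9_-]{1,64}$/);

const bodySchema = z.object({
  messages: z
    .array(
      z.object({
        role: z.enum(['user', 'assistant']),
        content: z.string().max(10_000),
      }),
    )
    .max(50),
});

const EDIT_ROLES = new Set(['owner', 'admin', 'editor']); // assumed role names

export function validateChatRequest(policyId: string, rawBody: unknown, memberRole: string) {
  const id = policyIdSchema.parse(policyId);
  const body = bodySchema.parse(rawBody); // throws on oversized or malformed input
  if (!EDIT_ROLES.has(memberRole)) {
    // The route handler should translate this into a 403 response.
    throw new Error('Forbidden: insufficient role for policy edits');
  }
  return { policyId: id, messages: body.messages };
}
```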

🟡 apps/portal/src/hooks/use-update-policy.ts (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Missing validation for policyId, organizationId and content before sending | MEDIUM |
| 2 | Unvalidated rich content may lead to stored XSS if server echoes it | MEDIUM |
| 3 | policyId in URL can enable Insecure Direct Object Reference (IDOR) | MEDIUM |
| 4 | No CSRF protection if API relies on cookie-based auth | MEDIUM |
| 5 | No explicit auth header; relies on ambient cookies/session | MEDIUM |

Recommendations:

  1. Perform strict server-side validation for policyId (e.g., verify format/UUID), organizationId, and content. Never trust client-side checks alone.
  2. Sanitize or canonicalize rich text on the server before storing or rendering. Consider using a well-maintained sanitizer for the editor output or store a safe representation (strip script/style tags, disallow event handlers) and apply a Content-Security-Policy on pages that render user content.
  3. Enforce server-side authorization checks for every access and mutation of policies (ensure the authenticated user is allowed to access/modify the given policyId). Do not rely on obscurity of IDs in URLs.
  4. If the API uses cookie-based authentication, implement CSRF protections server-side: use anti-CSRF tokens, SameSite=strict/lax cookies as appropriate, and/or require custom request headers (with server verification). If using fetch with cookies, be explicit about credentials and CSRF handling.
  5. Consider using explicit authentication (e.g., Bearer tokens in Authorization header) when appropriate, or ensure session cookie handling is secure (HttpOnly, Secure, SameSite) and backed up by strict server-side auth checks.
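
A client-side sketch of recommendations 4 and 5; the endpoint path, header names, and token source are assumptions, and the server must still verify the token and the session:

```ts
// Hypothetical mutation helper: explicit credentials handling plus a CSRF header.
export async function updatePolicyRequest(
  policyId: string,
  organizationId: string,
  content: unknown,
  csrfToken: string,
) {
  const res = await fetch(`/api/v1/policies/${encodeURIComponent(policyId)}`, {
    method: 'PATCH',
    credentials: 'same-origin', // be explicit instead of relying on ambient defaults
    headers: {
      'Content-Type': 'application/json',
      'X-Organization-Id': organizationId,
      'X-CSRF-Token': csrfToken, // assumed header name; must be validated server-side
    },
    body: JSON.stringify({ content }),
  });
  if (!res.ok) {
    throw new Error(`Update failed with status ${res.status}`);
  }
  return res.json();
}
```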

🟡 packages/docs/openapi.json (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Trusts X-Organization-Id header for session auth | MEDIUM |
| 2 | Session auth not defined in security schemes | MEDIUM |
| 3 | PATCH lacks explicit security requirement | MEDIUM |
| 4 | metadata typed as raw JSON string enabling unsafe parsing | MEDIUM |
| 5 | No input validation (formats/regex/min/max) for fields | MEDIUM |
| 6 | URLs (logo, website) lack uri format validation | MEDIUM |
| 7 | User inputs (headers/body) may enable SQL injection if unsanitized | MEDIUM |
| 8 | Fields lack size limits allowing oversized payloads | MEDIUM |

Recommendations:

  1. Do NOT rely on X-Organization-Id header alone for authorization. Server must verify session cookies and that the session belongs to the requested organization. Treat header values as untrusted input.
  2. Add an explicit session/cookie security scheme to components.securitySchemes (e.g., type: http, scheme: bearer or cookie-based scheme) and reference it in operations that support session auth.
  3. Ensure every state-changing operation (PATCH/POST/DELETE) includes an explicit security requirement in the OpenAPI spec and verify server-side enforcement of auth checks for each endpoint.
  4. Change metadata typed as string to a structured object (or explicit JSON schema) or add strict validation rules and require parsing with safe JSON parsers. Reject or validate arbitrary JSON before use.
  5. Add precise validation for inputs: formats (email, uri), regex where appropriate, numeric min/max, and maxLength for strings. Use OpenAPI schema constraints (maxLength, minLength, pattern, format, maximum, minimum) to document and enforce expectations.
  6. Set format: "uri" (or "url") on fields that should be URLs (logo, website) and validate on the server. Consider further validation (allowed schemes, host allowlist) if needed.
  7. Treat all user-supplied values (req.body, req.query, req.params, headers) as untrusted. Ensure backend uses parameterized queries/ORMs and never concatenates raw inputs into SQL or shell commands. Add input sanitization and DB-layer parameterization.
  8. Enforce size limits and rate/size controls: add maxLength for string fields, maximum array/item counts, and server-side limits for request body size to prevent oversized payload and DoS vectors.

🔴 packages/ui/src/components/ai-elements/message.tsx (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Possible XSS from Streamdown rendering untrusted message content | HIGH |
| 2 | Unvalidated attachment URLs can leak user IPs or load malicious resources | HIGH |
| 3 | User-supplied filenames/tooltips rendered without sanitization may inject UI | HIGH |

Recommendations:

  1. Sanitize/escape message content before rendering with Streamdown. If Streamdown/renderer supports an option to disable raw HTML, enable it; otherwise run content through a vetted sanitizer (e.g. DOMPurify) or use a markdown pipeline that strips/escapes HTML (rehype-sanitize, remark with safe-html disabled).
  2. If you must allow HTML/markdown from users, adopt a strict allowlist policy for tags/attributes and remove event handlers/JS URLs. Add unit/DOM tests to ensure sanitization covers edge cases (e.g. SVG with scripts, data: URIs).
  3. Validate and whitelist attachment URLs and schemes. Only allow https:// (or internal blob URLs you control). Disallow data: and javascript: schemes. For untrusted external URLs, proxy the resource via your server (so user IPs are not leaked to third parties) or host uploads internally.
  4. For image attachments consider forcing downloads through a safe proxy or fetching/validating content server-side (validate media type, strip scripts from SVGs) before exposing a URL to the client.
  5. Add referrerPolicy="no-referrer" and crossOrigin="anonymous" on image elements where appropriate to reduce metadata leakage; however, this does not hide the client IP, so proxying is still required to prevent IP leakage.
  6. Apply a strong Content Security Policy (CSP) on the app (e.g. default-src 'self'; img-src 'self' https:; object-src 'none'; script-src 'self' 'nonce-...') to limit the impact of any injected content.
  7. Although React escapes interpolated strings (so filenames/tooltips rendered as children are not raw HTML), normalize and sanitize filenames to remove control characters and suspicious sequences; avoid dangerouslySetInnerHTML unless content is sanitized.
  8. Avoid trusting user-supplied React keys. Ensure keys for dynamic children are stable and generated server/client-side (e.g., use stable IDs), or fall back to index-based keys only when appropriate.
  9. Document and test the threat model for all user-supplied inputs (message bodies, filenames, attachment URLs) and run automated security tests (SAST/DAST) against the rendering components.
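
A sketch of recommendations 3 and 4; the scheme allowlist and the /api/proxy-image endpoint are assumptions illustrating scheme filtering and proxying of external images:

```ts
// Only https URLs are rendered; everything else (javascript:, data:, etc.) is dropped.
const ALLOWED_PROTOCOLS = new Set(['https:']);

export function safeAttachmentUrl(raw: string): string | null {
  let url: URL;
  try {
    url = new URL(raw, window.location.origin);
  } catch {
    return null; // unparseable URL: drop the attachment
  }
  if (!ALLOWED_PROTOCOLS.has(url.protocol)) {
    return null;
  }
  // Route external images through a server-side proxy (hypothetical endpoint) so
  // the viewer's IP address is not exposed to third-party hosts.
  return url.origin === window.location.origin
    ? url.toString()
    : `/api/proxy-image?url=${encodeURIComponent(url.toString())}`;
}
```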

💡 Recommendations

3 recommendation(s):
  1. Upgrade vulnerable packages: update xlsx (0.18.5) to a patched release addressing GHSA-4r6h-8v6p-xvw6 and GHSA-5pgg-2g8v-p4x9, and update ai to >=5.0.52 to address GHSA-rwvc-j5jr-mgvh.
  2. Add strict runtime validation for identifiers and route params before DB use (e.g., zod or class-validator): enforce formats for policyId/orgId (uuid or allowed slug), and validate request bodies (messages/searchParams) in endpoints such as apps/api/src/policies/*, apps/app/src/app/api/policies/[policyId]/chat/route.ts, apps/app/src/app/(app)/[orgId]/risk/.../getRisks.ts.
  3. Avoid raw/concatenated SQL and unparameterized queries: audit places noted (policies.controller.ts, getRisks.ts, get-vendors-action.ts and related data access code) and convert any raw query construction to parameterized ORM calls or Prisma parameter binding; also whitelist allowed sort/filter field names before building query objects.

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 25, 2025


vercel bot commented Nov 25, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

2 Skipped Deployments
| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| app (staging) | Skipped | Skipped | | Nov 25, 2025 9:18pm |
| portal (staging) | Skipped | Skipped | | Nov 25, 2025 9:18pm |


CLAassistant commented Nov 25, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ Marfuen
❌ github-actions[bot]
You have signed the CLA already but the status is still pending? Let us recheck it.

* refactor(risk): update getRisks and getAssignees functions to accept orgId

* chore(policy-editor): gate policy ai assistant behind feature flag

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
vercel bot temporarily deployed to staging – portal November 25, 2025 21:18 Inactive
vercel bot temporarily deployed to staging – app November 25, 2025 21:18 Inactive
Marfuen merged commit 56dfd67 into release Nov 25, 2025
8 of 9 checks passed
@claudfuen
Contributor

🎉 This PR is included in version 1.64.1 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
