
Conversation

@github-actions (Contributor) commented:

This is an automated pull request to release the candidate branch into production, which will trigger a deployment.
It was created by the [Production PR] action.

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>

vercel bot commented Nov 20, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

2 Skipped Deployments
| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| app (staging) | Skipped | Skipped | | Nov 20, 2025 10:43pm |
| portal (staging) | Skipped | Skipped | | Nov 20, 2025 10:43pm |


comp-ai-code-review bot commented Nov 20, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

3 OSV/npm CVEs found (2 high, 1 low). Code paths accept or store HTML/SVG and raw metadata — risk of stored XSS and unsafe file handling.


📦 Dependency Vulnerabilities

🟠 NPM Packages (HIGH)

Risk Score: 8/10 | Summary: 2 high, 1 low CVEs found

| Package | Version | CVE | Severity | CVSS | Summary | Fixed In |
| --- | --- | --- | --- | --- | --- | --- |
| xlsx | 0.18.5 | GHSA-4r6h-8v6p-xvw6 | HIGH | N/A | Prototype Pollution in sheetJS | No fix yet |
| xlsx | 0.18.5 | GHSA-5pgg-2g8v-p4x9 | HIGH | N/A | SheetJS Regular Expression Denial of Service (ReDoS) | No fix yet |
| ai | 5.0.0 | GHSA-rwvc-j5jr-mgvh | LOW | N/A | Vercel's AI SDK's filetype whitelists can be bypassed when uploading files | 5.0.52 |

🛡️ Code Security Analysis

View 6 file(s) with issues

🔴 apps/api/src/attachments/attachments.service.ts (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Relies on client-supplied MIME/extension for blocking | HIGH |
| 2 | Blocked lists incomplete (HTML/SVG/XHTML not blocked) | HIGH |
| 3 | No server-side content validation (magic bytes) | HIGH |
| 4 | No malware/virus scanning before S3 upload | HIGH |
| 5 | Potential stored XSS via HTML/SVG or HTML-like payloads | HIGH |
| 6 | Base64 decoded before size limit check — OOM risk | HIGH |
| 7 | S3 uploads lack SSE/ACL settings — potential data exposure | HIGH |
| 8 | No Content-Disposition header enforced — inline execution risk | HIGH |
| 9 | S3 metadata includes user input; sanitization unclear | HIGH |
| 10 | Service methods lack explicit per-request auth checks | HIGH |
| 11 | Detailed errors logged to console may leak info | HIGH |

Recommendations:

  1. Do not rely solely on client-supplied Content-Type or filename extension. Validate file contents server-side using magic bytes/signatures for accepted types. See the sketch after this list.
  2. Add explicit deny-listing for dangerous document types (e.g., .html, .htm, .xhtml, .svg) or treat them as unsafe by default. Consider sanitizing or converting such files before serving.
  3. Integrate malware scanning (e.g., ClamAV, commercial cloud scanning) into the upload pipeline before writing to S3 or creating DB records.
  4. Avoid decoding large base64 payloads into memory before validating size. Validate base64 length/headers or accept streamed uploads (multipart/form-data) and enforce size limits on the stream to prevent OOM.
  5. Enforce encryption and restrictive object ACLs on upload: set ServerSideEncryption (SSE) and avoid public ACLs. Configure bucket policies to require SSE and least privilege.
  6. Set Content-Disposition: attachment (or controlled disposition) on S3 objects when generating downloads to reduce inline rendering/execution risk; consider setting ResponseContentDisposition on signed URLs or storing ContentDisposition metadata.
  7. Ensure S3 metadata and object keys are strictly sanitized and length-limited. Current sanitizeHeaderValue and sanitizeFileName help, but enforce maximum lengths and a whitelist of allowed characters.
  8. Ensure per-request authorization is enforced before any DB or S3 access. If controllers/guards perform checks, document and verify that every call path to this service is protected; otherwise add explicit checks here.
  9. Log minimal error information in production. Avoid logging full error objects/stack traces to console; use structured logging with appropriate redaction and different logging levels.
  10. Consider serving potentially executable files from a separate, restricted bucket or using a conversion/safe-preview service for user-uploaded documents that may contain active content.
  11. If possible, stream uploads directly to S3 via pre-signed POSTs with server-side validation hooks, or use multipart streaming with server-side checks so large uploads don't consume application memory.
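
A minimal sketch of recommendations 1, 5, and 6 above, assuming AWS SDK v3 and the file-type package. The helper name uploadValidatedAttachment, the allowlist, and the size limit are illustrative assumptions, not code from this PR:

```typescript
// Sketch only: detect the real type from magic bytes, then upload with
// encryption at rest and a download-only disposition. Names, the allowlist,
// and the size limit are assumptions, not values taken from this PR.
import { PutObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { fileTypeFromBuffer } from 'file-type';

const ALLOWED_MIME_TYPES = new Set(['application/pdf', 'image/png', 'image/jpeg']);
const MAX_BYTES = 10 * 1024 * 1024; // in real code, enforce before/while decoding

const s3 = new S3Client({});

export async function uploadValidatedAttachment(
  bucket: string,
  key: string,
  body: Buffer,
): Promise<void> {
  if (body.byteLength > MAX_BYTES) {
    throw new Error('File too large');
  }

  // Ignore the client-declared MIME type; inspect the bytes themselves.
  const detected = await fileTypeFromBuffer(body);
  if (!detected || !ALLOWED_MIME_TYPES.has(detected.mime)) {
    throw new Error('File content does not match an allowed type');
  }

  await s3.send(
    new PutObjectCommand({
      Bucket: bucket,
      Key: key,
      Body: body,
      ContentType: detected.mime,       // server-detected type
      ContentDisposition: 'attachment', // discourage inline rendering
      ServerSideEncryption: 'AES256',   // encrypt at rest
    }),
  );
}
```

Magic-byte detection does not replace malware scanning, streamed size limits, or per-request authorization; those still need to be layered in per recommendations 3, 4, 10, and 11.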

🟡 apps/api/src/attachments/upload-attachment.dto.ts (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Blocked MIME list defined but not enforced | MEDIUM |
| 2 | Client-supplied MIME trusted; no magic-byte validation | MEDIUM |
| 3 | Base64 payload has no size limit; potential DoS | MEDIUM |
| 4 | fileName not sanitized; path traversal or unsafe chars allowed | MEDIUM |
| 5 | No antivirus/malware scanning of uploaded content | MEDIUM |

Recommendations:

  1. Enforce BLOCKED_MIME_TYPES: explicitly check fileType against the blocked list on upload and reject matches before persisting or processing. See the sketch after this list.
  2. Validate content magic bytes: decode a safe prefix of fileData and verify the file signature (magic bytes) matches the declared MIME/type; do not trust client-supplied MIME alone.
  3. Limit upload size: enforce a maximum base64 length and a maximum decoded byte size; reject or stream large uploads and set server-side limits to prevent DoS.
  4. Sanitize/whitelist filenames: normalize and validate fileName (reject path separators, .., control chars); consider generating server-side safe filenames/IDs and storing original name separately.
  5. Integrate scanning: run uploaded content through antivirus/malware scanning (or sandbox) before making files available or storing persistently.
  6. Perform server-side validation/enforcement in the upload handler (the DTO only declares structure/validation rules; ensure runtime checks are implemented in the controller/service).
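
A minimal sketch of recommendations 1, 3, and 4 at the DTO layer, assuming class-validator (as is typical for NestJS DTOs). The field names mirror those described above; the blocked list shown here and the size cap are illustrative assumptions:

```typescript
// Sketch only: declarative guards that reject blocked MIME types, oversized
// base64 payloads, and unsafe file names before the service layer runs.
import { IsBase64, IsNotIn, IsString, Matches, MaxLength } from 'class-validator';

// Assumed to correspond to the existing BLOCKED_MIME_TYPES constant.
const BLOCKED_MIME_TYPES = [
  'text/html',
  'application/xhtml+xml',
  'image/svg+xml',
  'application/javascript',
];

// Roughly 10 MB of raw bytes after the ~4/3 base64 expansion.
const MAX_BASE64_LENGTH = 14_000_000;

export class UploadAttachmentDto {
  @IsString()
  @MaxLength(255)
  @Matches(/^[\w.\- ]+$/, { message: 'fileName contains unsafe characters' })
  fileName!: string;

  @IsString()
  @IsNotIn(BLOCKED_MIME_TYPES)
  fileType!: string;

  @IsBase64()
  @MaxLength(MAX_BASE64_LENGTH)
  fileData!: string;
}
```

As recommendation 6 notes, DTO validation only checks shape; magic-byte inspection, decoding limits, and scanning still belong in the controller/service.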

🔴 apps/api/src/tasks/attachments.service.ts (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Client-supplied MIME/extension checks only; no content scanning | HIGH |
| 2 | No authorization checks in service methods | HIGH |
| 3 | Entire file buffered in memory; DoS/OOM risk | HIGH |
| 4 | sanitizeFileName implementation missing/unspecified | HIGH |
| 5 | S3 metadata stores identifiers that may leak sensitive data | HIGH |
| 6 | Signed URLs issued to clients can be shared externally | HIGH |
| 7 | console.error logging may leak sensitive information | HIGH |
| 8 | No transactional rollback if S3 delete succeeds but DB delete fails | HIGH |
| 9 | Content-type spoofing can allow stored XSS when served | HIGH |

Recommendations:

  1. Perform server-side content inspection (e.g., antivirus/AV scanning, file magic number checks) before storing the object. Do not rely solely on client-supplied MIME type/extension.
  2. Enforce authorization checks at the service layer (or earlier) to ensure the caller is permitted to act on the given organizationId/entityId. Verify user context and permissions.
  3. Avoid buffering entire files in memory for upload. Stream the upload (e.g., multipart upload, stream to S3) or enforce strict size limits and resource controls to mitigate OOM/DoS.
  4. Ensure filename sanitization both normalizes and enforces length limits and disallows path traversal. Consider additionally dropping user-controlled names from the S3 key and storing the original name only in DB/metadata if needed.
  5. Avoid storing sensitive identifiers in S3 object metadata. If metadata is required, minimize sensitive values and/or encrypt metadata or store sensitive mapping in a protected database only.
  6. Consider issuing download URLs only after an authenticated server-side check or proxying downloads through an authenticated endpoint. Shorten signed URL lifetime when possible and consider revocation patterns (e.g., store object versioning and rotate keys). See the sketch after this list.
  7. Replace console.error with structured logging that redacts sensitive fields. Ensure logs do not contain full errors or secrets and use a centralized, access-controlled logging system.
  8. Use transactions/compensating actions: either delete DB record first (and only delete S3 if DB delete succeeds) or implement a reliable compensation (e.g., retry DB delete, mark for async cleanup, or re-upload on failure). Consider distributed transaction patterns or idempotent cleanup jobs.
  9. Do not trust client-provided Content-Type. Validate content, set restrictive Content-Disposition or Content-Type on S3 objects based on server-side detection, and serve downloadable content with 'Content-Disposition: attachment' when appropriate to avoid direct execution/rendering in browsers.
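
A minimal sketch of recommendations 6 and 9, assuming the AWS SDK v3 presigner; the helper name buildDownloadUrl and the 60-second lifetime are illustrative assumptions:

```typescript
// Sketch only: short-lived signed URL that forces an attachment disposition
// with a server-chosen file name instead of whatever the client uploaded.
import { GetObjectCommand, S3Client } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const s3 = new S3Client({});

export async function buildDownloadUrl(
  bucket: string,
  key: string,
  safeFileName: string,
): Promise<string> {
  const command = new GetObjectCommand({
    Bucket: bucket,
    Key: key,
    // Force download rather than inline rendering in the browser.
    ResponseContentDisposition: `attachment; filename="${safeFileName}"`,
  });
  // Keep the lifetime short and re-check authorization every time a URL is issued.
  return getSignedUrl(s3, command, { expiresIn: 60 });
}
```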

🔴 apps/api/src/tasks/dto/upload-attachment.dto.ts (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | BLOCKED_MIME_TYPES constant is defined but never used | HIGH |
| 2 | No enforced MIME whitelist or blocked-list check for fileType | HIGH |
| 3 | fileData not validated against file signature (magic bytes) | HIGH |
| 4 | fileName not sanitized; path traversal or special chars possible | HIGH |
| 5 | No size limits on fileData; large base64 can cause DoS/outage | HIGH |
| 6 | No antivirus/malware scanning of uploaded content | HIGH |
| 7 | SVG (image/svg+xml) and other script-capable types not restricted | HIGH |

Recommendations:

  1. Apply a server-side MIME allowlist or use the existing BLOCKED_MIME_TYPES: explicitly reject fileType values that match blocked types and/or only accept a narrow allowlist (e.g., application/pdf, image/png, image/jpeg). Do not trust client-provided MIME type alone.
  2. Verify the actual decoded bytes by checking file signature (magic bytes) or using a robust detector (e.g., npm 'file-type' or 'mmmagic') and base acceptance on detected type rather than the client-supplied fileType.
  3. Sanitize and normalize fileName: reject path traversal characters (/, \), control characters, and sequences like ../; restrict to a safe charset (alphanumerics, dots, dashes, underscores) and enforce the 255 char limit server-side. Generate a server-side storage filename (or UUID) and store the original name only as metadata. See the sketch after this list.
  4. Enforce strict size limits on both the Base64 string length and the decoded byte size. Prefer streaming uploads and server-side size checks to avoid memory bloat. Reject payloads that exceed configured limits.
  5. Integrate malware/antivirus scanning (e.g., ClamAV/clamd scanning, commercial scanning services) for uploaded content prior to persistent storage or processing.
  6. Explicitly block or sanitize script-capable types (e.g., image/svg+xml, text/html, application/javascript). For SVGs, either disallow or sanitize (remove scripts, inline event handlers) using a trusted sanitizer before storing/serving.
  7. Do not rely on DTO-level validation alone for security: enforce all checks in the upload handling code (controller/service) where you can decode Base64, inspect bytes, run scanners, and apply size/filename restrictions.
  8. Set safe serving policies: store files outside the webroot, serve via signed URLs or through a service that sets Content-Disposition: attachment and a safe Content-Type based on server-side detection, and ensure files are not served with executable extensions or as HTML.
  9. Either remove the unused BLOCKED_MIME_TYPES constant or implement it. Unused security-related constants are confusing and may indicate incomplete implementation.
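
A minimal sketch of recommendation 3: normalize the client-supplied name for display and derive the storage key server-side. The helper name buildStorageKey and the key layout are illustrative assumptions:

```typescript
// Sketch only: never build the S3 key from user input; keep a sanitized
// display name separately in the database.
import { randomUUID } from 'node:crypto';
import { extname } from 'node:path';

export function buildStorageKey(
  orgId: string,
  originalName: string,
): { key: string; displayName: string } {
  // Drop any directory components, then whitelist characters and cap length.
  const base = originalName.split(/[\\/]/).pop() || 'file';
  const displayName = base.replace(/[^\w.\- ]+/g, '_').slice(0, 255);

  // Keep only a vetted extension; the key itself is a server-generated UUID.
  const ext = extname(displayName).toLowerCase().replace(/[^a-z0-9.]/g, '');
  const key = `${orgId}/${randomUUID()}${ext}`;

  return { key, displayName };
}
```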

🟡 apps/app/src/app/(app)/[orgId]/tasks/[taskId]/components/TaskBody.tsx (MEDIUM Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | Client-side only file validation (extension/size) — can be bypassed | MEDIUM |
| 2 | No MIME type or magic-byte validation of uploads (client-side) — server must enforce | MEDIUM |
| 3 | window.open(downloadUrl) without noopener/noreferrer — reverse tabnabbing risk | MEDIUM |
| 4 | Unvalidated downloadUrl may open attacker-controlled URLs (no client-side validation of returned URL) | MEDIUM |
| 5 | Displays raw error messages to users — potential information disclosure (internal error text can leak implementation ... | MEDIUM |
| 6 | No server-side upload quotas/rate-limiting or virus scanning — resource exhaustion / malware risk | MEDIUM |

Recommendations:

  1. Enforce validation and scanning on the server for all uploaded files: extension whitelist, MIME-type checks, and magic-byte inspection; reject anything suspicious regardless of client-side checks.
  2. Do not rely on client-side extension/size checks; treat them as UX conveniences only.
  3. Ensure getDownloadUrl returns signed, single-use, same-origin (or otherwise restricted) URLs. Validate URLs on the client if needed and prefer serving files through a safe download endpoint under your control.
  4. When opening external links in a new tab, mitigate reverse tabnabbing: use a safe mechanism (e.g., create an anchor element with target='_blank' and rel='noopener noreferrer', or call window.open with the 'noopener' feature) or otherwise ensure the opened page cannot access window.opener. See the sketch after this list.
  5. Avoid showing raw internal error messages to end users. Log detailed errors server-side, and surface only user-friendly, non-sensitive messages in the UI. Sanitize any user-supplied values that may be reflected back.
  6. Implement server-side quotas and rate-limiting per user/account and global limits. Integrate antivirus/malware scanning and content-disposition/content-security protections for stored files.
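
A minimal sketch of recommendations 3 and 4 on the client side, assuming the URL comes back from getDownloadUrl as described above; the checks shown are illustrative and do not replace server-side controls:

```typescript
// Sketch only: validate the returned URL and open it without exposing
// window.opener to the new tab.
function openDownload(downloadUrl: string): void {
  let parsed: URL;
  try {
    parsed = new URL(downloadUrl, window.location.origin);
  } catch {
    return; // ignore malformed URLs returned by the API
  }
  if (parsed.protocol !== 'https:') {
    return; // refuse non-HTTPS targets
  }
  // 'noopener' prevents the opened page from reaching back via window.opener.
  window.open(parsed.toString(), '_blank', 'noopener,noreferrer');
}
```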

🔴 packages/docs/openapi.json (HIGH Risk)

| # | Issue | Risk Level |
| --- | --- | --- |
| 1 | IDOR via X-Organization-Id header (clients can supply arbitrary org IDs) | HIGH |
| 2 | Inconsistent/missing security declaration for session auth (auth bypass risk) | HIGH |
| 3 | Patchable sensitive fields (hasAccess, isFleetSetupCompleted, fleetDmLabelId) allow privilege escalation | HIGH |
| 4 | metadata stored as raw JSON string — risk of stored XSS or injection if unsanitized | HIGH |
| 5 | No input validation/whitelisting for URL fields (logo, website) — SSRF/XSS risk | HIGH |
| 6 | API key auth lacks scopes/roles in spec — weak authorization granularity | HIGH |

Recommendations:

  1. Enforce server-side authorization: never trust client-supplied X-Organization-Id for access control. Validate that the authenticated session/API key is authorized for the requested organization and ignore or override client-supplied org identifiers where appropriate. See the sketch after this list.
  2. Explicitly model and enforce session-based authentication in the OpenAPI spec (e.g., cookie or bearer scheme) and require security declarations for all endpoints that support session auth so implementers cannot accidentally omit checks.
  3. Restrict patchable fields. Disallow direct client updates to high‑risk properties (hasAccess, isFleetSetupCompleted, fleetDmLabelId) or require elevated admin scopes/consent and additional server-side checks/auditing for those changes.
  4. Treat metadata as structured data on the server side: validate JSON schema, strip unsafe content, and apply output encoding/escaping before rendering. If stored as a raw string, ensure strict sanitization and content-type handling when served to clients.
  5. Add explicit validation for URL fields (logo, website) — require URL format, enforce allowed schemes/hosts when fetching external resources, and implement SSRF protections (e.g., deny private IPs, use allowlists for domains).
  6. Enhance API key model with scopes/roles and document them in the OpenAPI spec. Enforce scope checks server-side to provide fine-grained authorization rather than a single all-powerful API key.
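
A minimal sketch of recommendation 1 as a NestJS-style guard; the MembershipService interface and its isMemberOfOrg lookup are assumptions, not code that exists in this PR:

```typescript
// Sketch only: the X-Organization-Id header is treated as a hint; access is
// decided by a server-side membership check for the authenticated principal.
import {
  CanActivate,
  ExecutionContext,
  ForbiddenException,
  Injectable,
} from '@nestjs/common';

interface MembershipService {
  isMemberOfOrg(userId: string, orgId: string): Promise<boolean>;
}

@Injectable()
export class OrganizationAccessGuard implements CanActivate {
  constructor(private readonly memberships: MembershipService) {}

  async canActivate(context: ExecutionContext): Promise<boolean> {
    const request = context.switchToHttp().getRequest();
    const requestedOrgId = request.headers['x-organization-id'];
    const user = request.user; // populated by the authentication layer

    if (!user || typeof requestedOrgId !== 'string') {
      throw new ForbiddenException('Missing authenticated user or organization');
    }
    if (!(await this.memberships.isMemberOfOrg(user.id, requestedOrgId))) {
      throw new ForbiddenException('Not a member of the requested organization');
    }
    return true;
  }
}
```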

💡 Recommendations

View 3 recommendation(s)
  1. Upgrade vulnerable packages: update 'ai' to >=5.0.52 (per advisory) and update 'xlsx' to a patched release that resolves GHSA-4r6h-8v6p-xvw6 and GHSA-5pgg-2g8v-p4x9; re-scan to confirm fixes.
  2. Server-side reject/sanitize script-capable uploads: in apps/api/src/attachments/attachments.service.ts and upload DTOs (apps/api/src/attachments/upload-attachment.dto.ts and apps/api/src/tasks/*), detect actual file type using magic-byte detection (e.g., file-type) on a safe prefix and explicitly reject or sanitize image/svg+xml, text/html, application/xhtml+xml before decoding/storing.
  3. Prevent stored XSS in metadata: in packages/docs/openapi.json endpoints and any persistence code, parse/validate metadata as structured JSON, whitelist allowed fields/types, strip or HTML-escape any values that could contain markup, and avoid storing raw HTML strings. See the sketch after this list.
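
A minimal sketch of recommendation 3, assuming metadata arrives as a raw JSON string; the allowed keys and the length cap are illustrative assumptions:

```typescript
// Sketch only: parse metadata as structured data, keep only allowlisted
// string fields, and refuse values that look like markup.
const ALLOWED_METADATA_KEYS = new Set(['source', 'category', 'notes']);
const MAX_VALUE_LENGTH = 500;

export function sanitizeMetadata(raw: string): Record<string, string> {
  const parsed: unknown = JSON.parse(raw); // throws on malformed input
  if (typeof parsed !== 'object' || parsed === null || Array.isArray(parsed)) {
    throw new Error('metadata must be a JSON object');
  }

  const clean: Record<string, string> = {};
  for (const [key, value] of Object.entries(parsed)) {
    if (!ALLOWED_METADATA_KEYS.has(key)) continue; // drop unknown keys
    if (typeof value !== 'string') continue;       // strings only
    if (/[<>]/.test(value)) continue;              // refuse markup-like values
    clean[key] = value.slice(0, MAX_VALUE_LENGTH);
  }
  return clean;
}
```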

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 20, 2025


CLAassistant commented Nov 20, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ Marfuen
❌ github-actions[bot]
You have signed the CLA already but the status is still pending? Let us recheck it.

vercel bot temporarily deployed to staging – portal November 20, 2025 22:43 (Inactive)
vercel bot temporarily deployed to staging – app November 20, 2025 22:43 (Inactive)
@Marfuen merged commit a5d7908 into release on Nov 20, 2025
11 of 12 checks passed
@claudfuen (Contributor) commented:

🎉 This PR is included in version 1.61.0 🎉

The release is available as a GitHub release.

Your semantic-release bot 📦🚀
