
Conversation

@github-actions
Contributor

This is an automated pull request to release the candidate branch into production, which will trigger a deployment.
It was created by the [Production PR] action.

#1751)

* feat(security-questionnaire): add AI-powered questionnaire parsing and auto-answering functionality

* feat(frameworks): enhance FrameworksOverview with badge display and improve Security Questionnaire layout

- Update FrameworksOverview to conditionally render badges or initials based on availability.
- Refactor Security Questionnaire page to include a breadcrumb navigation and improve layout for better user experience.
- Enhance QuestionnaireParser with new alert dialog for exit confirmation and streamline question answering process.
- Improve UI components for better accessibility and responsiveness.

* refactor(security-questionnaire): improve QuestionnaireParser layout and styling

- Update search input styling for better visibility and responsiveness.
- Adjust layout of command bar components for improved user experience.
- Streamline button functionalities and ensure consistent styling across export options.

* feat(security-questionnaire): enhance UI and functionality of Security Questionnaire page

- Adjust padding and layout for improved responsiveness on the Security Questionnaire page.
- Update header styles for better visibility and consistency.
- Implement download functionality for questionnaire responses with enhanced user feedback.
- Refactor question and answer display for better organization and accessibility across devices.
- Improve button and input styling for a more cohesive user experience.

* feat(security-questionnaire): enhance QuestionnaireParser UI and functionality

- Update tab trigger styles for improved visibility and consistency.
- Refactor file upload and URL input sections for better user experience.
- Enhance dropzone component with clearer instructions and improved styling.
- Streamline action button layout and functionality for better accessibility.

* refactor(security-questionnaire): streamline action button layout in QuestionnaireParser

- Reorganize action button section for improved clarity and consistency.
- Maintain existing functionality while enhancing the overall UI structure.

* feat(security-questionnaire): implement feature flag checks for questionnaire access

- Add feature flag checks to control access to the AI vendor questionnaire.
- Remove FeatureFlagWrapper component and directly use QuestionnaireParser.
- Update header, sidebar, and mobile menu to conditionally render questionnaire options based on feature flag status.

* refactor(api): simplify run status retrieval logic in task status route

---------

Co-authored-by: Tofik Hasanov <annexcies@gmail.com>
Co-authored-by: Claudio Fuentes <imclaudfuen@gmail.com>
@comp-ai-code-review

comp-ai-code-review bot commented Nov 14, 2025

Comp AI - Code Vulnerability Scan

Analysis in progress...

Reviewing 30 file(s). This may take a few moments.


Powered by Comp AI - AI that handles compliance for you | Reviewed Nov 17, 2025, 09:07 PM

@vercel

vercel bot commented Nov 14, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Preview | Comments | Updated (UTC)
app (staging) | Ready | Preview | Comment | Nov 17, 2025 11:09pm

1 Skipped Deployment

Project | Deployment | Preview | Comments | Updated (UTC)
portal (staging) | Skipped | Skipped | | Nov 17, 2025 11:09pm

…improvements (#1752)

- Add automation activity feed and indicator components for task automation status
- Implement modern task list and category views for better task organization
- Introduce search input for filtering tasks
- Refactor task status handling and improve UI elements for better user experience
- Update task body and status selector components for enhanced functionality

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
@comp-ai-code-review

comp-ai-code-review bot commented Nov 14, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

OSV scan found 3 package CVEs (xlsx@0.18.5: two HIGH CVEs; ai@5.0.0: LOW CVE). Code contains a hardcoded token and multiple injection vectors (shell/header/SQL) in device-agent and context code.


📦 Dependency Vulnerabilities

🟠 NPM Packages (HIGH)

Risk Score: 8/10 | Summary: 2 high, 1 low CVEs found

Package | Version | CVE | Severity | CVSS | Summary | Fixed In
xlsx | 0.18.5 | GHSA-4r6h-8v6p-xvw6 | HIGH | N/A | Prototype Pollution in sheetJS | No fix yet
xlsx | 0.18.5 | GHSA-5pgg-2g8v-p4x9 | HIGH | N/A | SheetJS Regular Expression Denial of Service (ReDoS) | No fix yet
ai | 5.0.0 | GHSA-rwvc-j5jr-mgvh | LOW | N/A | Vercel’s AI SDK's filetype whitelists can be bypassed when uploading files | 5.0.52

🛡️ Code Security Analysis

View 15 file(s) with issues

🔴 .github/workflows/auto-pr-to-main.yml (HIGH Risk)

# Issue Risk Level
1 GITHUB_TOKEN has write perms to contents/pull-requests/issues HIGH
2 Workflow likely runs on self-hosted runner exposing secrets HIGH
3 continue-on-error:true masks failures and hides errors HIGH
4 Third-party action repo-sync/pull-request@v2 not pinned to commit HIGH
5 Triggers on many user branch patterns enabling automated PRs HIGH

Recommendations:

  1. Least-privilege GITHUB_TOKEN: restrict permissions to only what's required. If the workflow only needs to create PRs, remove 'contents: write' and 'issues: write' unless explicitly required. Explicitly set permissions at job level and avoid granting write to broad scopes.
  2. Self-hosted runner safety: if 'warp-ubuntu-latest-arm64-4x' is a self-hosted runner, avoid running workflows that expose secrets on untrusted runners. Use GitHub-hosted runners or ensure the self-hosted runner is fully trusted, isolated, and up-to-date. Consider restricting which branches/events run on self-hosted runners.
  3. Remove or limit continue-on-error: don't use continue-on-error: true at the job level. Fail fast or handle expected non-critical failures at specific steps so issues are visible and do not silently bypass required checks.
  4. Pin third-party actions: replace repo-sync/pull-request@v2 with a specific commit SHA (e.g., repo-sync/pull-request@<commit-sha>) to avoid remote supply-chain changes. Optionally add a verification step or use a vetted action maintained by your organization.
  5. Tighten triggers and approvals: reduce the broad branch globbing (especially user-specific patterns) or require an approval step/protected branch rules before automated merges. Consider limiting this workflow to trusted branches or using workflow_dispatch for manual control.

🟡 SELF_HOSTING.md (MEDIUM Risk)

# Issue Risk Level
1 Secrets stored in .env (risk of committing or host compromise) MEDIUM
2 DATABASE_URL with plaintext credentials can leak via logs or repo MEDIUM
3 AUTH_SECRET or BETTER_AUTH_SECRET missing/weak enables auth bypass MEDIUM
4 Resend/TRIGGER/OpenAI API keys may be exposed to clients if misused MEDIUM
5 NEXT_PUBLIC_* vars can leak sensitive info if misconfigured MEDIUM
6 Docker build args or env can bake secrets into image layers MEDIUM
7 No HTTPS/TLS enforcement; default http URLs are insecure MEDIUM
8 Postgres sslmode=require lacks server cert validation (use verify-full) MEDIUM
9 No default rate limiting; brute force and abuse risk MEDIUM
10 Migrator/seeder run with DB rights can alter production schema if mispointed MEDIUM
11 Long‑lived AWS keys in env increase blast radius if leaked MEDIUM

Recommendations:

  1. Do not store secrets in plaintext .env files in source trees. Use a secrets manager (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager) or Docker secrets and ensure .env is in .gitignore. Enforce pre-commit hooks to prevent committing secrets.
  2. Avoid embedding plaintext DB credentials in repos or logs. Use CI/CD/secret stores to inject DATABASE_URL at deploy/runtime. Prefer ephemeral DB credentials or IAM-based auth where supported.
  3. Require and enforce strong, distinct AUTH_SECRET and BETTER_AUTH_SECRET values. Validate presence at startup and fail-fast if missing or weak (e.g., length check). Rotate secrets periodically.
  4. Keep API keys (Resend, TRIGGER_SECRET_KEY, OPENAI_API_KEY, DUB_API_KEY, AWS keys) server-side only. Never put them into NEXT_PUBLIC_* env variables or client bundles. Audit code and build pipeline for accidental exposure.
  5. Treat NEXT_PUBLIC_* vars as public by design. Do not place sensitive values in any NEXT_PUBLIC_* variable. Move any sensitive config to server-only envs.
  6. Prevent secrets from being baked into images: do not ARG/ENV secrets at build time. If build-time secrets are required, use Docker BuildKit secret mounts or supply secrets at runtime. Scan images for embedded secrets.
  7. Enforce HTTPS in production. Use real domains, TLS certificates (Let’s Encrypt or managed certs), and configure secure cookies (Secure, SameSite) and HSTS headers.
  8. Use Postgres sslmode=verify-full (or equivalent) and provide a trusted CA to validate server certs. Document connection string requirements and consider enforcing Postgres host verification.
  9. Implement rate limiting and abuse controls (per-IP, per-account login attempts). Use Redis/WAF or API gateway rate limits, account lockouts and monitoring/alerting for brute-force patterns.
  10. Run migrator/seeder only against the intended target. Require an explicit production confirmation/flag and use least-privileged DB accounts for runtime services; reserve migration rights to a separate privileged account used only in controlled CI/CD steps.
  11. Avoid storing long-lived AWS keys in env. Use short-lived credentials (IAM roles, STS), instance/profile credentials, or a secrets manager. Regularly rotate keys and monitor usage; restrict S3 bucket policies to least privilege.
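
To make item 9 concrete, here is a minimal sketch of a fixed-window, per-IP limiter, assuming an Express-style middleware pipeline; the window size, limit, and in-memory store are illustrative placeholders, and a production setup would typically use a Redis-backed limiter or a WAF rule instead, since in-memory counters do not survive restarts or scale across instances.

```ts
// Hypothetical fixed-window, per-IP rate limiter (in-memory, single instance only).
// Assumes an Express-style (req, res, next) middleware signature.
import type { Request, Response, NextFunction } from 'express';

const WINDOW_MS = 60_000;  // 1-minute window (illustrative)
const MAX_REQUESTS = 100;  // allowed requests per window per IP (illustrative)

const counters = new Map<string, { count: number; windowStart: number }>();

export function rateLimit(req: Request, res: Response, next: NextFunction) {
  const ip = req.ip ?? 'unknown';
  const now = Date.now();
  const entry = counters.get(ip);

  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    // Start a fresh window for this IP.
    counters.set(ip, { count: 1, windowStart: now });
    return next();
  }

  entry.count += 1;
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).json({ message: 'Too many requests' });
  }
  next();
}
```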

🟡 apps/.cursor/rules/trigger.basic.mdc (MEDIUM Risk)

# Issue Risk Level
1 Hardcoded token in wait.forToken: "user-approval-token" MEDIUM
2 Missing input validation in tasks: process-data, parent-task, child-task, task-with-waits MEDIUM
3 Use of any[] for payload.data in process-data (untyped/unvalidated) MEDIUM
4 Logging potentially sensitive payloads (userId, data) to console MEDIUM

Recommendations:

  1. Remove hardcoded tokens. Load tokens/secrets from environment variables or a secrets manager (e.g., process.env.USER_APPROVAL_TOKEN or a dedicated secrets store). Rotate any exposed token and treat the current value as compromised until rotated.
  2. Validate all external task payloads using schemaTask or runtime validators (e.g., zod). Convert tasks that accept external data to use schemaTask with strict schemas to ensure expected shapes and types.
  3. Replace any[] with explicit types or enforce structure via schemas. For example, define an Item type or use z.array(z.object({...})) so payloads are strongly typed and validated at runtime.
  4. Avoid logging sensitive fields (user IDs, tokens, full payloads). Log only non-sensitive metadata (counts, ids hashed/pseudonymized) or redact sensitive fields before logging. Use structured logging with automatic redaction where possible.
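
A minimal sketch of recommendations 1 and 2 above, assuming the token is supplied through an environment variable (the name USER_APPROVAL_TOKEN and the payload shape are hypothetical, not the repository's actual task definitions):

```ts
// Sketch only: env-sourced token plus runtime payload validation with zod.
import { z } from 'zod';

// Assumption: the token is provided via environment/secret manager, not hardcoded.
const userApprovalToken = process.env.USER_APPROVAL_TOKEN;
if (!userApprovalToken) {
  throw new Error('USER_APPROVAL_TOKEN is not configured');
}

// Hypothetical payload schema replacing `any[]` for process-data.
const processDataPayload = z.object({
  userId: z.string().uuid(),
  data: z.array(z.object({ id: z.string(), value: z.number() })),
});

export function parseProcessDataPayload(payload: unknown) {
  // Throws with a descriptive error if the shape is unexpected.
  const parsed = processDataPayload.parse(payload);
  // Log only non-sensitive metadata, never the raw payload.
  console.log(`process-data received ${parsed.data.length} item(s)`);
  return parsed;
}
```

The schema replaces the untyped any[] payload, and the log line records only a count rather than the raw data or user ID.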

🔴 apps/api/buildspec.yml (HIGH Risk)

# Issue Risk Level
1 Remote script execution via curl bash (bun installer)
2 Unverified upstream installer (no checksum or pinned version) HIGH
3 Dependency installs may run package lifecycle scripts (RCE risk) HIGH
4 Build context may include sensitive files copied into image HIGH
5 Cached node_modules and bun cache may persist secrets or tainted deps HIGH
6 CI logs print file listings and APP_NAME, risking info disclosure HIGH

Recommendations:

  1. Remove direct piping of remote install scripts. Instead download a pinned release artifact (HTTPS), verify its checksum or GPG signature, then run the installer locally. Example: curl -fSL https://bun.sh/releases/bun-<version>.tar.gz -o bun.tar.gz && echo '<expected-sha256>  bun.tar.gz' | sha256sum -c - && tar -xzf bun.tar.gz, then run the extracted installer.
  2. Pin installer versions and verify integrity. Do not rely on the network installer bootstrap URL without checksum/GPG verification.
  3. Run dependency installation in an isolated environment or CI step with --ignore-scripts where possible. If scripts are required, audit lifecycle scripts of dependencies and restrict network / permissions during install.
  4. Minimize Docker build context and use a .dockerignore. Only copy required runtime artifacts into ../docker-build (avoid copying .env, config, credentials, or repository root files). Use explicit COPY statements in Dockerfile rather than copying large directories.
  5. Limit caching of node_modules and bun caches that may persist secrets or compromised packages. Scope cache paths narrowly, rotate caches periodically, or avoid caching sensitive directories. Consider rebuilding cleanly in CI for sensitive builds.
  6. Avoid printing sensitive environment values and full directory listings in CI logs. Remove or redact echo "APP_NAME is set to $APP_NAME" and reduce use of ls -la for sensitive directories. Use masked environment variables in the CI system for secrets.

🟡 apps/api/src/attachments/attachments.service.ts (MEDIUM Risk)

# Issue Risk Level
1 AWS creds from env used to construct S3 client (long-lived credentials risk) MEDIUM
2 Client-provided Content-Type is trusted without validation MEDIUM
3 Service returns signed URLs for all attachments (mass exposure) MEDIUM
4 S3 metadata stores org/entity IDs and uploadedBy, may leak sensitive info MEDIUM
5 No server-side content scanning for malware or dangerous files MEDIUM
6 Console.error logs may expose sensitive error details or creds MEDIUM
7 Authorization relies only on caller-supplied organizationId MEDIUM

Recommendations:

  1. Use short-lived credentials / IAM role-based access (e.g., AWS SDK default provider chain, EC2/ECS/EKS task roles, or STS-assumed roles) instead of embedding long-lived accessKeyId/secretAccessKey in environment variables. Rotate credentials regularly and restrict scope/permissions to the minimum necessary (principle of least privilege).
  2. Validate uploaded content server-side: do not trust client-supplied Content-Type. Detect MIME by inspecting file magic bytes (libmagic or equivalent), enforce a whitelist of allowed types, and normalize/override Content-Type when putting to S3. Block or sandbox risky types (HTML, JS, executables) and enforce file size limits (already present).
  3. Avoid returning bulk signed URLs by default. Generate signed URLs on-demand per authenticated request, shorten expiry where possible, and enforce authorization checks per-request. Consider paginating/limiting returned attachments and only include metadata by default.
  4. Minimize sensitive metadata stored in S3 object metadata. Remove or redact organizationId/entityId/uploadedBy from S3 metadata if not required; store that mapping only in the database. If metadata must be stored, consider encrypting sensitive values and ensure bucket/object ACLs and IAM policies restrict who can read metadata.
  5. Integrate server-side content scanning (antivirus/malware scanners / virus scanning services) before storing or serving files. For high-risk content, quarantine and require manual review. Consider using sandboxing for processing untrusted files.
  6. Avoid logging raw error objects. Remove or redact sensitive fields from logs (stack traces, credential strings, request bodies). Use structured logging with log levels and a secure logging pipeline, and ensure logs are access-controlled and scrubbed of secrets.
  7. Enforce authorization at the service boundary: do not trust caller-supplied organizationId alone. Derive organizationId from authenticated principal or validate that the caller's identity has access to the requested organization/entity. Add explicit ownership/permission checks before returning or deleting attachments.
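
As a sketch of recommendations 2 and 3 (the bucket/key handling, the MIME allowlist, and the use of the file-type package and AWS SDK v3 presigner are assumptions rather than the service's actual implementation):

```ts
// Sketch: verify MIME type from file bytes and issue a short-lived presigned URL on demand.
import { fileTypeFromBuffer } from 'file-type';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const ALLOWED_MIME = new Set(['application/pdf', 'image/png', 'image/jpeg']); // illustrative

// Credentials come from the default provider chain (IAM role), not env keys.
const s3 = new S3Client({ region: process.env.AWS_REGION });

export async function detectMimeOrReject(buffer: Buffer): Promise<string> {
  const detected = await fileTypeFromBuffer(buffer); // inspects magic bytes
  if (!detected || !ALLOWED_MIME.has(detected.mime)) {
    throw new Error('Unsupported or unrecognized file type');
  }
  return detected.mime; // use this, not the client-supplied Content-Type
}

export async function signedUrlForAttachment(bucket: string, key: string): Promise<string> {
  // Generated per authorized request, with a short expiry, instead of bulk URLs.
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  return getSignedUrl(s3, command, { expiresIn: 300 }); // 5 minutes
}
```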

🟡 apps/api/src/comments/dto/update-comment.dto.ts (MEDIUM Risk)

# Issue Risk Level
1 No input sanitization for 'content' (stored XSS risk) MEDIUM
2 No protection if 'content' used in raw DB queries (SQL injection risk) MEDIUM

Recommendations:

  1. Sanitize/escape content before storing or rendering
  2. Use parameterized queries or ORM methods to prevent SQL injection
  3. Apply HTML sanitizer (e.g., DOMPurify, sanitize-html) on content
  4. Enforce server-side input normalization and allowed characters
  5. Encode outputs when rendering comments in HTML
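
A minimal sketch of recommendations 1 and 3, assuming the sanitize-html package is acceptable; the tag and attribute whitelist below is illustrative, not the project's policy:

```ts
// Sketch: strip dangerous markup from comment content before persisting it.
import sanitizeHtml from 'sanitize-html';

export function sanitizeCommentContent(raw: string): string {
  const normalized = raw.trim();
  return sanitizeHtml(normalized, {
    // Illustrative whitelist; everything else is dropped, including scripts and event handlers.
    allowedTags: ['b', 'i', 'em', 'strong', 'a', 'p', 'ul', 'ol', 'li', 'code'],
    allowedAttributes: { a: ['href'] },
    allowedSchemes: ['https', 'mailto'],
  });
}
```

Output encoding when rendering remains necessary regardless, since sanitization at write time is only one layer of defense.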

🔴 apps/api/src/context/context.controller.ts (HIGH Risk)

# Issue Risk Level
1 Potential SQL injection via contextId in GET/PATCH/DELETE endpoints HIGH
2 Potential injection via create/update DTOs if unsanitized HIGH
3 Missing validation of request params and bodies before service calls HIGH
4 Organization scoping may be bypassed because X-Organization-Id is optional HIGH
5 Sensitive user info (email/id) returned in API responses HIGH

Recommendations:

  1. Use parameterized queries/ORM methods; avoid string interpolation in DB calls
  2. Validate and sanitize all params and DTOs with class-validator and ValidationPipe
  3. Enforce server-side organization scoping and require/verify X-Organization-Id
  4. Remove or redact PII from responses unless strictly required
  5. Add authorization checks ensuring acting user belongs to the organization

🔴 apps/api/src/context/context.service.ts (HIGH Risk)

# Issue Risk Level
1 No input validation for DTOs and params HIGH
2 Update endpoint allows mass-assignment (orgId can be changed) HIGH
3 Update/delete DB calls use only id; missing org filter (TOCTOU risk) HIGH
4 Service trusts caller-supplied organizationId (missing auth) HIGH
5 Logs include question content and IDs; risk of data leakage HIGH
6 Re-throwing raw errors may expose internal details HIGH

Recommendations:

  1. Validate and sanitize DTOs and params (class-validator / Zod)
  2. Derive and verify organizationId from authenticated user
  3. Include organizationId in update/delete where clauses
  4. Whitelist updatable fields; strip organizationId from updates
  5. Redact/sanitize logs and return sanitized error messages
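
A sketch of recommendations 2–4, assuming Prisma and a context model (the client, model, and field names are assumptions); pushing the organization filter into the write itself removes the TOCTOU window and blocks mass-assignment of organizationId:

```ts
// Sketch: single conditional update that enforces org ownership and whitelists fields.
import { PrismaClient } from '@prisma/client';
import { NotFoundException } from '@nestjs/common';

const prisma = new PrismaClient();

export async function updateContextEntry(
  id: string,
  organizationId: string, // derived from the authenticated principal, not the request body
  dto: { question?: string; answer?: string; tags?: string[] },
) {
  // Only whitelisted fields are written; organizationId can never be reassigned here.
  // The `context` model name is an assumption for illustration.
  const result = await prisma.context.updateMany({
    where: { id, organizationId },
    data: { question: dto.question, answer: dto.answer, tags: dto.tags },
  });

  if (result.count === 0) {
    // Either the row does not exist or it belongs to another organization.
    throw new NotFoundException('Context entry not found');
  }
}
```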

🟡 apps/api/src/context/dto/context-response.dto.ts (MEDIUM Risk)

# Issue Risk Level
1 No runtime validation on DTO fields (missing class-validator constraints) MEDIUM

Recommendations:

  1. Add class-validator decorators to DTO fields (e.g., @IsString, @IsDateString, @IsUUID where appropriate).
  2. For arrays, use @IsArray() and validate items with @IsString({ each: true }) or appropriate item validators.
  3. Enable NestJS ValidationPipe globally (app.useGlobalPipes(new ValidationPipe({ whitelist: true, transform: true }))) to enforce DTO validation at runtime.
  4. Use @Type(() => Date) from class-transformer and @IsDateString or @IsDate for date fields to ensure correct parsing and validation.
  5. Separate response-only DTOs from request/input DTOs. If this file is used only for responses, ensure there are dedicated input DTOs with validation for all incoming data paths.
  6. Whitelist/strip unknown properties and consider sanitization middleware for inputs that will be persisted or used in sensitive operations.

🟡 apps/api/src/context/dto/create-context.dto.ts (MEDIUM Risk)

# Issue Risk Level
1 No input length limits on question and answer MEDIUM
2 No input sanitization; raw fields enable XSS/SQL injection if reused MEDIUM
3 Tags array not whitelisted or constrained MEDIUM

Recommendations:

  1. Add length validators to text fields, e.g. @MaxLength(2000) / @MinLength(...) or @Length(min, max) for question and answer to limit payload size and reduce abuse/DoS risk.
  2. Trim and normalize incoming strings using class-transformer @Transform(({ value }) => value?.trim()) so stored values are normalized.
  3. Validate and constrain the tags array: enforce @ArrayMaxSize(n), @ArrayMinSize(0), @IsString({ each: true }), and either @Matches(/pattern/) or @IsIn([...allowedTags]) (or a custom validator) to whitelist acceptable tag values.
  4. Sanitize or escape content before storing or rendering: use a well-known sanitizer (e.g. 'xss' or 'sanitize-html') or HTML-escape fields on output. If you allow markup (Markdown/HTML), render it through a safe renderer and apply a strict Content Security Policy (CSP).
  5. Ensure backend storage/queries use parameterized queries/ORM APIs (avoid string concatenation) so even if input contains SQL-like content it can't produce SQL injection.
  6. Enforce server-side request size limits (body-parser limits) and rate-limiting to mitigate large or abusive payload submission.
  7. Add unit/integration tests that assert validators run (e.g., sending overlong strings, invalid tags, and malicious payloads) so regressions are caught.
  8. Consider logging and alerting when unexpectedly large or malformed payloads are received so you can detect and respond to abuse.
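
A hedged sketch of recommendations 1–3; the limits, tag pattern, and field names below are placeholders rather than the project's actual DTO:

```ts
// Sketch: bounded, trimmed, whitelisted inputs for the create-context payload.
import { ArrayMaxSize, IsArray, IsOptional, IsString, Matches, MaxLength } from 'class-validator';
import { Transform } from 'class-transformer';

export class CreateContextDto {
  @IsString()
  @MaxLength(2000) // illustrative limit
  @Transform(({ value }) => (typeof value === 'string' ? value.trim() : value))
  question!: string;

  @IsString()
  @MaxLength(8000) // illustrative limit
  @Transform(({ value }) => (typeof value === 'string' ? value.trim() : value))
  answer!: string;

  @IsOptional()
  @IsArray()
  @ArrayMaxSize(20)
  @IsString({ each: true })
  @MaxLength(50, { each: true })
  @Matches(/^[a-z0-9-]+$/, { each: true }) // illustrative tag pattern
  tags?: string[];
}
```

These decorators only take effect if a global ValidationPipe with whitelist and transform enabled is registered, as the recommendations above note.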

🟡 apps/api/src/device-agent/device-agent.controller.ts (MEDIUM Risk)

# Issue Risk Level
1 Unsanitized filename used in Content-Disposition header MEDIUM
2 OrganizationId not validated against auth context (broken access control) MEDIUM
3 User-supplied orgId/employeeId passed to service without validation MEDIUM
4 Trusting contentType from service (content-type spoofing risk) MEDIUM

Recommendations:

  1. Sanitize/validate filename; remove CRLF/quotes and use safe fallback names
  2. Verify organizationId matches authenticated user's scope/entitlements
  3. Validate/sanitize organizationId and employeeId before service calls
  4. Ensure deviceAgentService enforces access control and parameterized queries
  5. Whitelist contentType or map allowed types; avoid trusting service input

🔴 apps/api/src/device-agent/device-agent.service.ts (HIGH Risk)

# Issue Risk Level
1 Unsanitized orgId/employeeId in generated script (command injection) HIGH
2 Missing input validation for organizationId and employeeId HIGH
3 Sensitive identifiers logged (orgId, employeeId) to logger HIGH
4 No authorization checks in service; callers may request arbitrary agent HIGH
5 No S3 object size or timeout checks; possible DoS/resource exhaustion HIGH

Recommendations:

  1. Validate and strictly whitelist organizationId and employeeId at the API boundary (controller). Reject or canonicalize any values that do not match an expected pattern (e.g., UUIDs or numeric IDs).
  2. Sanitize or escape values before embedding into generated shell/batch scripts. Better approaches: (a) avoid direct interpolation into shell source — write IDs to a separate data file that the script reads, (b) base64-encode values server-side and decode inside the script, or (c) use a strict encoding/whitelisting function that removes/escapes characters meaningful to the shell (quotes, newlines, &, |, >, <, ^, etc.).
  3. Do not log raw organizationId/employeeId. Mask or hash identifiers when logging (e.g., log a truncated hash or internal non-identifying token) and treat these values as sensitive in log retention/rotation policies.
  4. Enforce authorization checks before creating/downloading agents. The service should verify the caller is authenticated and authorized to request artifacts for the given organizationId/employeeId (ownership or admin role).
  5. Protect S3 streaming and archive creation from resource exhaustion: check S3 object's ContentLength before streaming and reject objects above a configured maximum; use AbortController/timeout for S3 requests; monitor and limit archive size; enforce request rate limits; and apply per-request CPU/memory quotas where possible.
  6. Consider signing produced artifacts (zip) or serving prebuilt signed releases to avoid embedding untrusted values into executable content; validate/scan generated artifacts before serving to end-users.
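
As a sketch of recommendations 1 and 2 (the UUID format, script contents, and installer invocation are assumptions, not the service's actual template): validate the IDs first, then pass them to the generated script base64-encoded so no shell metacharacters are ever interpolated as shell syntax.

```ts
// Sketch: validate IDs, then hand them to the generated script as base64 rather than raw text.
const UUID_RE = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function assertUuid(value: string, name: string): string {
  if (!UUID_RE.test(value)) {
    throw new Error(`${name} must be a UUID`);
  }
  return value;
}

export function buildSetupScript(organizationId: string, employeeId: string): string {
  const orgB64 = Buffer.from(assertUuid(organizationId, 'organizationId'), 'utf8').toString('base64');
  const empB64 = Buffer.from(assertUuid(employeeId, 'employeeId'), 'utf8').toString('base64');

  // The script decodes the values itself; base64 output contains no shell metacharacters.
  return [
    '#!/bin/sh',
    'set -eu',
    `ORG_ID="$(printf '%s' '${orgB64}' | base64 -d)"`,
    `EMPLOYEE_ID="$(printf '%s' '${empB64}' | base64 -d)"`,
    // Hypothetical installer invocation; the real agent setup command will differ.
    './install-agent --org "$ORG_ID" --employee "$EMPLOYEE_ID"',
  ].join('\n');
}
```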

🟡 apps/api/src/devices/devices.service.ts (MEDIUM Risk)

# Issue Risk Level
1 Missing input validation for organizationId and memberId MEDIUM
2 Potential sensitive info leakage via logs and thrown error messages MEDIUM
3 No validation of fleetService responses before use MEDIUM
4 No limit on hostIds allowing very large requests MEDIUM

Recommendations:

  1. Validate and sanitize organizationId and memberId at the API boundary (controllers/DTOs). Enforce type and format (e.g., UUID) and length checks. Use NestJS DTOs with class-validator or explicit validation functions.
  2. Avoid logging raw error objects or full IDs. Mask or omit sensitive identifiers in logs. When throwing errors to clients return generic messages; log detailed errors to secure log storage only.
  3. Validate external service responses (fleetService) before using them: check that labelHosts is an object, labelHosts.hosts is an array, host IDs are numbers, and returned devices match expected schema. Fail gracefully when schema is unexpected.
  4. Enforce limits/pagination on hostIds and cap the size of hostIds passed to fleetService.getMultipleHosts. Apply rate limiting and guardrails to prevent processing extremely large arrays (e.g., maximum batch size, streaming, or background processing).
  5. Consider defensive coding: wrap external calls with timeouts, circuit breakers, and input sanitization to reduce DoS and unexpected-data risks.
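
A small sketch of recommendations 3 and 4; the fleetService signature, batch size, and response checks are assumptions:

```ts
// Sketch: cap and chunk host IDs before calling the external fleet API, and
// validate the response shape instead of trusting it blindly.
const MAX_BATCH = 100; // illustrative cap

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}

export async function fetchHostsInBatches(
  fleetService: { getMultipleHosts(ids: number[]): Promise<unknown> }, // assumed interface
  hostIds: unknown,
): Promise<unknown[]> {
  if (!Array.isArray(hostIds) || !hostIds.every((id) => Number.isInteger(id))) {
    throw new Error('hostIds must be an array of integers');
  }
  const results: unknown[] = [];
  for (const batch of chunk(hostIds as number[], MAX_BATCH)) {
    const res = await fleetService.getMultipleHosts(batch);
    if (!Array.isArray(res)) {
      throw new Error('Unexpected response shape from fleet service');
    }
    results.push(...res);
  }
  return results;
}
```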

🟡 apps/api/src/devices/dto/device-responses.dto.ts (MEDIUM Risk)

# Issue Risk Level
1 Missing runtime input validation on DTOs (no class-validator checks) MEDIUM
2 Unrestricted object fields (issues, mdm, software, users allow arbitrary props) MEDIUM
3 Policy.query may accept raw SQL; risk of SQL injection if executed with user input MEDIUM
4 DTO exposes sensitive fields (hardware_serial, public_ip, primary_mac, author_email) MEDIUM

Recommendations:

  1. Add runtime validation to DTOs using class-validator / class-transformer. Add decorators like @IsString, @IsNumber, @IsEmail, @IsBoolean, @IsOptional, @MaxLength, @Min, etc.
  2. Enable a global ValidationPipe with options { whitelist: true, forbidNonWhitelisted: true, transform: true } to strip unexpected properties and enforce types at runtime.
  3. Replace untyped object[] / Record<string, unknown> usages with explicit DTOs (@ValidateNested + @Type) or restrict allowed keys. If truly dynamic, validate payload shape and enforce additionalProperties: false where possible.
  4. For issues/mdm/software/users/pack_stats/labels/packs/batteries/end_users: define explicit schemas or run a whitelist/shape validation before persisting or returning these objects.
  5. Treat FleetPolicyDto.query as potentially executable input: do not execute raw SQL from stored strings. If you must, only allow it for trusted admin users, parse and validate the statement, or map to parameterized queries / prepared statements. Prefer storing structured query templates + parameters instead of raw SQL.
  6. Implement strict controls for any code path that might execute user-supplied SQL (audit call sites that accept policy.query and ensure parameterization or sandboxing).
  7. Mask or omit sensitive PII from API responses where not required: consider separate public DTOs that exclude hardware_serial, public_ip, primary_mac, author_email, etc. Use role-based access control to restrict who can see PII.
  8. Sanitize and redact sensitive fields in logs and error messages. Apply data-retention and minimization policies for PII.
  9. Enforce size and complexity limits on incoming objects/strings to reduce abuse and DoS vectors (maxLength, array max items).
  10. Add unit/integration tests that exercise validation boundaries and ensure unexpected properties are rejected.

🟢 apps/api/src/framework-editor/task-template/pipes/validate-id.pipe.ts (LOW Risk)

# Issue Risk Level
1 No maximum length check on ID allowing very long inputs LOW

Recommendations:

  1. Enforce a reasonable maximum length for the ID (e.g., <= 64 characters).
  2. Update the regex to include a length bound, e.g. /^frk_tt_[a-z0-9]{1,56}$/i (adjust upper bound to meet total max length accounting for prefix).
  3. Reject inputs that exceed the maximum length in the pipe (throw BadRequestException).
  4. Use a validation library (e.g., class-validator with @Length) if you want declarative constraints across DTOs.
  5. Normalize/escape values before using them in DB queries, command execution, or logs.
  6. Add request rate limiting and payload size limits to mitigate large payload / DoS abuse.
  7. Add unit tests to validate acceptance and rejection of edge-case lengths.
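
Putting recommendations 1–3 together as a NestJS pipe (the prefix and bounds follow the suggestion above; treat the exact limits as placeholders):

```ts
// Sketch: reject over-long or malformed task-template IDs before they reach the service layer.
import { BadRequestException, Injectable, PipeTransform } from '@nestjs/common';

const MAX_ID_LENGTH = 64;
// Prefix plus a bounded alphanumeric suffix, as suggested in the recommendations above.
const ID_PATTERN = /^frk_tt_[a-z0-9]{1,56}$/i;

@Injectable()
export class ValidateTaskTemplateIdPipe implements PipeTransform<string, string> {
  transform(value: string): string {
    if (typeof value !== 'string' || value.length > MAX_ID_LENGTH || !ID_PATTERN.test(value)) {
      throw new BadRequestException('Invalid task template ID');
    }
    return value;
  }
}
```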

💡 Recommendations

View 3 recommendation(s)
  1. Remediate OSV findings: upgrade xlsx@0.18.5 to a patched release that fixes GHSA-4r6h-8v6p-xvw6 and GHSA-5pgg-2g8v-p4x9, and upgrade ai from 5.0.0 to >=5.0.52 (the fix listed). Re-run tests and the scan after upgrades.
  2. Remove the hardcoded token in apps/.cursor/rules/trigger.basic.mdc (the literal "user-approval-token"). Read the token from runtime configuration (e.g., process.env) and validate its presence at startup; treat the exposed token as compromised and rotate it.
  3. Fix injection points: (a) In apps/api/src/device-agent/* validate/whitelist organizationId and employeeId (e.g., UUID regex/length) and never directly interpolate them into generated shell scripts—encode or write to a data file consumed by the script. (b) In apps/api/src/device-agent/device-agent.controller.ts sanitize filenames used in Content-Disposition (strip CR/LF, quotes, unsafe chars). (c) In apps/api/src/context/* use parameterized ORM/queries for contextId and validate/sanitize all request params/DTOs to prevent SQL injection.

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 17, 2025

@CLAassistant

CLAassistant commented Nov 14, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
2 out of 3 committers have signed the CLA.

✅ Marfuen
✅ Itsnotaka
❌ github-actions[bot]
You have signed the CLA already but the status is still pending? Let us recheck it.

…1739)

* feat(trust-access): implement trust access request management system

* feat(trust-access): implement trust access request management system

* feat: clean up logs

* feat(trust-access): validate user ID from member ID in trust access service

* feat(trust-access): enhance member ID handling and user verification

* feat: added trust page navigation

* feat(api): update dependencies and add new packages for nanoid and pdf-lib

* feat(trust-access): implement NDA signing email resend functionality

* feat(trust-access): add endpoint to preview NDA with watermark

* feat(trust-access): add reclaim access endpoint and email notification

* feat(trust-access): add document access and download endpoints with email updates

* feat(trust-access): integrate @tanstack/react-form for access request dialogs

* feat(trust-access): enforce scope selection and validate NDA status

* feat(trust-access): update approve button to include request details

* fix: import fix for monorepo

* fix: build fix

* feat(trust-access): add requested duration days to access request

* refactor(trust-access): remove scopes from access request and related services

* feat(trust-access): implement policies access and download endpoints

* chore: format

* refactor(trust-access): rename TrustAccessRequestsClient and remove unused components

* style(trust-access): update dialog components for consistent layout

* chore(workflows): add daniel/* to auto PR trigger paths

* chore(db): refactor migration

---------

Signed-off-by: Mariano Fuentes <marfuen98@gmail.com>
Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
…ng and auto-answering (#1755)

- Add questionnaire file upload and parsing functionality
- Implement AI-powered question extraction from PDFs
- Add auto-answer functionality using RAG with vector embeddings
- Add questionnaire results display, editing, and export
- Implement vector embedding sync for organization policies
- Add batch processing for questionnaire answers
- Fix TypeScript build error in analytics package
- Resolve merge conflict in bun.lock

Co-authored-by: Tofik Hasanov <annexcies@gmail.com>
* feat(nav): add feature flag to conditionally render Trust tab

* feat(nav): add feature flag to conditionally render Trust tab

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
* fix(api): resolve inefficient regex pattern in domain validation

* feat(ui): add field component to package.json exports

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
…ads to S3 (#1758)

Co-authored-by: Tofik Hasanov <annexcies@gmail.com>
Co-authored-by: Tofik Hasanov <annexcies@gmail.com>
…shed policies (#1761)

Co-authored-by: Tofik Hasanov <annexcies@gmail.com>
@vercel vercel bot temporarily deployed to staging – portal November 17, 2025 22:45 Inactive
* feat(security-questionnaire): add loading component and sidebar for questionnaire

* feat(security-questionnaire): simplify layout and enhance QuestionnaireParser component

* chore(security-questionnaire): update CTA text and tooltip for policy publishing

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
@comp-ai-code-review

comp-ai-code-review bot commented Nov 17, 2025

🔒 Comp AI - Security Review

🔴 Risk Level: HIGH

OSV findings: xlsx@0.18.5 has two HIGH GHSA issues; ai@5.0.0 has a LOW GHSA issue. Repo contains a hardcoded sample DATABASE_URL (user:pass). Multiple code files show script/header/DB injection risks.


📦 Dependency Vulnerabilities

🟠 NPM Packages (HIGH)

Risk Score: 8/10 | Summary: 2 high, 1 low CVEs found

Package | Version | CVE | Severity | CVSS | Summary | Fixed In
xlsx | 0.18.5 | GHSA-4r6h-8v6p-xvw6 | HIGH | N/A | Prototype Pollution in sheetJS | No fix yet
xlsx | 0.18.5 | GHSA-5pgg-2g8v-p4x9 | HIGH | N/A | SheetJS Regular Expression Denial of Service (ReDoS) | No fix yet
ai | 5.0.0 | GHSA-rwvc-j5jr-mgvh | LOW | N/A | Vercel’s AI SDK's filetype whitelists can be bypassed when uploading files | 5.0.52

🛡️ Code Security Analysis

View 14 file(s) with issues

🔴 .env.example (HIGH Risk)

# Issue Risk Level
1 AUTH_SECRET is empty (auth secret missing) HIGH
2 REVALIDATION_SECRET is empty (revalidation secret missing) HIGH
3 Insecure HTTP URLs for portals (http used instead of https) HIGH
4 AUTH_TRUSTED_ORIGINS allows wildcard and http origins HIGH
5 Duplicate TRIGGER_SECRET_KEY variable HIGH
6 Env file can expose plaintext secrets if committed to repo HIGH

Recommendations:

  1. Set strong, high-entropy secrets for AUTH_SECRET and REVALIDATION_SECRET (e.g., openssl rand -base64 32) and enforce presence at startup (fail fast if missing).
  2. Use HTTPS for NEXT_PUBLIC_PORTAL_URL and TRUST_APP_URL in production. Keep HTTP-only localhost defaults for local dev but ensure environment-specific config enforces HTTPS for prod.
  3. Restrict AUTH_TRUSTED_ORIGINS to exact origins where possible (no wildcards). Do not include http:// origins for production; only include localhost for dev.
  4. Remove duplicate TRIGGER_SECRET_KEY entry from the .env.example and verify the runtime expects the intended variable name. Document canonical env var names to avoid overrides and confusion.
  5. Never commit real .env files with secrets to version control. Add .env to .gitignore, keep .env.example as a template with placeholders (no real secrets), and use a secrets manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault, etc.) or CI/CD secret injection for production.
  6. Add startup validation checks that verify required secrets/keys/URLs are set and have reasonable formats (e.g., non-empty, HTTPS for prod). Consider automated scanning CI checks to block commits containing actual secrets.
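
A hedged sketch of recommendations 1 and 6: fail-fast validation of the environment at startup, here using zod (the variable names follow the findings above; the schema itself is an assumption, not the app's existing config loader).

```ts
// Sketch: fail fast at startup when required secrets are missing or weak.
import { z } from 'zod';

const envSchema = z.object({
  AUTH_SECRET: z.string().min(32, 'AUTH_SECRET must be at least 32 characters'),
  REVALIDATION_SECRET: z.string().min(32),
  DATABASE_URL: z.string().url(),
  NEXT_PUBLIC_PORTAL_URL: z
    .string()
    .url()
    .refine(
      (url) => process.env.NODE_ENV !== 'production' || url.startsWith('https://'),
      'Portal URL must use HTTPS in production',
    ),
});

// Throws with a readable error listing every invalid variable.
export const env = envSchema.parse(process.env);
```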

🔴 .github/workflows/auto-pr-to-main.yml (HIGH Risk)

# Issue Risk Level
1 Unpinned third-party actions (repo-sync@v2, checkout@v3) enable supply-chain risk HIGH
2 GITHUB_TOKEN passed to external action can be exfiltrated or abused HIGH
3 Permissions overly broad: contents/write and issues/write HIGH
4 Uses self-hosted runner 'warp-ubuntu-latest-arm64-4x' exposing secrets to host HIGH
5 continue-on-error: true hides failures and may mask malicious steps HIGH
6 Automated PRs from pushed branches can bypass review controls HIGH
7 PR title/body use actor/ref without sanitization (injection/spoofing) HIGH

Recommendations:

  1. Pin all third-party actions to specific commit SHAs (use the full git commit SHA) instead of mutable tags (e.g., actions/checkout@<commit-sha>, repo-sync/pull-request@<commit-sha>) to mitigate supply-chain changes.
  2. Avoid passing the repository GITHUB_TOKEN to untrusted third-party actions. If the action must run, vendor/verify the action code or use a minimal-scoped token created for a specific purpose (rotate and audit it). Consider using OIDC and short-lived tokens where applicable.
  3. Use least-privilege permissions: remove unnecessary permissions (e.g., drop contents: write and issues: write unless explicitly required). Scope permissions to the minimal set and only for the steps that need them using per-step permissions if possible.
  4. If possible, run this job on GitHub-hosted runners. If a self-hosted runner is required, harden and isolate it (ephemeral runners, runner autoscaling, network isolation, audited host, and limit which repos can use the runner).
  5. Remove continue-on-error: true at the job level. Fail fast so any unexpected behavior is visible. If specific steps should be allowed to fail, mark those steps explicitly with continue-on-error and document why.
  6. Restrict who/what can trigger PR creation: filter by actor, require branch protections on the destination (require reviews, status checks, require pull request reviews before merging), and avoid auto-merging unreviewed PRs. Consider adding an allowlist for actors that can trigger this workflow.
  7. Sanitize/validate interpolated values used in PR titles/bodies (github.actor, github.ref_name). At minimum, whitelist acceptable branch name patterns and/or escape or strip control characters. Also consider constructing a safer static PR message and including non-sensitive dynamic data only after validation.

🔴 SELF_HOSTING.md (HIGH Risk)

# Issue Risk Level
1 Sample DATABASE_URL contains hardcoded credentials (user:pass) HIGH
2 .env file recommended for secrets — risk of accidental commits/exposure HIGH
3 Sensitive keys passed via env_file to containers (accessible in container env) HIGH
4 RESEND_API_KEY duplicated in app and portal increases leak surface HIGH
5 NEXT_PUBLIC_* env vars will be embedded client-side and can leak values HIGH
6 Optional AWS keys (APP_AWS_*) suggested in .env risk credential leakage HIGH
7 TRIGGER_SECRET_KEY placed in app env exposes workflow/runner secrets HIGH
8 Published ports (3000,3002) expose services if host firewall is misconfigured HIGH

Recommendations:

  1. Do not store real credentials in repository files. Remove or clearly mark example placeholders (e.g., user:pass) and ensure no real secrets are committed; add .env to .gitignore.
  2. Use a secrets manager (AWS Secrets Manager, HashiCorp Vault, GCP Secret Manager, Azure Key Vault) or Docker/compose secrets to inject secrets into containers instead of env_file/.env. Mount secrets as files with restrictive permissions where possible.
  3. Avoid placing sensitive values in NEXT_PUBLIC_* environment variables (these are exposed to client-side JS). Keep only non-sensitive public configuration in NEXT_PUBLIC_*.
  4. Minimize secret duplication. Use distinct, least-privilege credentials per service (e.g., separate Resend keys or scoped API keys) and rotate keys regularly.
  5. For database credentials: use a least-privilege DB user, enable SSL, restrict Postgres network access (VPC, IP allowlist, or private networking), and consider short-lived/rotated credentials where supported.
  6. When running locally or in production, limit port exposure: bind ports to localhost if you only need local access (127.0.0.1:3000:3000), use a reverse proxy with TLS for public traffic, and enforce host firewall/security group rules to restrict access.
  7. For CI/CD and automation secrets (e.g., TRIGGER_SECRET_KEY, AWS keys): store them in the CI/CD secret store or the provider’s secret manager and only inject at runtime. Do not put them in env_file checked into repo.
  8. Audit healthchecks and logging to ensure secrets are not accidentally logged. Ensure containers don’t echo env vars to logs or expose debug endpoints in production.
  9. Consider tooling to detect secrets in commits (pre-commit hooks, git-secrets, trufflehog) and automate secret rotation on suspected exposure.

🔴 apps/api/buildspec.yml (HIGH Risk)

# Issue Risk Level
1 Unverified remote install via curl bash (bun.sh) allows remote RCE
2 Dependency installs run package postinstall scripts (supply-chain RCE) HIGH
3 Unquoted environment vars used in shell commands — argument injection HIGH
4 Docker build can bake secrets into image layers or build cache HIGH
5 Cached node_modules and bun cache may persist malicious artifacts HIGH

Recommendations:

  1. Stop piping remote installers to a shell. Download the bun installer to a file over TLS, verify a checksum or signature (or vendor a pinned binary), and then run it. Prefer pinned versions and validate TLS certs; avoid automated curl | bash.
  2. Ensure dependency installation never runs arbitrary postinstall scripts: require frozen lockfile installs and use --ignore-scripts in CI. Remove the fallback that executes installs without script suppression. Add dependency supply-chain scanning (Snyk/Dependabot/OSS Index) and reproducible installs (npm ci / bun install --frozen-lockfile only).
  3. Quote and validate all environment variable expansions used in commands and Docker tags. Example: docker build -t "$ECR_REPOSITORY_URI:$IMAGE_TAG" ... Validate/whitelist variable formats (no spaces, no shell metacharacters) before using them in command arguments to prevent injection.
  4. Never bake secrets or long-lived credentials into images or copy them into the build context. Use BuildKit secrets (--secret) or Docker build-time secret features, .dockerignore to exclude sensitive files, and ensure CI env vars are not passed as build args that get persisted in image layers. Rotate any credentials accidentally embedded.
  5. Avoid caching entire node_modules in shared CI caches unless necessary. Clear caches on suspicion, scan cached artifacts for malicious packages, and prefer fresh installs from verified lockfiles. Limit who can push to CI cache and run periodic integrity scans of cached artifacts.

🟡 apps/api/src/attachments/attachments.service.ts (MEDIUM Risk)

# Issue Risk Level
1 Missing server-side authorization (IDOR) for orgId-based methods MEDIUM
2 getAttachments returns signed URLs for all files (mass exposure) MEDIUM
3 Client-controlled Content-Type allows content-type spoofing/XSS MEDIUM
4 S3 metadata fields not all sanitized — risk of header injection MEDIUM
5 console.error logs may leak sensitive data or object keys MEDIUM
6 Long-lived AWS creds from env; possible over-privileged access MEDIUM
7 No malware/MIME validation on uploaded file contents MEDIUM

Recommendations:

  1. Enforce server-side authorization in this service (or ensure controllers calling it verify the caller's org membership and permissions). Do not rely on caller-supplied organizationId without checking the caller identity/claims.
  2. Do not return signed URLs for every attachment by default. Provide metadata endpoints that do not include URLs, and generate presigned URLs on-demand with the shortest practical expiry. Consider per-request authorization checks before issuing a signed URL.
  3. Validate and normalize Content-Type server-side against an allowlist derived from file inspection (magic bytes / file signature) before storing the ContentType in S3. When serving files, set Content-Disposition: attachment; filename="..." to avoid inline execution in browsers when not intended.
  4. Sanitize all metadata fields written to S3 (organizationId, entityId, entityType, uploadedBy), removing/control-encoding CR/LF and non-ASCII as done for file names. Enforce a strict charset and pattern for IDs (e.g., allow only alphanumerics and a small set of separators).
  5. Avoid logging full error objects and unredacted S3 keys or environment values. Use structured logging with redaction for sensitive fields and ensure logs are access-controlled.
  6. Prefer IAM roles (e.g., EC2/EKS task roles, ECS task roles, or WebIdentity) instead of static credentials in env if running on AWS. Ensure credentials used have least privilege (only s3:GetObject/PutObject/DeleteObject on the specific bucket/prefix). Rotate keys and limit lifetime if static creds are necessary.
  7. Add server-side file scanning or integrate a malware scanning solution (e.g., ClamAV, commercial scanners, or AWS Malware Protection integrations). Verify uploaded file content against expected MIME/type using file-signature inspection, and reject suspicious files or dangerous extensions (e.g., .html/.svg served as text/html).
  8. Harden S3 bucket policies: block public access, enforce encryption-at-rest (SSE), use bucket policies to restrict access to the application's IAM role, and disable ACLs. Consider storing user-upload metadata in DB rather than S3 metadata if you need richer/longer fields.
  9. Shorten presigned URL expiry where feasible (e.g., a few minutes or less depending on use case) and monitor access to presigned URLs.
  10. When constructing the S3 key/path, validate/normalize organizationId/entityId/entityType to prevent path traversal confusion or unexpected prefix creation.

🟡 apps/api/src/comments/dto/update-comment.dto.ts (MEDIUM Risk)

# Issue Risk Level
1 No HTML sanitization for comment content (XSS risk) MEDIUM

Recommendations:

  1. Sanitize comment content server-side before saving (e.g., sanitize-html or DOMPurify with JSDOM).
  2. Always encode/escape content on output according to the rendering context (HTML body, HTML attribute, JS, URL).
  3. If HTML is allowed, apply a strict whitelist of allowed tags/attributes and strip everything else; otherwise strip all HTML.
  4. Normalize input (trim, remove control characters) and keep the MaxLength validation (already present) intact.
  5. Consider storing both raw and sanitized versions only if you have a clear, safe usage pattern; prefer saving sanitized content to minimize risk.
  6. Enforce a Content Security Policy (CSP) as an additional defense-in-depth measure.
  7. Add automated tests that verify common XSS payloads are neutralized.

🟡 apps/api/src/context/context.controller.ts (MEDIUM Risk)

# Issue Risk Level
1 Missing input validation on request bodies (create/update) MEDIUM
2 Missing validation/parsing on route param 'id' MEDIUM
3 Unsanitized inputs passed to service methods — potential SQL/NoSQL injection MEDIUM
4 Authenticated user PII (email/id) echoed in responses MEDIUM
5 Optional X-Organization-Id for API key may allow cross-org access MEDIUM

Recommendations:

  1. Enable class-validator DTOs and apply ValidationPipe globally
  2. Use ParseUUIDPipe/ParseIntPipe or explicit param validation for 'id'
  3. Sanitize inputs and use parameterized ORM queries to prevent injection
  4. Avoid returning user email/id or restrict PII in responses
  5. Enforce organization ownership checks and require org header for API-key calls

🔴 apps/api/src/context/context.service.ts (HIGH Risk)

# Issue Risk Level
1 No input validation for DTOs and params before DB operations HIGH
2 Reads then writes separately — TOCTOU race condition possible HIGH
3 Logs user content (question) to application logs HIGH
4 Logs raw error objects, may leak internal details HIGH

Recommendations:

  1. Add validation and sanitization for incoming IDs and DTOs (use class-validator on DTOs and enable validation pipes in NestJS controllers). Validate organizationId and id formats (e.g., UUID) before DB calls.
  2. Enforce scope in the DB operation itself: include organizationId in the where clause for update and delete (e.g. where: { id, organizationId }) so the DB enforces ownership in a single statement.
  3. Avoid read-then-write as separate operations when enforcing authorization. Perform update/delete with a single conditional DB call (including organizationId) or use a transaction/optimistic concurrency control to eliminate TOCTOU windows.
  4. Avoid logging raw user-provided content. Redact or hash sensitive fields (or log only metadata/ids). If truncation is necessary, ensure it cannot leak sensitive data.
  5. Sanitize error logs in production: do not log full error objects or stacks to user-accessible logs. Log structured, minimal error information and capture full traces in secure/internal error reporting systems (with access controls).

🟡 apps/api/src/context/dto/create-context.dto.ts (MEDIUM Risk)

# Issue Risk Level
1 No max length on question and answer (DoS via large payloads) MEDIUM
2 No output sanitization indicated (stored XSS when rendered) MEDIUM
3 Tags lack limits on count and length (resource exhaustion) MEDIUM
4 No trimming/normalization of inputs (unexpected whitespace/injection) MEDIUM

Recommendations:

  1. Add length limits to question and answer, e.g. @MaxLength(2000) (adjust numbers to app needs).
  2. Constrain tags: use @ArrayMaxSize(50) (or lower) and @MaxLength(100, { each: true }) for each tag.
  3. Trim/normalize inputs at DTO level using class-transformer, e.g. @Transform(({ value }) => typeof value === 'string' ? value.trim() : value) on string fields.
  4. Sanitize or escape content when rendering to users (prevent stored XSS). If storing HTML, sanitize with a vetted library (e.g. DOMPurify on server or sanitize-html) before saving and/or escape on output.
  5. Enable and configure NestJS global validation pipe (transform: true, whitelist: true, forbidNonWhitelisted as needed) so DTOs are enforced consistently.
  6. Enforce request size limits (body-parser / Nest config, reverse proxy limits) and apply rate limiting (Nest Throttler or API gateway) to mitigate abuse/DoS from large/rapid submissions.
  7. Consider logging/monitoring for unusually large payloads or abnormally large tag arrays and add quotas per user if applicable.

🟡 apps/api/src/device-agent/device-agent.controller.ts (MEDIUM Risk)

# Issue Risk Level
1 Content-Disposition header injection via unsanitized filename MEDIUM
2 Filename and contentType from service used without validation MEDIUM
3 OrganizationId header forwarded to service without validation MEDIUM
4 employeeId default 'unknown-user' may misattribute or bypass checks MEDIUM
5 Potential injection in backend service via orgId/employeeId inputs MEDIUM

Recommendations:

  1. Sanitize filenames before placing them in headers: remove CR, LF, control characters and unescaped quotes. Prefer a library (e.g., npm content-disposition) to build safe Content-Disposition headers.
  2. Validate and canonicalize contentType against an allowlist of expected MIME types before setting the Content-Type header.
  3. Validate OrganizationId (e.g., require UUID format) and employeeId inputs at the controller boundary. Reject or normalize unexpected formats.
  4. Avoid using a permissive default like 'unknown-user' for employeeId; instead require explicit authenticated identity or return a 4xx if user identity is required for the operation.
  5. Ensure deviceAgentService performs its own input validation and does not directly use orgId/employeeId in OS commands, SQL queries, or templates without parameterization or escaping.
  6. Limit streamed file size and enforce server-side checks to prevent resource exhaustion.
  7. Add unit/integration tests that assert CRLF/newline characters in filenames are rejected or sanitized and that only allowed content types are returned.
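
A sketch of recommendations 1 and 2, using the content-disposition package suggested above; the content-type allowlist and fallback filename are illustrative:

```ts
// Sketch: build a safe Content-Disposition header and only allow known content types.
import contentDisposition from 'content-disposition';

const ALLOWED_CONTENT_TYPES = new Set([
  'application/zip',
  'application/octet-stream',
]); // illustrative allowlist

export function downloadHeaders(filename: string, contentType: string) {
  // Strip CR/LF, other control characters, and quotes before the value reaches a header.
  const safeName = filename.replace(/[\x00-\x1f\x7f"]/g, '').trim() || 'agent-download';

  if (!ALLOWED_CONTENT_TYPES.has(contentType)) {
    throw new Error('Unexpected content type for agent download');
  }

  return {
    // The library handles quoting and RFC 5987 encoding of non-ASCII names.
    'Content-Disposition': contentDisposition(safeName, { type: 'attachment' }),
    'Content-Type': contentType,
  };
}
```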

🔴 apps/api/src/device-agent/device-agent.service.ts (HIGH Risk)

# Issue Risk Level
1 Unsanitized orgId/employeeId inserted into generated Windows script HIGH
2 Windows setup script marked executable; may run injected content HIGH
3 No input validation for organizationId/employeeId parameters HIGH
4 Service methods lack auth/authorization checks (can generate agent for any org) HIGH
5 AWS credentials loaded from environment; risk if env leaked or committed HIGH
6 S3 streams returned directly without size/throughput limits (DoS risk) HIGH

Recommendations:

  1. Validate and whitelist orgId and employeeId before embedding in scripts
  2. Escape or template-insert values to prevent command/script injection
  3. Enforce authorization so only allowed callers can request agents
  4. Avoid embedding untrusted data into executable scripts; remove exec bit or sign artifacts
  5. Use IAM roles, avoid long-lived env keys, rotate credentials and apply least privilege

🟡 apps/api/src/devices/devices.service.ts (MEDIUM Risk)

# Issue Risk Level
1 Missing validation for organizationId and memberId inputs MEDIUM
2 Service lacks authorization checks for accessing org/member devices MEDIUM
3 Rethrowing errors includes original error.message (info leak) MEDIUM
4 Detailed logging of organizationId/memberId may leak sensitive data MEDIUM

Recommendations:

  1. Perform input validation and sanitization for organizationId and memberId at the controller/service boundary (e.g., enforce UUID format, length checks, or use class-validator DTOs).
  2. Enforce authorization checks before returning devices: verify the calling principal has permission to access the organization or member resources (e.g., check currentUser.organizationId, roles/permissions, or policies).
  3. Avoid returning internal error.message to callers. Log detailed error information internally (with correlation id) and return a generic error message to clients (e.g., 'Failed to retrieve devices').
  4. Reduce sensitive logging: avoid logging raw organizationId/memberId in production logs or redact them; use non-sensitive identifiers or hashed IDs for traceability.
  5. Validate hostIds and responses from fleetService before passing them to downstream calls (ensure they are numeric array, handle unexpected shapes/empty values).
  6. Handle and classify external API errors (fleetService) separately; implement retries/backoff and fail-safe behavior so upstream failures don't leak internal details.

🟡 apps/api/src/devices/dto/device-responses.dto.ts (MEDIUM Risk)

# Issue Risk Level
1 Sensitive fields (hardware_serial, public_ip, computer_name) exposed without controls MEDIUM

Recommendations:

  1. Apply field-level access control: redact or omit sensitive fields (hardware_serial, public_ip, computer_name, etc.) for users/roles that do not need them.
  2. Enforce authorization checks at the controller/service layer before returning DeviceResponseDto objects (e.g., transform/serialize responses based on user permissions).
  3. Provide explicit DTO variants for different audiences (internal vs. external) or use class-transformer to exclude sensitive properties by default.
  4. Log and audit access to endpoints that return sensitive fields and add tests to ensure sensitive fields are not returned to unauthorized roles.
  5. Document data minimization requirements and ensure retention/transport protections (encryption in transit, TLS, and appropriate logging redaction).
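
As a sketch of recommendations 1–3, assuming class-transformer is available; the exposed field names are illustrative, not the actual Fleet response schema:

```ts
// Sketch: a response class that exposes only non-sensitive fields by default.
import { Exclude, Expose, plainToInstance } from 'class-transformer';

@Exclude() // everything is hidden unless explicitly exposed
export class PublicDeviceDto {
  @Expose() id!: number;
  @Expose() hostname!: string;
  @Expose() os_version!: string;
  @Expose() status!: string;

  // hardware_serial, public_ip, primary_mac, author_email are intentionally not exposed.
}

export function toPublicDevice(raw: Record<string, unknown>): PublicDeviceDto {
  // excludeExtraneousValues drops every property without an @Expose decorator.
  return plainToInstance(PublicDeviceDto, raw, { excludeExtraneousValues: true });
}
```

An internal DTO variant can expose the sensitive fields for roles that genuinely need them, keeping the public shape minimal by default.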

🟢 apps/api/src/framework-editor/task-template/dto/task-template-response.dto.ts (LOW Risk)

# Issue Risk Level
1 Response exposes internal timestamps (createdAt/updatedAt) LOW

Recommendations:

  1. Remove or redact createdAt/updatedAt from the public response if they are internal metadata not needed by consumers.
  2. If timestamps must be returned, explicitly serialize them (e.g., to ISO strings) and document the fields' semantics and retention.
  3. Use class-transformer decorators (e.g., @Expose, @Exclude, @Transform) to control which properties are serialized and how dates are formatted.
  4. Audit endpoints to ensure only request DTOs receive class-validator decorators; add @IsString/@IsEnum/@IsDate to incoming/request DTOs and enable NestJS ValidationPipe for runtime validation of client input.

💡 Recommendations

View 3 recommendation(s)
  1. Remediate OSV CVEs: upgrade ai to >=5.0.52 (fixes GHSA-rwvc-j5jr-mgvh) and upgrade xlsx from 0.18.5 to a non‑vulnerable release that addresses GHSA-4r6h-8v6p-xvw6 and GHSA-5pgg-2g8v-p4x9; run a package audit and test the app after upgrades.
  2. Remove the hardcoded credentials in SELF_HOSTING.md (the sample DATABASE_URL containing user:pass). Replace it with an explicit placeholder (e.g., postgresql://<user>:<password>@<host>:<port>/<database>) and ensure no real credentials remain in tracked files.
  3. Fix concrete injection points in code: validate/whitelist orgId and employeeId and escape/templating-encode them before embedding into generated scripts (apps/api/src/device-agent/device-agent.service.ts); sanitize filenames (strip CR/LF/control chars and quotes) before using them in Content-Disposition headers (apps/api/src/device-agent/device-agent.controller.ts); and parameterize DB updates/queries and include organizationId in the single DB WHERE clause to prevent SQL/NoSQL injection/TOCTOU (apps/api/src/context/context.service.ts).

Powered by Comp AI - AI that handles compliance for you. Reviewed Nov 17, 2025

* refactor(security-questionnaire): reorganize imports and update header text

* chore(security-questionnaire): enhance auto-answer button and add error handling for unanswered questions

---------

Co-authored-by: Mariano Fuentes <marfuen98@gmail.com>
@vercel vercel bot temporarily deployed to staging – portal November 17, 2025 23:06 Inactive
@Marfuen Marfuen merged commit 68f0401 into release Nov 17, 2025
10 of 11 checks passed
@claudfuen
Contributor

🎉 This PR is included in version 1.59.0 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀

