
Conversation

@coodos (Contributor) commented Jan 30, 2026

Description of change

Backend (file-manager-api)

  1. Added archiver dependency to package.json for streaming zip generation
  2. New service methods in FileService.ts:
  • getFileMetadataById() - get file info without loading data blob
  • getFileDataStream() - stream file data as a Readable
  • getFilesMetadataByIds() - batch validate file access
  3. New endpoint POST /api/files/download-zip (a minimal sketch follows this list):
  • Accepts { files: [{ id, path }] } or { fileIds: [...] }
  • Uses archiver with store: true (no compression) for minimal CPU
  • Streams files one-at-a-time from DB → archive → response
  • Preserves folder structure via path parameter
  • Handles duplicate filenames automatically
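
A minimal sketch of such a handler, assuming an Express-style controller; the fileService surface, req.userId, and the direct pipe to the response are simplifications of the description above, not the PR's exact code:

```typescript
// Illustrative sketch only; fileService and the auth field are assumed stand-ins.
import archiver from "archiver";
import type { Readable } from "stream";
import type { Request, Response } from "express";

declare const fileService: {
    getFileDataStream(id: string, userId: string): Promise<{ stream: Readable; name: string }>;
};

export async function downloadFilesAsZip(req: Request & { userId: string }, res: Response) {
    const { files } = req.body as { files: { id: string; path?: string }[] };

    res.setHeader("Content-Type", "application/zip");
    res.setHeader("Content-Disposition", 'attachment; filename="files.zip"');

    // store: true disables compression, keeping CPU cost minimal.
    const archive = archiver("zip", { store: true });
    archive.pipe(res);

    for (const entry of files) {
        // Stream each file's data out of the DB and straight into the archive.
        const { stream, name } = await fileService.getFileDataStream(entry.id, req.userId);
        archive.append(stream, { name: entry.path ? `${entry.path}/${name}` : name });
    }

    await archive.finalize();
}
```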

Frontend (file-manager)

  1. Removed jszip dependency (no longer needed)
  2. Simplified download logic (see the sketch after this list):
  • Still scans folders to gather file IDs + paths
  • Sends single POST to /api/files/download-zip with all files
  • Server creates zip and streams it back
  • Browser receives streaming download
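
Conceptually, the client path now reduces to a single request, roughly as sketched here (API_BASE_URL and cookie-based auth are assumed from the existing app; this is not the exact component code):

```typescript
// Illustrative sketch; API_BASE_URL is assumed to come from the app's existing config.
declare const API_BASE_URL: string;

async function downloadSelectionAsZip(files: { id: string; path: string }[]): Promise<void> {
    const response = await fetch(`${API_BASE_URL}/api/files/download-zip`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        credentials: "include",
        body: JSON.stringify({ files }),
    });
    if (!response.ok) throw new Error(`Download failed: ${response.status}`);

    // Simple path: buffer the archive, then hand it to the browser as a native download.
    const blob = await response.blob();
    const url = URL.createObjectURL(blob);
    const a = document.createElement("a");
    a.href = url;
    a.download = "files.zip";
    a.click();
    URL.revokeObjectURL(url);
}
```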

Issue Number

closes #755

Type of change

  • Fix (a change which fixes an issue)
  • Docs (changes to the documentation)

How the change has been tested

Change checklist

  • I have ensured that the CI Checks pass locally
  • I have removed any unnecessary logic
  • My code is well documented
  • I have signed my commits
  • My code follows the pattern of the application
  • I have self reviewed my code

Summary by CodeRabbit

  • New Features
    • Server-side ZIP download: Download multiple files and folders as a single ZIP archive with compression handled securely on the server for improved performance and reliability.
    • Simplified download workflow: Enhanced multi-file download experience with streamlined UI and improved stability.


coderabbitai bot (Contributor) commented Jan 30, 2026

Warning

Rate limit exceeded

@coodos has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 9 minutes and 14 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough

The PR migrates multi-file downloads from client-side ZIP generation using JSZip to server-side compression using the archiver library. A new API endpoint creates ZIP archives on the server, streams them to clients, and the client-side implementation is replaced with a simple server call.

Changes

Cohort / File(s) and Summary:

  • Dependencies (platforms/file-manager-api/package.json, platforms/file-manager/package.json):
    Added archiver ^7.0.1 and @types/archiver ^6.0.3 to file-manager-api; removed jszip from file-manager.
  • Server-side API (platforms/file-manager-api/src/controllers/FileController.ts, platforms/file-manager-api/src/index.ts):
    New downloadFilesAsZip endpoint creates ZIP archives with authentication/authorization checks, supports two input formats, enforces a 500-file limit, handles filename conflicts, and streams with cleanup.
  • Service Layer (platforms/file-manager-api/src/services/FileService.ts):
    Added three methods: getFileMetadataById, getFileDataStream, getFilesMetadataByIds for file metadata and stream retrieval with access validation.
  • Client-side UI (platforms/file-manager/src/routes/(protected)/files/+page.svelte):
    Replaced client-side JSZip-based multi-file download with a server-side POST request; removed progress tracking, modal UI, and per-file orchestration logic.

Sequence Diagram

sequenceDiagram
    actor Client
    participant UI as File Manager UI
    participant API as FileController
    participant Service as FileService
    participant DB as Database
    participant FS as Filesystem

    Client->>UI: Select files, click download
    UI->>API: POST /api/files/download-zip<br/>(files with id & path)
    
    API->>API: Validate authToken & input
    
    loop For each file
        API->>Service: getFileMetadataById(id, userId)
        Service->>DB: Query metadata + access<br/>(owner/direct/folder)
        DB-->>Service: Return metadata
        Service-->>API: Return metadata (no data)
    end
    
    API->>FS: Create temp ZIP file
    
    loop Archive each file
        API->>Service: getFileDataStream(id, userId)
        Service->>DB: Query file data column
        DB-->>Service: Return Readable stream
        Service-->>API: Return stream + metadata
        API->>FS: Pipe stream to ZIP archive
    end
    
    API->>FS: Finalize ZIP
    API->>Client: Stream ZIP response
    Client->>UI: Browser native download
    
    API->>FS: Delete temp ZIP (cleanup)
    FS-->>API: Acknowledged

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 No more client-side huffing and puffing,
Files zip on the server, no CPU buffing!
Stream down the bunny hole, neat and so fast,
Server compression at last, at last! 📦✨

🚥 Pre-merge checks: 5 passed

  • Title check: ✅ Passed. The title accurately summarizes the main change: moving from client-side to server-side ZIP generation to improve download speeds, which is the primary objective of this PR.
  • Description check: ✅ Passed. The description covers backend and frontend changes, references the linked issue (#755), specifies the type of change (Fix), and includes a completed checklist meeting most template requirements.
  • Linked Issues check: ✅ Passed. The PR fully addresses issue #755 by implementing server-side ZIP generation and streaming, eliminating client-side archiving, and reducing memory/CPU usage to improve download speeds.
  • Out of Scope Changes check: ✅ Passed. All changes are directly aligned with the linked issue requirements: backend ZIP streaming, frontend simplification, dependency updates, and no unrelated modifications detected.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.



coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
platforms/file-manager-api/package.json (1)

16-33: ⚠️ Potential issue | 🟡 Minor

Upgrade @types/archiver to match archiver major version.
archiver 7.0.1 does not ship its own TypeScript types and requires @types/archiver@7.x. The current @types/archiver@6.0.3 is incompatible with the archiver 7.x API surface and will cause type errors. Upgrade to @types/archiver@7.0.0 or newer.

🤖 Fix all issues with AI agents
In `platforms/file-manager-api/src/controllers/FileController.ts`:
- Around lines 674-684: The current sanitizePath implementation allows backslashes and drive letters and doesn't properly resolve "." / ".." segments. Update sanitizePath to first normalize separators (convert backslashes to '/'), strip Windows drive or UNC prefixes and any leading slashes, then split the path and iteratively resolve segments, ignoring empty and "." segments and popping on ".." while preventing pops beyond the root; finally rejoin with '/' and, if any attempt to escape the root was detected, return an empty string or throw so zip entries cannot write outside the target directory.
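
One possible shape for such a sanitizer, as an illustrative sketch rather than the PR's code:

```typescript
// Normalize an untrusted zip-entry path; returns "" if the path tries to escape the root.
function sanitizePath(input: string): string {
    const normalized = input
        .replace(/\\/g, "/")        // unify separators
        .replace(/^[a-zA-Z]:/, "")  // drop Windows drive letters
        .replace(/^\/+/, "");       // drop leading slashes (also covers UNC-style prefixes)

    const segments: string[] = [];
    for (const segment of normalized.split("/")) {
        if (segment === "" || segment === ".") continue;
        if (segment === "..") {
            if (segments.length === 0) return ""; // attempted escape above root: reject
            segments.pop();
            continue;
        }
        segments.push(segment);
    }
    return segments.join("/");
}
```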

In `platforms/file-manager/src/routes/(protected)/files/+page.svelte`:
- Around line 1173-1176: The downloadedFiles mapping currently sets size: 0
which shows incorrect sizes; update the map over allFilesToDownload (the place
constructing downloadedFiles) to use the actual file size (e.g., size: file.size
or the property name used on items in allFilesToDownload) instead of 0 so each
item (constructed from displayName and path) shows the correct size in the UI.
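
For instance, with a hypothetical entry shape based on the names in the comment above:

```typescript
// Hypothetical shape; the point is to carry the real size through instead of hard-coding 0.
interface DownloadEntry {
    displayName: string;
    path: string;
    size: number;
}

function toDownloadedFiles(allFilesToDownload: DownloadEntry[]) {
    return allFilesToDownload.map((file) => ({
        name: file.displayName,
        path: file.path,
        size: file.size, // previously hard-coded to 0
    }));
}
```
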
🧹 Nitpick comments (1)
platforms/file-manager-api/src/services/FileService.ts (1)

439-447: Avoid sequential N+1 metadata fetches for bulk downloads.
With a 500-file cap, the sequential await loop can add noticeable latency. Consider batching (single query with In(ids) + access filtering) or controlled parallelism to reduce wall time.
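
As an illustration, assuming a TypeORM-style repository (the entity shape and access rule here are placeholders):

```typescript
import { In, type Repository } from "typeorm";

// Sketch of a single batched lookup replacing N sequential awaits.
async function getFilesMetadataBatch(
    repo: Repository<any>,
    ids: string[],
    userId: string,
): Promise<any[]> {
    const rows = await repo.find({
        where: { id: In(ids) },
        select: ["id", "name", "size", "mimeType", "ownerId"],
    });
    // Owner check shown for brevity; direct-share and folder-share checks would still apply per row.
    return rows.filter((row) => row.ownerId === userId);
}
```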

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `platforms/file-manager-api/src/controllers/FileController.ts`:
- Around line 612-626: The request body parsing currently builds fileEntries
from files or fileIds without validating ids; add a validation pass in the
FileController handler to ensure every entry has a non-empty string id before
calling FileService (e.g., before invoking getFilesMetadataByIds). If any id is
missing, empty, or not a string, return res.status(400).json({ error: "Invalid
file id in request" }). Implement this check after constructing fileEntries (or
validate while mapping) and reference the variables fileEntries, files, fileIds
and the downstream call getFilesMetadataByIds to locate where to insert the
guard.
- Around line 674-754: The filename from fileData.name is not sanitized and can
contain path separators or traversal segments leading to zip‑slip; inside the
loop over fileEntries, before building fullPath use a safe basename sanitizer
(e.g., a new sanitizeFilename function) to strip drive letters, convert
backslashes to forward slashes, remove any leading slashes, collapse/remove "."
and ".." segments, and remove or replace path separator characters and other
dangerous characters from fileData.name (refer to symbols: fileData.name,
baseName, sanitizePath, fullPath, usedPaths, archive.append); then use that
sanitized base name when composing fullPath, when updating usedPaths, and when
calling archive.append.
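
A sketch of such a base-name sanitizer (illustrative only):

```typescript
// Reduce an untrusted file name to a safe base name for an archive entry.
function sanitizeFilename(name: string): string {
    const base = name
        .replace(/\\/g, "/")        // unify separators
        .replace(/^[a-zA-Z]:/, "")  // drop drive letters
        .split("/")                 // discard any directory components
        .filter((seg) => seg !== "" && seg !== "." && seg !== "..")
        .pop();
    // Replace control characters and other characters that are risky in file names.
    return (base ?? "file").replace(/[<>:"|?*\u0000-\u001f]/g, "_");
}
```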

In `platforms/file-manager/src/routes/(protected)/files/+page.svelte`:
- Around line 1193-1216: The code currently buffers the entire ZIP via await
response.blob(); replace this with streaming the response body to disk using
response.body.getReader() and a WritableStream from the File System Access API
(or StreamSaver.js fallback) so you do not load the whole archive into memory;
implement a pump that reads chunks from response.body.getReader(), writes them
to the writable (writable.write(chunk) or writer.write(chunk)), updates
downloadProgress.overallProgress and downloadProgress.currentFile as chunks
arrive, and finally closes the writer; reference response.blob(),
response.body.getReader(), downloadProgress, filesPayload and API_BASE_URL when
locating the change and ensure you handle the File System Access API
availability (Chromium-first) and fallback logic before writing.
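
One way this could look with the File System Access API, falling back to the current blob path where it is unavailable (sketch; progress wiring omitted):

```typescript
// Stream the response body straight to disk where the File System Access API exists (Chromium-first).
async function saveZipStream(response: Response): Promise<void> {
    if (!("showSaveFilePicker" in window) || !response.body) {
        // Fallback: buffer in memory, as the current implementation does.
        const blob = await response.blob();
        const url = URL.createObjectURL(blob);
        const a = document.createElement("a");
        a.href = url;
        a.download = "files.zip";
        a.click();
        URL.revokeObjectURL(url);
        return;
    }

    const handle = await (window as any).showSaveFilePicker({ suggestedName: "files.zip" });
    const writable = await handle.createWritable();
    const reader = response.body.getReader();
    for (;;) {
        const { done, value } = await reader.read();
        if (done) break;
        await writable.write(value); // progress updates (bytes written so far) could hook in here
    }
    await writable.close();
}
```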

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `.gitignore`:
- Around line 52-54: Add robust cleanup for the temp ZIP directory
(platforms/file-manager-api/temp/) by running cleanupOldZips() at service
startup, scheduling it periodically with setInterval() (e.g., hourly) to handle
periods without downloads, and replacing synchronous
fs.readdirSync()/fs.unlinkSync() calls inside cleanupOldZips() with non-blocking
async fs.promises APIs or parallelized async operations to avoid blocking the
event loop; also add simple disk-usage monitoring/alerting (e.g., check
directory size in the same periodic job and emit a warning/error via your
existing logger/metrics pipeline) so accumulation triggers an alert before
causing outages.
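
For example, assuming an async cleanupOldZips along the lines of the refactor suggested later in this review:

```typescript
// Run cleanup at startup and then hourly, independent of download traffic (illustrative).
const CLEANUP_INTERVAL_MS = 60 * 60 * 1000;

function scheduleZipCleanup(cleanupOldZips: () => Promise<void>): void {
    cleanupOldZips().catch((err) => console.error("Startup zip cleanup failed:", err));
    setInterval(() => {
        cleanupOldZips().catch((err) => console.error("Periodic zip cleanup failed:", err));
    }, CLEANUP_INTERVAL_MS);
}
```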

In `platforms/file-manager-api/src/controllers/FileController.ts`:
- Around lines 691-695: The ZIP filename generation (timestamp, zipFilename, zipPath in FileController) can collide for concurrent requests because it only includes seconds. Append a per-request unique suffix (e.g., Date.now() milliseconds, process.pid, or a short random/UUID string) to the timestamp so each zipFilename/zipPath is unique, and ensure any consumers that reference zipPath use the new name. A combined sketch follows the next comment.
- Around line 696-731: The write stream created as output via
fs.createWriteStream(zipPath) must have an 'error' handler attached immediately
after creation to prevent unhandled 'error' events (attach before calling
archive.pipe(output)); add a synchronous output.on('error', ...) that cleans up
the archive/file and responds if possible. Likewise, ensure the read stream used
to send the file to the client (fileStream) has its own 'error' handler (at the
location referenced around line 878) to handle read errors, clean up the zip,
and avoid crashing the process; reference the output variable,
createWriteStream, archive.pipe(output), and fileStream when adding these
handlers.
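
Sketching the two comments above together: a collision-resistant temp name plus error handlers attached before piping (illustrative, not the PR's exact code):

```typescript
import crypto from "crypto";
import fs from "fs";
import path from "path";
import archiver from "archiver";
import type { Response } from "express";

// Build a unique per-request zip path and wire up error handling before piping.
function createZipOutput(zipTempDir: string, res: Response) {
    // Timestamp + pid + random suffix keeps concurrent requests from colliding.
    const zipFilename = `files-${Date.now()}-${process.pid}-${crypto.randomUUID()}.zip`;
    const zipPath = path.join(zipTempDir, zipFilename);

    const archive = archiver("zip", { store: true });
    const output = fs.createWriteStream(zipPath);

    // Attach the error handler before archive.pipe(output) so failures are never unhandled.
    output.on("error", (err) => {
        console.error("Error writing zip file:", err);
        archive.abort();
        fs.unlink(zipPath, () => {});
        if (!res.headersSent) {
            res.status(500).json({ error: "Failed to create zip file" });
        }
    });

    archive.pipe(output);
    return { archive, output, zipPath, zipFilename };
}
```
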
🧹 Nitpick comments (3)
platforms/file-manager/src/routes/(protected)/files/+page.svelte (1)

1096-1106: Mirror the server-side 500-file cap on the client.
Pre-empt oversize selections so users get immediate feedback instead of a server error.

🧩 Suggested guard
             if (allFilesToDownload.length === 0) {
                 toast.info("No files found in selected items");
                 clearSelection();
                 return;
             }
+
+            const MAX_DOWNLOAD_FILES = 500;
+            if (allFilesToDownload.length > MAX_DOWNLOAD_FILES) {
+                toast.error(`You can download up to ${MAX_DOWNLOAD_FILES} files at once.`);
+                clearSelection();
+                return;
+            }
platforms/file-manager-api/src/controllers/FileController.ts (2)

34-57: Consider non-blocking cleanup to avoid event-loop stalls.
cleanupOldZips uses sync fs calls and is invoked on every download; on large directories this can block the request thread. Suggest converting to async fs.promises and awaiting it (or firing in the background).

♻️ Possible async refactor
-    private cleanupOldZips() {
+    private async cleanupOldZips() {
         try {
             const now = Date.now();
             const TTL = 24 * 60 * 60 * 1000; // 24 hours
-
-            const files = fs.readdirSync(this.ZIP_TEMP_DIR);
+            const files = await fs.promises.readdir(this.ZIP_TEMP_DIR);
             for (const file of files) {
                 if (!file.startsWith('files-') || !file.endsWith('.zip')) continue;
                 
                 const filePath = path.join(this.ZIP_TEMP_DIR, file);
-                const stats = fs.statSync(filePath);
+                const stats = await fs.promises.stat(filePath);
                 
                 if (now - stats.mtimeMs > TTL) {
-                    fs.unlinkSync(filePath);
+                    await fs.promises.unlink(filePath);
                     console.log(`Cleaned up old zip: ${file}`);
                 }
             }
         } catch (error) {
             console.error('Error cleaning up old zips:', error);
         }
     }
-            this.cleanupOldZips();
+            await this.cleanupOldZips();

Also applies to: 639-640


873-887: Clean up generated ZIP after response completes.
Files remain on disk for up to 24h; for large downloads or high volume this can accumulate quickly. Consider deleting the zip after streaming finishes (with a safe fallback on errors).

♻️ Possible cleanup hook
                 const fileStream = fs.createReadStream(zipPath);
                 fileStream.pipe(res);
+                const cleanup = () => {
+                    fs.unlink(zipPath, () => {});
+                };
+                res.on('finish', cleanup);
+                res.on('close', cleanup);
                 
                 fileStream.on('error', (err) => {
                     console.error('Error streaming zip file:', err);
                     if (!res.headersSent) {
                         res.status(500).json({ error: 'Failed to send zip file' });
                     }
                 });

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `platforms/file-manager-api/src/controllers/FileController.ts`:
- Around lines 844-853: The zip file can be deleted while it is still being piped to the response, causing incomplete downloads. Wrap the piping in a Promise that resolves on the appropriate completion event (e.g., fileStream.on('end'/'close') or res.on('finish')) and rejects on 'error', then await that Promise before the finally block deletes zipPath. Update the logic around fileStream.pipe(res) (and the similar block at lines ~876-883) to await the stream-completion Promise, handle stream errors by sending a 500 if headers aren't sent, and only remove zipPath after the Promise resolves. A combined sketch follows the next comment.
- Around line 834-837: Attach terminal event handlers to the output stream
before calling archive.finalize() to avoid missing fast-close events;
specifically, move the promise that listens for output.on('close'/'error') (the
block referencing output and the Promise) so it is created and its handlers are
attached prior to calling archive.finalize(), or replace that manual listener
logic with Node's stream.finished(output, callback) (importing finished from
'stream' or using util.promisify) so the promise reliably resolves/rejects when
the output stream ends; ensure this change is applied around the output and
archive.finalize() usage in FileController (where output and archive.finalize()
are used).
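
A combined sketch for the two comments above, using stream/promises so the completion ordering is explicit (illustrative):

```typescript
import fs from "fs";
import { finished, pipeline } from "stream/promises";
import type { Response } from "express";

// Wait for the archive's output stream to close, stream the zip to the client,
// and only delete the temp file once the response has finished (or failed).
async function finishAndSendZip(
    archive: { finalize(): Promise<void> }, // structural type; archiver's Archiver satisfies it
    output: NodeJS.WritableStream,
    zipPath: string,
    res: Response,
): Promise<void> {
    const outputDone = finished(output); // listeners attached before finalize()
    await archive.finalize();
    await outputDone;                    // the zip on disk is now complete

    try {
        await pipeline(fs.createReadStream(zipPath), res); // resolves when the response finishes
    } catch (err) {
        console.error("Error streaming zip file:", err);
        if (!res.headersSent) {
            res.status(500).json({ error: "Failed to send zip file" });
        }
    } finally {
        await fs.promises.unlink(zipPath).catch(() => {}); // cleanup after streaming completes
    }
}
```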

xPathin (Contributor) left a comment

LGTM 🚀

@coodos coodos merged commit 3641500 into main Jan 30, 2026
4 checks passed
@coodos coodos deleted the feat/fix-download-speeds branch January 30, 2026 17:42


Development

Successfully merging this pull request may close these issues.

[feature] File Manager, Server side compression & Streaming
