feat:(file-manager) fix download speeds #756
Conversation
📝 Walkthrough

The PR migrates multi-file downloads from client-side ZIP generation using JSZip to server-side compression using the archiver library. A new API endpoint creates ZIP archives on the server, streams them to clients, and the client-side implementation is replaced with a simple server call.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant UI as File Manager UI
    participant API as FileController
    participant Service as FileService
    participant DB as Database
    participant FS as Filesystem
    Client->>UI: Select files, click download
    UI->>API: POST /api/files/download-zip<br/>(files with id & path)
    API->>API: Validate authToken & input
    loop For each file
        API->>Service: getFileMetadataById(id, userId)
        Service->>DB: Query metadata + access<br/>(owner/direct/folder)
        DB-->>Service: Return metadata
        Service-->>API: Return metadata (no data)
    end
    API->>FS: Create temp ZIP file
    loop Archive each file
        API->>Service: getFileDataStream(id, userId)
        Service->>DB: Query file data column
        DB-->>Service: Return Readable stream
        Service-->>API: Return stream + metadata
        API->>FS: Pipe stream to ZIP archive
    end
    API->>FS: Finalize ZIP
    API->>Client: Stream ZIP response
    Client->>UI: Browser native download
    API->>FS: Delete temp ZIP (cleanup)
    FS-->>API: Acknowledged
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
platforms/file-manager-api/package.json (1)
16-33: ⚠️ Potential issue | 🟡 Minor: Upgrade `@types/archiver` to match the `archiver` major version.
`archiver` 7.0.1 does not ship its own TypeScript types and requires `@types/archiver@7.x`. The current `@types/archiver@6.0.3` is incompatible with the archiver 7.x API surface and will cause type errors. Upgrade to `@types/archiver@7.0.0` or newer.
🤖 Fix all issues with AI agents
In `@platforms/file-manager-api/src/controllers/FileController.ts`:
- Around line 674-684: The current sanitizePath implementation allows backslashes
and drive letters, and doesn't properly resolve "."/".." segments. Update
sanitizePath to first normalize separators (convert backslashes to '/'), strip
Windows drive or UNC prefixes and any leading slashes, then split the path and
iteratively resolve segments, ignoring empty and "." segments and popping on
".." while preventing pops beyond root. Finally, rejoin with '/'; if any attempt
to escape root was detected, return an empty string or throw so zip entries
cannot write outside the target directory.
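The steps above can be sketched as a small pure helper. This is a hypothetical implementation following the review's stated rules, not the project's actual `sanitizePath`:

```typescript
// Hypothetical hardened sanitizePath per the review note. Normalizes
// separators, strips Windows drive/UNC prefixes and leading slashes, then
// resolves "." and ".." segments without ever escaping the archive root.
function sanitizePath(input: string): string {
  let p = input.replace(/\\/g, "/");   // normalize separators
  p = p.replace(/^[a-zA-Z]:/, "");     // strip drive letter, e.g. "C:"
  p = p.replace(/^\/+/, "");           // strip leading slashes (covers UNC "//server")
  const out: string[] = [];
  for (const seg of p.split("/")) {
    if (seg === "" || seg === ".") continue;
    if (seg === "..") {
      if (out.length === 0) return ""; // attempted escape above root: reject
      out.pop();
      continue;
    }
    out.push(seg);
  }
  return out.join("/");
}
```

Rejecting (returning `""`) on an attempted root escape, rather than silently clamping, lets the caller skip or 400 the offending entry.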
In `@platforms/file-manager/src/routes/(protected)/files/+page.svelte`:
- Around line 1173-1176: The downloadedFiles mapping currently sets size: 0
which shows incorrect sizes; update the map over allFilesToDownload (the place
constructing downloadedFiles) to use the actual file size (e.g., size: file.size
or the property name used on items in allFilesToDownload) instead of 0 so each
item (constructed from displayName and path) shows the correct size in the UI.
🧹 Nitpick comments (1)
platforms/file-manager-api/src/services/FileService.ts (1)
439-447: Avoid sequential N+1 metadata fetches for bulk downloads.
With a 500-file cap, the sequential `await` loop can add noticeable latency. Consider batching (a single query with `In(ids)` + access filtering) or controlled parallelism to reduce wall time.
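The controlled-parallelism option can be sketched with a generic bounded-concurrency mapper; `mapWithConcurrency` is an invented helper name, and the fetcher callback stands in for `getFileMetadataById`:

```typescript
// Hypothetical bounded-concurrency map to replace the sequential await loop.
// Runs at most `limit` fetches in flight while preserving result order.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // claimed synchronously, so workers never collide
      results[i] = await fn(items[i]);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(limit, items.length) }, worker)
  );
  return results;
}
```

With a cap of, say, 10 concurrent metadata fetches, a 500-file request no longer pays 500 round-trips of serialized latency.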
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@platforms/file-manager-api/src/controllers/FileController.ts`:
- Around line 612-626: The request body parsing currently builds fileEntries
from files or fileIds without validating ids; add a validation pass in the
FileController handler to ensure every entry has a non-empty string id before
calling FileService (e.g., before invoking getFilesMetadataByIds). If any id is
missing, empty, or not a string, return res.status(400).json({ error: "Invalid
file id in request" }). Implement this check after constructing fileEntries (or
validate while mapping) and reference the variables fileEntries, files, fileIds
and the downstream call getFilesMetadataByIds to locate where to insert the
guard.
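The guard described above can be sketched as a pure predicate; `hasValidIds` and the `FileEntry` shape are invented for illustration:

```typescript
// Hypothetical request-body guard per the review note: every entry must
// carry a non-empty string id before FileService is called.
interface FileEntry {
  id?: unknown;
  path?: string;
}

function hasValidIds(entries: FileEntry[]): boolean {
  return entries.every(
    (e) => typeof e.id === "string" && e.id.trim().length > 0
  );
}
```

In the handler this would sit right after `fileEntries` is built, e.g. `if (!hasValidIds(fileEntries)) return res.status(400).json({ error: "Invalid file id in request" });` (sketch, not the actual controller code).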
- Around line 674-754: The filename from fileData.name is not sanitized and can
contain path separators or traversal segments leading to zip‑slip; inside the
loop over fileEntries, before building fullPath use a safe basename sanitizer
(e.g., a new sanitizeFilename function) to strip drive letters, convert
backslashes to forward slashes, remove any leading slashes, collapse/remove "."
and ".." segments, and remove or replace path separator characters and other
dangerous characters from fileData.name (refer to symbols: fileData.name,
baseName, sanitizePath, fullPath, usedPaths, archive.append); then use that
sanitized base name when composing fullPath, when updating usedPaths, and when
calling archive.append.
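A basename sanitizer matching that description might look like the following; `sanitizeFilename` is a hypothetical helper, not the project's code:

```typescript
// Hypothetical sanitizeFilename per the review note: reduce fileData.name to
// a safe base name with no drive letters, separators, or traversal segments.
function sanitizeFilename(name: string): string {
  // Normalize separators and strip any Windows drive prefix.
  const normalized = name.replace(/\\/g, "/").replace(/^[a-zA-Z]:/, "");
  // Keep only real segments, dropping "", "." and "..".
  const segments = normalized
    .split("/")
    .filter((s) => s !== "" && s !== "." && s !== "..");
  // Fall back to a placeholder if nothing survives.
  const base = segments.length > 0 ? segments[segments.length - 1] : "file";
  // Replace remaining dangerous characters with underscores.
  return base.replace(/[<>:"|?*\x00-\x1f]/g, "_");
}
```

The sanitized value would then be used when composing `fullPath`, updating `usedPaths`, and calling `archive.append`, as the note specifies.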
In `@platforms/file-manager/src/routes/(protected)/files/+page.svelte`:
- Around line 1193-1216: The code currently buffers the entire ZIP via await
response.blob(); replace this with streaming the response body to disk using
response.body.getReader() and a WritableStream from the File System Access API
(or StreamSaver.js fallback) so you do not load the whole archive into memory;
implement a pump that reads chunks from response.body.getReader(), writes them
to the writable (writable.write(chunk) or writer.write(chunk)), updates
downloadProgress.overallProgress and downloadProgress.currentFile as chunks
arrive, and finally closes the writer; reference response.blob(),
response.body.getReader(), downloadProgress, filesPayload and API_BASE_URL when
locating the change and ensure you handle the File System Access API
availability (Chromium-first) and fallback logic before writing.
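The pump described above can be sketched independently of the browser-only pieces (`showSaveFilePicker`, StreamSaver.js). `pumpToWriter` is a hypothetical helper in which `write` abstracts the writable sink and `onProgress` stands in for updating `downloadProgress`:

```typescript
// Hypothetical streaming pump per the review note: reads chunks from a
// reader (e.g. response.body.getReader()), writes each to a sink, and
// reports cumulative bytes so UI progress can update as chunks arrive.
async function pumpToWriter(
  reader: { read(): Promise<{ done: boolean; value?: Uint8Array }> },
  write: (chunk: Uint8Array) => Promise<void>,
  onProgress: (bytesSoFar: number) => void
): Promise<number> {
  let total = 0;
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break; // stream exhausted
    if (value) {
      await write(value); // never buffers the whole archive in memory
      total += value.length;
      onProgress(total);
    }
  }
  return total;
}
```

The caller would wire `write` to a `FileSystemWritableFileStream` when the File System Access API is available (Chromium-first) and fall back otherwise, then close the writer after the pump resolves.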
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In @.gitignore:
- Around line 52-54: Add robust cleanup for the temp ZIP directory
(platforms/file-manager-api/temp/) by running cleanupOldZips() at service
startup, scheduling it periodically with setInterval() (e.g., hourly) to handle
periods without downloads, and replacing synchronous
fs.readdirSync()/fs.unlinkSync() calls inside cleanupOldZips() with non-blocking
async fs.promises APIs or parallelized async operations to avoid blocking the
event loop; also add simple disk-usage monitoring/alerting (e.g., check
directory size in the same periodic job and emit a warning/error via your
existing logger/metrics pipeline) so accumulation triggers an alert before
causing outages.
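A non-blocking cleanup along those lines might look like this; the standalone `cleanupOldZips(dir, ttlMs)` signature is invented here (the real method lives on the controller and reads `this.ZIP_TEMP_DIR`):

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical async cleanup per the review note: fs.promises instead of
// the sync calls, parallel stat/unlink, safe to run at startup and on a
// periodic setInterval. Matches the 'files-*.zip' naming from the review.
async function cleanupOldZips(
  dir: string,
  ttlMs = 24 * 60 * 60 * 1000
): Promise<string[]> {
  const removed: string[] = [];
  const now = Date.now();
  let entries: string[];
  try {
    entries = await fs.promises.readdir(dir);
  } catch {
    return removed; // directory may not exist yet
  }
  await Promise.all(
    entries
      .filter((f) => f.startsWith("files-") && f.endsWith(".zip"))
      .map(async (f) => {
        const filePath = path.join(dir, f);
        try {
          const stats = await fs.promises.stat(filePath);
          if (now - stats.mtimeMs > ttlMs) {
            await fs.promises.unlink(filePath);
            removed.push(f);
          }
        } catch {
          // file may have been removed concurrently; ignore
        }
      })
  );
  return removed;
}
```

Usage sketch: run once at startup and then hourly, e.g. `cleanupOldZips(ZIP_TEMP_DIR); setInterval(() => cleanupOldZips(ZIP_TEMP_DIR), 60 * 60 * 1000);` with disk-usage checks added to the same periodic job.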
In `@platforms/file-manager-api/src/controllers/FileController.ts`:
- Around line 691-695: The ZIP filename generation (timestamp, zipFilename,
zipPath in FileController) can collide for concurrent requests because it only
includes seconds; modify the naming to append a per-request unique suffix (e.g.,
Date.now() milliseconds, process.pid, or a short random/UUID string) to the
timestamp so each zipFilename/zipPath is unique; update the code that builds
zipFilename to include this suffix (and ensure any consumers that reference
zipPath use the new name).
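Collision-proof naming can be as simple as appending a random suffix to the timestamp; `makeZipFilename` is a hypothetical stand-in for the controller's naming code:

```typescript
import * as crypto from "crypto";

// Hypothetical unique naming per the review note: ISO timestamp plus a
// per-request random suffix, so concurrent requests in the same second
// (or millisecond) can never share a zipFilename/zipPath.
function makeZipFilename(date = new Date()): string {
  const timestamp = date.toISOString().replace(/[:.]/g, "-");
  const suffix = crypto.randomBytes(4).toString("hex");
  return `files-${timestamp}-${suffix}.zip`;
}
```

Any consumer that references `zipPath` would simply pick up the new name, since the suffix is baked in at creation time.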
- Around line 696-731: The write stream created as output via
fs.createWriteStream(zipPath) must have an 'error' handler attached immediately
after creation to prevent unhandled 'error' events (attach before calling
archive.pipe(output)); add a synchronous output.on('error', ...) that cleans up
the archive/file and responds if possible. Likewise, ensure the read stream used
to send the file to the client (fileStream) has its own 'error' handler (at the
location referenced around line 878) to handle read errors, clean up the zip,
and avoid crashing the process; reference the output variable,
createWriteStream, archive.pipe(output), and fileStream when adding these
handlers.
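The write-side half of that fix can be sketched as follows; `openZipOutput` is an invented wrapper used only to show the ordering:

```typescript
import * as fs from "fs";

// Hypothetical sketch per the review note: attach 'error' immediately after
// createWriteStream and before any piping, so a failure (e.g. an unwritable
// zipPath) becomes a handled callback instead of an uncaught 'error' event
// that crashes the process.
function openZipOutput(
  zipPath: string,
  onError: (err: Error) => void
): fs.WriteStream {
  const output = fs.createWriteStream(zipPath);
  output.on("error", onError); // registered before archive.pipe(output)
  return output;
}
```

The same ordering applies to the read-side `fileStream` the note mentions (around line 878): attach its `'error'` handler before calling `fileStream.pipe(res)`, and have the handler clean up the zip and respond if headers have not been sent.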
🧹 Nitpick comments (3)
platforms/file-manager/src/routes/(protected)/files/+page.svelte (1)
1096-1106: Mirror the server-side 500-file cap on the client.
Pre-empt oversize selections so users get immediate feedback instead of a server error.

🧩 Suggested guard

```diff
  if (allFilesToDownload.length === 0) {
    toast.info("No files found in selected items");
    clearSelection();
    return;
  }
+
+ const MAX_DOWNLOAD_FILES = 500;
+ if (allFilesToDownload.length > MAX_DOWNLOAD_FILES) {
+   toast.error(`You can download up to ${MAX_DOWNLOAD_FILES} files at once.`);
+   clearSelection();
+   return;
+ }
```

platforms/file-manager-api/src/controllers/FileController.ts (2)
34-57: Consider non-blocking cleanup to avoid event-loop stalls.
`cleanupOldZips` uses sync fs calls and is invoked on every download; on large directories this can block the request thread. Suggest converting to async `fs.promises` and awaiting it (or firing in the background).

♻️ Possible async refactor

```diff
- private cleanupOldZips() {
+ private async cleanupOldZips() {
    try {
      const now = Date.now();
      const TTL = 24 * 60 * 60 * 1000; // 24 hours
-
-     const files = fs.readdirSync(this.ZIP_TEMP_DIR);
+     const files = await fs.promises.readdir(this.ZIP_TEMP_DIR);
      for (const file of files) {
        if (!file.startsWith('files-') || !file.endsWith('.zip')) continue;
        const filePath = path.join(this.ZIP_TEMP_DIR, file);
-       const stats = fs.statSync(filePath);
+       const stats = await fs.promises.stat(filePath);
        if (now - stats.mtimeMs > TTL) {
-         fs.unlinkSync(filePath);
+         await fs.promises.unlink(filePath);
          console.log(`Cleaned up old zip: ${file}`);
        }
      }
    } catch (error) {
      console.error('Error cleaning up old zips:', error);
    }
  }
```

```diff
- this.cleanupOldZips();
+ await this.cleanupOldZips();
```

Also applies to: 639-640
873-887: Clean up generated ZIP after response completes.
Files remain on disk for up to 24h; for large downloads or high volume this can accumulate quickly. Consider deleting the zip after streaming finishes (with a safe fallback on errors).

♻️ Possible cleanup hook

```diff
  const fileStream = fs.createReadStream(zipPath);
  fileStream.pipe(res);
+ const cleanup = () => {
+   fs.unlink(zipPath, () => {});
+ };
+ res.on('finish', cleanup);
+ res.on('close', cleanup);
  fileStream.on('error', (err) => {
    console.error('Error streaming zip file:', err);
    if (!res.headersSent) {
      res.status(500).json({ error: 'Failed to send zip file' });
    }
  });
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@platforms/file-manager-api/src/controllers/FileController.ts`:
- Around line 844-853: The zip file can be deleted while still being piped to
the response causing incomplete downloads; wrap the piping in a Promise that
resolves on the appropriate completion event (e.g., fileStream.on('end'/'close')
or res.on('finish')) and rejects on 'error', then await that Promise before
allowing the finally block to delete zipPath; update the logic around
fileStream.pipe(res) (and the similar block at lines ~876-883) to await the
stream completion Promise, handle stream errors to send a 500 if headers aren't
sent, and only remove zipPath after the Promise resolves.
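The await-before-delete pattern can be sketched with `stream/promises`; `streamThenCleanup` is an invented helper (in the controller it would wrap `fileStream.pipe(res)` and the `zipPath` deletion):

```typescript
import { pipeline } from "stream/promises";
import { PassThrough, Readable, Writable } from "stream";

// Hypothetical helper per the review note: await the full transfer before
// running cleanup, so the zip can never be deleted mid-download.
async function streamThenCleanup(
  source: Readable,
  dest: Writable,
  cleanup: () => Promise<void>
): Promise<void> {
  try {
    // pipeline resolves only after dest has finished writing everything
    // (and rejects on a stream error), so cleanup cannot race the download.
    await pipeline(source, dest);
  } finally {
    await cleanup();
  }
}
```

For a quick check, any `Readable`/`Writable` pair works as a stand-in for the file stream and the response (a `PassThrough` makes a convenient in-memory destination).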
- Around line 834-837: Attach terminal event handlers to the output stream
before calling archive.finalize() to avoid missing fast-close events;
specifically, move the promise that listens for output.on('close'/'error') (the
block referencing output and the Promise) so it is created and its handlers are
attached prior to calling archive.finalize(), or replace that manual listener
logic with Node's stream.finished(output, callback) (importing finished from
'stream' or using util.promisify) so the promise reliably resolves/rejects when
the output stream ends; ensure this change is applied around the output and
archive.finalize() usage in FileController (where output and archive.finalize()
are used).
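The listener-ordering point can be demonstrated with `stream.finished` on a plain write stream; `writeAllAndWait` is a stand-in demo in which `output.end()` plays the role of `archive.finalize()` closing the archiver pipe:

```typescript
import * as fs from "fs";
import { finished } from "stream/promises";

// Hypothetical ordering sketch per the review note: create the completion
// promise (which attaches the terminal listeners) BEFORE triggering the
// work that closes the stream, so a fast close can never be missed.
async function writeAllAndWait(filePath: string, data: string): Promise<void> {
  const output = fs.createWriteStream(filePath);
  const done = finished(output); // listeners registered first
  output.end(data);              // then trigger completion
  await done;                    // resolves once the file is fully flushed
}
```

Using `stream.finished` also replaces the manual `output.on('close'/'error')` promise wiring the note describes, with the same reliability guarantee.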
xPathin left a comment
LGTM 🚀
Description of change
Backend (file-manager-api)
Frontend (file-manager)
Issue Number
closes #755
Type of change
How the change has been tested
Change checklist
Summary by CodeRabbit