feat: implement compression app #180
mykhailokurochkin wants to merge 3 commits into mate-academy:master from
Conversation
mateacademy-ai-mentor
left a comment
Review Summary
Decision: REQUEST_CHANGES — there are critical blockers that prevent meeting the core requirements.
Why: While routing and zlib usage are on the right track, several implementation choices violate the streaming/upload requirements and will break functionality and tests.
Critical issues to fix (blockers):
- src/index.html
  - Line 9: Missing `enctype="multipart/form-data"` on the `<form>`. Without this, the browser won’t send a real file upload. Add: `<form action="/compress" method="POST" enctype="multipart/form-data">`.
  - Line 12: File input is named `input` but must be exactly `file`. Change to: `<input type="file" name="file" required />`.
- src/createServer.js — request body handling and multipart parsing
  - Lines 49–56 and 62–112: The server buffers the entire body and manually splits strings to parse multipart. This is fragile for binary data and violates the requirement to use Node.js streams end-to-end. Use a streaming multipart parser (e.g., Busboy) or stream-parse parts so you can obtain the file as a `Readable` and immediately pipe it: `fileStream -> zlibTransform -> res`.
- src/createServer.js — synchronous disk write
  - Lines 105–111: `fs.writeFileSync(savePath, fileBuffer)` blocks the event loop and contradicts the streaming requirement. Avoid writing the upload to disk; pipe the incoming file stream directly into the compression transform. If temporary storage is unavoidable, use async streams and clean up.
- src/createServer.js — incorrect output filename extension
  - Lines 137–155: The `Content-Disposition` currently appends `.${compressionType}` (e.g., `.gzip`). Map to required extensions: `gzip -> .gz`, `deflate -> .dfl`, `br -> .br`, and quote the filename: `attachment; filename="<original><ext>"`.
- src/createServer.js — wrong fallback field name
  - Lines 117–118: URL-encoded fallback reads `params.get('input')`. The requirement mandates the field name is `file`. Change to `params.get('file')` and continue to return 400 for missing/invalid input.
Minor improvements (non-blockers):
- Quote filenames in `Content-Disposition` to handle spaces safely.
- Optional: In index.html, change the label to “File to compress:” for clarity.
You’ve nailed the zlib transform selection and route structure—great work. Address the streaming upload and form mismatches, and you should meet all requirements. If you’d like, I can outline a small Busboy-based snippet to replace the current multipart logic.
✨ Please rate this review in the Platform's notification center. I will be grateful for your feedback! ✨
Important
Fix the issues, commit the changes, and then re-request my review using the Re-request review button in the Reviewers tab.
Tip
If you have any comments about this review, please leave a message in the Add a comment section below and mention me with the @mateacademy-ai-mentor tag.
I will read your message during my next review of your pull request. Please do not reply directly to my comments, as that feature is not supported yet.
Caution
Do not close the pull request until the review is complete. Closing it early can lead to unexpected behavior and disrupt the review process.
src/createServer.js
Outdated
```js
const formContent = [];

for await (const chunk of req) {
  formContent.push(chunk);
}

const bodyBuffer = Buffer.concat(formContent);
const contentType = req.headers['content-type'] || '';
```
Lines 49-56: The server accumulates the entire request body into memory (collecting chunks into formContent then Buffer.concat). The task requires using Node.js streams to process the uploaded file by streaming it through a zlib transform into the response. Buffering the whole request prevents streaming and is fragile for large files. Consider using a streaming multipart parser (e.g., busboy) or piping the file part stream directly into zlib.
src/createServer.js
Outdated
```js
if (boundaryMatch) {
  const boundary = `--${boundaryMatch[1]}`;
  const bodyStr = bodyBuffer.toString('binary');

  const parts = bodyStr
    .split(boundary)
    .map((s) => s.trimStart())
    .filter((s) => s && s !== '--');

  let originalFilename;
  let fileBuffer;

  for (const part of parts) {
    const headerEndIdx = part.indexOf('\r\n\r\n');

    if (headerEndIdx === -1) {
      continue;
    }

    const headersStr = part.slice(0, headerEndIdx);
    let contentStr = part.slice(headerEndIdx + 4);

    if (contentStr.endsWith('\r\n')) {
      contentStr = contentStr.slice(0, -2);
    }

    const cdLine =
      headersStr
        .split('\r\n')
        .find((h) => /^content-disposition:/i.test(h)) || '';
    const nameMatch = cdLine.match(/name="?([^";]+)"?/i);
    const filenameMatch = cdLine.match(/filename="?([^";]+)"?/i);

    const partName = nameMatch ? nameMatch[1] : undefined;

    if (filenameMatch) {
      originalFilename = filenameMatch[1];
      fileBuffer = Buffer.from(contentStr, 'binary');
    } else if (partName === 'compressionType') {
      compressionType = contentStr.trim();
    }
  }

  if (originalFilename && fileBuffer) {
    const savePath = path.join(__dirname, originalFilename);

    try {
      fs.writeFileSync(savePath, fileBuffer);
      input = originalFilename;
    } catch (e) {}
  }
```
Lines 62-112: Multipart parsing is implemented by converting the whole body to a binary string and splitting on the boundary. This approach is fragile for binary data (it treats binary as text) and can corrupt uploads. It also requires buffering the full body. Instead, parse multipart in a streaming way and obtain the file as a readable stream to pipe into the compression transform.
src/createServer.js
Outdated
```js
if (originalFilename && fileBuffer) {
  const savePath = path.join(__dirname, originalFilename);

  try {
    fs.writeFileSync(savePath, fileBuffer);
    input = originalFilename;
  } catch (e) {}
```
Lines 105-111: The code writes the uploaded file to disk using fs.writeFileSync(savePath, fileBuffer). This is synchronous and blocks the event loop. The task expects streaming the upload into the compressor; avoid synchronous disk writes. If temporary storage is unavoidable, use async APIs and remove blocking calls.
src/createServer.js
Outdated
```js
input = params.get('input');
compressionType = params.get('compressionType');
```
Lines 117-118: For the URL-encoded fallback you read a field named 'input' (input = params.get('input')). The task requires the file input be named exactly 'file'. This mismatch will cause the server to treat the request as invalid. Update to expect 'file' and ensure the HTML form uses name="file".
src/createServer.js
Outdated
```js
switch (compressionType) {
  case 'gzip':
    zip = zlib.createGzip();
    break;
  case 'deflate':
    zip = zlib.createDeflate();
    break;
  case 'br':
    zip = zlib.createBrotliCompress();
    break;
}

res.statusCode = 200;
res.setHeader('Content-Type', 'application/octet-stream');

res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(filePath)}.${compressionType}`,
);
```
Lines 137-155: The switch selects the correct zlib stream for compression, which is good. However, the Content-Disposition header appends .${compressionType} to the filename (e.g., '.gzip'). The requirements require specific extensions: gzip -> '.gz', deflate -> '.dfl', br -> '.br'. Update the mapping to use those extensions and quote the filename in the header (e.g., filename="file.txt.gz").
src/createServer.js
Outdated
```js
pipeline(fileStream, zip, res, (err) => {
  if (err) {
    res.statusCode = 500;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Compression error');
  }
});
```
Lines 157-163: pipeline(fileStream, zip, res, (err) => { ... }) sends a 500 on error. This is acceptable for runtime compression errors. Ensure you do not attempt to write headers after the response has already been sent, and keep the form validation returning 400 for invalid inputs before reaching this pipeline stage.
src/index.html
Outdated
```html
<title>Compression App</title>
</head>
<body>
  <form action="/compress" method="POST">
```
Line 9: The `<form>` is missing the required encoding attribute for file uploads. Add `enctype="multipart/form-data"` to ensure the browser sends the file as multipart form data (required for file uploads). Without this the server will not receive the file stream correctly. See task requirements.
src/index.html
Outdated
```html
<form action="/compress" method="POST">
  <label>
    Text to compress:
    <input type="file" name="input" required />
```
Line 12: The file input is named "input" but the task requires the file field be named exactly "file". Change to `<input type="file" name="file" required />` so the server and tests can find the uploaded file.
src/index.html
Outdated
```html
<body>
  <form action="/compress" method="POST">
    <label>
      Text to compress:
```
Line 11: The label text reads "Text to compress:" while the control is a file input. Consider changing the label to "File to compress:" for clarity (this is a usability suggestion, not required for tests).
src/index.html
Outdated
```html
<select name="compressionType" id="compressionType">
  <option value="gzip">gzip</option>
  <option value="deflate">deflate</option>
  <option value="br">br</option>
</select>
```
Lines 16-20: The select element is correctly named "compressionType" and includes the required options (gzip, deflate, br). This matches the task requirements — keep this as-is.
mateacademy-ai-mentor
left a comment
Review Summary
Decision: REQUEST_CHANGES — there are a few critical blockers that will cause incorrect behavior or failing tests.
Why:
- The returned filename extension is incorrect in both the multipart and urlencoded branches, violating the requirement to map `gzip → .gz`, `deflate → .dfl`, `br → .br`. This will break expected output and tests.
- The `Content-Disposition` filename is not quoted, which can break downloads for filenames with spaces/special characters and is part of the expected behavior.
- Stream error handling is incomplete: if a pipeline error occurs after headers are sent, the response may hang or be truncated without closing the connection.
Critical issues to fix (with specifics and how):
1. Correct extension mapping (createServer.js):
   - Location: the lines where `Content-Disposition` is set in both the multipart and urlencoded branches (currently using `.${compressionType}`).
   - Why: The spec requires the exact extensions `.gz`, `.dfl`, `.br`.
   - How: Add an explicit map and use it consistently: `const extMap = { gzip: '.gz', deflate: '.dfl', br: '.br' };` then `const ext = extMap[compressionType];` and set the header with `attachment; filename="${path.basename(filename)}${ext}"`.
2. Quote the filename in Content-Disposition (createServer.js):
   - Location: the same header lines in both branches.
   - Why: Unquoted filenames with spaces/special characters can break parsing; tests often assert quoted filenames.
   - How: Use `attachment; filename="${path.basename(filename)}${ext}"` (ensure quoting in both branches).
3. Robust stream error handling (createServer.js):
   - Location: the pipeline callbacks in both the multipart and urlencoded branches.
   - Why: If an error occurs after headers are sent, the client can hang or receive a partial file without the connection closing.
   - How: In the `pipeline(..., (err) => { ... })` callback, when `err` exists:
     - If `res.headersSent`, call `res.destroy(err)` to close the connection; also consider `transform.destroy(err)` to propagate shutdown.
     - Optionally add `'error'` listeners on the file/readable and transform streams to log and destroy `res`.
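Factored into a small helper, that branching is easy to reason about and unit-test; the name `handlePipelineError` is illustrative, not code from the PR:

```js
// Illustrative helper: decide how to fail based on whether headers
// have already been flushed to the client.
function handlePipelineError(err, res) {
  if (!err) {
    return;
  }

  if (res.headersSent) {
    // Too late for a status code; close the socket so the client
    // doesn't hang waiting for a truncated download to finish.
    res.destroy(err);
  } else {
    res.statusCode = 500;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Compression error');
  }
}

// Usage inside the existing pipeline call:
// pipeline(fileStream, transform, res, (err) => handlePipelineError(err, res));
```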
Important security/behavior fix (recommended):
4. Remove or harden the application/x-www-form-urlencoded fallback (createServer.js):
- Location: the urlencoded branch reading `file` via `fs.createReadStream(path.join(__dirname, fileField))`.
- Why: This treats a client field as a server-side path, which is unsafe and not required by the task; tests expect multipart uploads.
- How: Remove this path-reading behavior, or strictly validate against a known-safe allowlist. Prefer supporting only multipart file uploads as specified.
Minor improvement (non-blocking):
- Instantiate Busboy with `new Busboy({ headers: req.headers })` for compatibility across versions.
You’ve done solid work on streaming and zlib usage—once you fix the filename mapping/quoting and tighten error handling (and remove the urlencoded file-path fallback), the implementation should fully meet the requirements. Keep going—you’re very close!
✨ Please rate this review in the Platform's notification center. I will be grateful for your feedback! ✨
Important
Fix the issues, commit the changes, and then re-request my review using the Re-request review button in the Reviewers tab.
Tip
If you have any comments about this review, please leave a message in the Add a comment section below and mention me with the @mateacademy-ai-mentor tag.
I will read your message during my next review of your pull request. Please do not reply directly to my comments, as that feature is not supported yet.
Caution
Do not close the pull request until the review is complete. Closing it early can lead to unexpected behavior and disrupt the review process.
src/createServer.js
Outdated
```js
res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(filename)}.${compressionType}`,
);
```
Incorrect output filename extension mapping. The code composes the filename as `filename.${compressionType}` (for example `file.txt.gzip`) in the multipart branch. The task requires explicit mapping: gzip -> .gz, deflate -> .dfl, br -> .br. Compute an ext value from compressionType and use that when setting Content-Disposition. Example: `const ext = compressionType === 'gzip' ? '.gz' : compressionType === 'deflate' ? '.dfl' : '.br';` Then set `attachment; filename="${path.basename(filename)}${ext}"`.
src/createServer.js
Outdated
```js
res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(fileField)}.${compressionType}`,
);
```
Same filename extension issue in the urlencoded branch: the header uses .${compressionType}. Use the same explicit extension mapping and quoted filename here as in the multipart branch so the returned filename matches required .gz, .dfl, .br.
src/createServer.js
Outdated
```js
res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(filename)}.${compressionType}`,
);
```
Content-Disposition value is not quoted. If a filename contains spaces or special characters, the header can break. Wrap the filename in quotes: `attachment; filename="${path.basename(name)}${ext}"`. Apply this change in both multipart and urlencoded branches.
src/createServer.js
Outdated
```js
pipeline(pass, transform, res, (err) => {
  if (err && !res.headersSent) {
    res.statusCode = 500;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Compression error');
  }
});
```
Pipeline error handling improvement. The current pipeline callback sets a 500 only if headers have not been sent. If an error happens after headers were already sent, the response may be left open or truncated. In the pipeline callback or via 'error' listeners on streams, ensure you res.destroy(err) or transform.destroy(err) to close the connection and avoid hanging clients. Consider also logging the error for debugging.
src/createServer.js
Outdated
```js
pipeline(read, transform, res, (err) => {
  if (err && !res.headersSent) {
    res.statusCode = 500;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Compression error');
  }
});
```
Same pipeline error handling issue in the urlencoded branch — ensure you destroy/end the response on stream errors so the client doesn’t hang or receive a partial file. Use res.destroy(err) or similar cleanup in the pipeline callback when err exists.
src/createServer.js
Outdated
```js
let filename = null;
let started = false;

const busboy = Busboy({ headers: req.headers });
```
Busboy instantiation style: you use const busboy = Busboy({ headers: req.headers });. It is more typical and safer across versions to call new Busboy({ headers: req.headers }). Consider changing to the new form to match common examples.
src/createServer.js
Outdated
```js
const filePath = path.join(__dirname, fileField);
const read = fs.createReadStream(filePath);
```
The urlencoded fallback treats the posted file field as a server-side file path and does fs.createReadStream(path.join(__dirname, fileField)). This is unnecessary for the upload flow required by the task and could expose server files. Recommend removing this fallback or validating it strictly. Tests generally expect multipart uploads; relying on a posted filename that maps to a disk file is unexpected.
src/index.js
Outdated
```js
console.log('Server started! 🚀');
console.log('Available at http://localhost:5700');
});
const server = createServer();
```
The createServer() call is correct and required — it constructs the server instance returned to index.js. No change needed here unless you want to pass configuration or perform additional setup before listening.
src/index.js
Outdated
```js
});
const server = createServer();

server.listen(PORT);
```
Optional: add a callback or logging when calling server.listen(PORT) to confirm the server started, for example ``server.listen(PORT, () => console.log(`Listening on ${PORT}`))``. This is useful during development but not required by the task.
mateacademy-ai-mentor
left a comment
Review Summary
Educational approval: You’ve got the core flow working — nice job using Node streams, zlib transforms, and multiparty to stream compression without buffering the whole file. That’s a solid foundation.
Critical issues to address next:
- src/createServer.js (SUPPORTED_TYPES): The extensions don’t match the spec. Requirement says gzip → `.gz`, deflate → `.dfl`, br → `.br`. Using `.gzip`/`.deflate` will fail filename checks; update the map accordingly.
- src/createServer.js (GET / and /index.html): You set `Content-Type: text/html` but return an empty body. The requirement expects an HTML upload form. Stream `public/index.html` or let your static file handler serve it by removing the special-case.
- src/createServer.js (POST /compress header validation): Explicitly check `req.headers['content-type']` starts with `multipart/form-data`; otherwise return 400. This makes behavior predictable per the checklist.
- src/createServer.js (originalFilename validation): Before using `path.basename(file.originalFilename)`, verify `file.originalFilename` is present; return 400 if missing to satisfy the checklist.
- src/createServer.js (Content-Disposition): Quote the filename and use the corrected extension: `attachment; filename="${compressedFilename}"` to handle spaces/special characters and match tests.
- src/createServer.js (pipeline error handling): Use 500 for unexpected compression errors, ensure temp file cleanup always runs, and if headers are already sent on error use `res.destroy(err)` to avoid hanging connections. Also add `'error'` listeners on `source` and `compress`, and rename `erro` → `err` for clarity.
Minor polish: consistent status codes/messages and optional multiparty instantiation with `{ headers: req.headers }`. You’re very close — tighten these details and you should pass the tests. Keep going; the streaming/compression pieces are well-done!
✨ Please rate this review in the Platform's notification center. I will be grateful for your feedback! ✨
```js
const SUPPORTED_TYPES = {
  gzip: { ext: '.gzip', method: zlib.createGzip },
  deflate: { ext: '.deflate', method: zlib.createDeflate },
  br: { ext: '.br', method: zlib.createBrotliCompress },
};
```
Incorrect extension strings — must match requirements
SUPPORTED_TYPES uses .gzip and .deflate but the task requires exact extensions .gz and .dfl (and .br for brotli). This will produce incorrect response filenames and fail tests. Update the mapping to exactly:
```js
gzip: { ext: '.gz', method: zlib.createGzip },
deflate: { ext: '.dfl', method: zlib.createDeflate },
br: { ext: '.br', method: zlib.createBrotliCompress },
```

(Requirement: compression mapping: gzip → .gz, deflate → .dfl, br → .br.)
```js
if (
  req.method === 'GET' &&
  (url.pathname === '/' || url.pathname === '/index.html')
) {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/html');

  res.end();

  return;
```
GET / and /index.html returns an empty page — serve the HTML form
You set Content-Type: text/html and immediately res.end() with no body. The task requires an HTML form for uploading. Either stream public/index.html here or remove this special-case so the static-serving code later serves index.html. As-is, GET / returns an empty body which does not meet the requirement.
```js
const form = new multiparty.Form();

form.parse(req, (err, fields, files) => {
```
No explicit Content-Type validation for multipart/form-data
The requirements mandate returning 400 when POST /compress is not sent as multipart/form-data. Currently you call form.parse(req, ...) and rely on multiparty to fail. Add an explicit check on req.headers['content-type'] (e.g., startsWith 'multipart/form-data') and return 400 early if it doesn't match, so behavior is explicit and predictable.
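A minimal version of that guard could look like this (the helper name `isMultipart` is an assumption for illustration, not code from the PR):

```js
// Illustrative guard: reject requests that are not multipart/form-data
// before handing them to the multipart parser.
function isMultipart(headers) {
  const contentType = headers['content-type'] || '';

  return contentType.startsWith('multipart/form-data');
}

// Usage in the handler, before form.parse(req, ...):
// if (!isMultipart(req.headers)) {
//   res.statusCode = 400;
//   res.end('Expected multipart/form-data');
//   return;
// }
```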
```js
const originalFilename = path.basename(file.originalFilename);
const compressedFilename = originalFilename + compression.ext;
```
Missing validation of originalFilename before using it
You call path.basename(file.originalFilename) without checking that file.originalFilename exists. The checklist requires treating missing original filename as invalid form and responding with 400. Add an explicit check and return 400 if file.originalFilename is falsy before constructing compressedFilename.
```js
res.writeHead(200, {
  'Content-Type': 'application/octet-stream',
  'Content-Disposition': `attachment; filename=${compressedFilename}`,
```
Content-Disposition filename must be quoted and use correct extension
The header is set as attachment; filename=${compressedFilename} (unquoted). Filenames containing spaces or special characters require quoting and tests commonly expect a quoted filename. Also ensure compressedFilename uses the corrected extension from SUPPORTED_TYPES. Example:
```js
'Content-Disposition': `attachment; filename="${compressedFilename}"`,
```

Apply this change when you correct the extension mapping.
```js
pipeline(source, compress, res, (erro) => {
  fs.unlink(file.path, () => {});

  if (erro) {
    res.writeHead(400);
    res.end('Compression error');
  }
});
```
Pipeline error handling is fragile — close response when headers already sent; use 500 for server errors
In the pipeline callback you attempt to send a 400 and res.end on error. If headers were already sent this will fail and may leave the client hanging. Recommendations:
- Use `if (res.headersSent) { return res.destroy(err); }` to close the connection when streaming has started.
- Use HTTP 500 for unexpected compression/server errors rather than 400.
- Ensure the temporary uploaded file is unlinked in all cases (success and error).
- Add `'error'` listeners on `source` and `compress` to destroy the response and streams on error.
- Rename `erro` to `err` for clarity.
Example pattern:
```js
pipeline(source, compress, res, (err) => {
  fs.unlink(file.path, () => {});

  if (err) {
    if (res.headersSent) {
      res.destroy(err);
    } else {
      res.writeHead(500);
      res.end('Compression error');
    }
  }
});
```

This ensures the connection does not hang and errors are signaled with appropriate status codes.
No description provided.