
feat: implement compression app #180

Open
mykhailokurochkin wants to merge 3 commits into mate-academy:master from mykhailokurochkin:develop

Conversation

@mykhailokurochkin

No description provided.

@mateacademy-ai-mentor mateacademy-ai-mentor left a comment


Review Summary

Decision: REQUEST_CHANGES — there are critical blockers that prevent meeting the core requirements.

Why: While routing and zlib usage are on the right track, several implementation choices violate the streaming/upload requirements and will break functionality and tests.

Critical issues to fix (blockers):

  1. src/index.html
    • Line 9: Missing enctype="multipart/form-data" on the <form>. Without this, the browser won’t send a real file upload. Add: <form action="/compress" method="POST" enctype="multipart/form-data">.
    • Line 12: File input is named input but must be exactly file. Change to: <input type="file" name="file" required />.
  2. src/createServer.js — request body handling and multipart parsing
    • Lines 49–56 and 62–112: The server buffers the entire body and manually splits strings to parse multipart. This is fragile for binary data and violates the requirement to use Node.js streams end-to-end. Use a streaming multipart parser (e.g., Busboy) or stream-parse parts so you can obtain the file as a Readable and immediately pipe it: fileStream -> zlibTransform -> res.
  3. src/createServer.js — synchronous disk write
    • Lines 105–111: fs.writeFileSync(savePath, fileBuffer) blocks the event loop and contradicts the streaming requirement. Avoid writing the upload to disk; pipe the incoming file stream directly into the compression transform. If temporary storage is unavoidable, use async streams and clean up.
  4. src/createServer.js — incorrect output filename extension
    • Lines 137–155: The Content-Disposition currently appends .${compressionType} (e.g., .gzip). Map to required extensions: gzip -> .gz, deflate -> .dfl, br -> .br, and quote the filename: attachment; filename="<original><ext>".
  5. src/createServer.js — wrong fallback field name
    • Lines 117–118: URL-encoded fallback reads params.get('input'). The requirement mandates the field name is file. Change to params.get('file') and continue to return 400 for missing/invalid input.

Minor improvements (non-blockers):

  • Quote filenames in Content-Disposition to handle spaces safely.
  • Optional: In index.html, change the label to “File to compress:” for clarity.

You’ve nailed the zlib transform selection and route structure—great work. Address the streaming upload and form mismatches, and you should meet all requirements. If you’d like, I can outline a small Busboy-based snippet to replace the current multipart logic.


✨ Please rate this review in the Platform's notification center [1]. I will be grateful for your feedback! ✨

Important

Fix the issues, commit the changes, and then re-request my review using the Re-request review button [2] in the Reviewers tab.

Tip

If you have any comments about this review, please leave a message in the Add a comment section below and mention me with the @mateacademy-ai-mentor tag [3].
I will read your message during my next review of your pull request. Please do not reply directly to my comments, as that feature is not supported yet.

Caution

Do not close the pull request until the review is complete. Closing it early can lead to unexpected behavior and disrupt the review process.

Footnotes

  1. Rate AI review example

  2. Re-request AI review example

  3. Add AI review comments example

Comment on lines 49 to 56
const formContent = [];

for await (const chunk of req) {
  formContent.push(chunk);
}

const bodyBuffer = Buffer.concat(formContent);
const contentType = req.headers['content-type'] || '';


Lines 49-56: The server accumulates the entire request body into memory (collecting chunks into formContent then Buffer.concat). The task requires using Node.js streams to process the uploaded file by streaming it through a zlib transform into the response. Buffering the whole request prevents streaming and is fragile for large files. Consider using a streaming multipart parser (e.g., busboy) or piping the file part stream directly into zlib.

Comment on lines 62 to 112
if (boundaryMatch) {
  const boundary = `--${boundaryMatch[1]}`;
  const bodyStr = bodyBuffer.toString('binary');

  const parts = bodyStr
    .split(boundary)
    .map((s) => s.trimStart())
    .filter((s) => s && s !== '--');

  let originalFilename;
  let fileBuffer;

  for (const part of parts) {
    const headerEndIdx = part.indexOf('\r\n\r\n');

    if (headerEndIdx === -1) {
      continue;
    }

    const headersStr = part.slice(0, headerEndIdx);
    let contentStr = part.slice(headerEndIdx + 4);

    if (contentStr.endsWith('\r\n')) {
      contentStr = contentStr.slice(0, -2);
    }

    const cdLine =
      headersStr
        .split('\r\n')
        .find((h) => /^content-disposition:/i.test(h)) || '';
    const nameMatch = cdLine.match(/name="?([^";]+)"?/i);
    const filenameMatch = cdLine.match(/filename="?([^";]+)"?/i);

    const partName = nameMatch ? nameMatch[1] : undefined;

    if (filenameMatch) {
      originalFilename = filenameMatch[1];
      fileBuffer = Buffer.from(contentStr, 'binary');
    } else if (partName === 'compressionType') {
      compressionType = contentStr.trim();
    }
  }

  if (originalFilename && fileBuffer) {
    const savePath = path.join(__dirname, originalFilename);

    try {
      fs.writeFileSync(savePath, fileBuffer);
      input = originalFilename;
    } catch (e) {}
  }


Lines 62-112: Multipart parsing is implemented by converting the whole body to a binary string and splitting on the boundary. This approach is fragile for binary data (it treats binary as text) and can corrupt uploads. It also requires buffering the full body. Instead, parse multipart in a streaming way and obtain the file as a readable stream to pipe into the compression transform.

Comment on lines 105 to 111
if (originalFilename && fileBuffer) {
  const savePath = path.join(__dirname, originalFilename);

  try {
    fs.writeFileSync(savePath, fileBuffer);
    input = originalFilename;
  } catch (e) {}


Lines 105-111: The code writes the uploaded file to disk using fs.writeFileSync(savePath, fileBuffer). This is synchronous and blocks the event loop. The task expects streaming the upload into the compressor; avoid synchronous disk writes. If temporary storage is unavoidable, use async APIs and remove blocking calls.

Comment on lines 117 to 118
input = params.get('input');
compressionType = params.get('compressionType');


Lines 117-118: For the URL-encoded fallback you read a field named 'input' (input = params.get('input')). The task requires the file input be named exactly 'file'. This mismatch will cause the server to treat the request as invalid. Update to expect 'file' and ensure the HTML form uses name="file".

Comment on lines 137 to 155
switch (compressionType) {
  case 'gzip':
    zip = zlib.createGzip();
    break;
  case 'deflate':
    zip = zlib.createDeflate();
    break;
  case 'br':
    zip = zlib.createBrotliCompress();
    break;
}

res.statusCode = 200;
res.setHeader('Content-Type', 'application/octet-stream');

res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(filePath)}.${compressionType}`,
);


Lines 137-155: The switch selects the correct zlib stream for compression, which is good. However, the Content-Disposition header appends .${compressionType} to the filename (e.g., '.gzip'). The requirements require specific extensions: gzip -> '.gz', deflate -> '.dfl', br -> '.br'. Update the mapping to use those extensions and quote the filename in the header (e.g., filename="file.txt.gz").

Comment on lines 157 to 163
pipeline(fileStream, zip, res, (err) => {
  if (err) {
    res.statusCode = 500;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Compression error');
  }
});


Lines 157-163: pipeline(fileStream, zip, res, (err) => { ... }) sends a 500 on error. This is acceptable for runtime compression errors. Ensure you do not attempt to write headers after the response has already been sent, and keep the form validation returning 400 for invalid inputs before reaching this pipeline stage.

src/index.html Outdated
<title>Compression App</title>
</head>
<body>
<form action="/compress" method="POST">


Line 9: The <form> element is missing the required encoding attribute for file uploads. Add enctype="multipart/form-data" so the browser sends the file as multipart form data (required for file uploads). Without this the server will not receive the file stream correctly. See task requirements.

src/index.html Outdated
<form action="/compress" method="POST">
  <label>
    Text to compress:
    <input type="file" name="input" required />


Line 12: The file input is named "input" but the task requires the file field be named exactly "file". Change it to <input type="file" name="file" required /> so the server and tests can find the uploaded file.

src/index.html Outdated
<body>
  <form action="/compress" method="POST">
    <label>
      Text to compress:


Line 11: The label text reads "Text to compress:" while the control is a file input. Consider changing the label to "File to compress:" for clarity (this is a usability suggestion, not required for tests).

src/index.html Outdated
Comment on lines 16 to 20
<select name="compressionType" id="compressionType">
  <option value="gzip">gzip</option>
  <option value="deflate">deflate</option>
  <option value="br">br</option>
</select>


Lines 16-20: The select element is correctly named "compressionType" and includes the required options (gzip, deflate, br). This matches the task requirements — keep this as-is.


@mateacademy-ai-mentor mateacademy-ai-mentor left a comment


Review Summary

Decision: REQUEST_CHANGES — there are a few critical blockers that will cause incorrect behavior or failing tests.

Why:

  • The returned filename extension is incorrect in both the multipart and urlencoded branches, violating the requirement to map gzip → .gz, deflate → .dfl, br → .br. This will break expected output and tests.
  • The Content-Disposition filename is not quoted, which can break downloads for filenames with spaces/special characters and is part of the expected behavior.
  • Stream error handling is incomplete: if a pipeline error occurs after headers are sent, the response may hang or be truncated without closing the connection.

Critical issues to fix (with specifics and how):

  1. Correct extension mapping (createServer.js):

    • Location: lines where Content-Disposition is set in both multipart and urlencoded branches (currently using .${compressionType}).
    • Why: Spec requires exact extensions .gz, .dfl, .br.
    • How: Add an explicit map and use it consistently:
      • const extMap = { gzip: '.gz', deflate: '.dfl', br: '.br' };
      • const ext = extMap[compressionType];
      • Set header with: attachment; filename="${path.basename(filename)}${ext}".
  2. Quote the filename in Content-Disposition (createServer.js):

    • Location: same header lines in both branches.
    • Why: Unquoted filenames with spaces/special chars can break parsing; tests often assert quoted filenames.
    • How: Use attachment; filename="${path.basename(filename)}${ext}" (ensure quoting in both branches).
  3. Robust stream error handling (createServer.js):

    • Location: pipeline callbacks in both multipart and urlencoded branches.
    • Why: If an error occurs after headers are sent, the client can hang or receive a partial file without the connection closing.
    • How: In the pipeline(..., (err) => { ... }) callback, when err exists:
      • If res.headersSent, call res.destroy(err) to close the connection; also consider transform.destroy(err) to propagate shutdown.
      • Optionally add 'error' listeners on the file/readable and transform streams to log and destroy res.

Important security/behavior fix (recommended):
4. Remove or harden the application/x-www-form-urlencoded fallback (createServer.js):

  • Location: urlencoded branch reading file via fs.createReadStream(path.join(__dirname, fileField)).
  • Why: This treats a client field as a server-side path, which is unsafe and not required by the task; tests expect multipart uploads.
  • How: Remove this path-reading behavior, or strictly validate against a known-safe allowlist. Prefer supporting only multipart file uploads as specified.

Minor improvement (non-blocking):

  • Instantiate Busboy with new Busboy({ headers: req.headers }) for compatibility across versions.

You’ve done solid work on streaming and zlib usage—once you fix the filename mapping/quoting and tighten error handling (and remove the urlencoded file-path fallback), the implementation should fully meet the requirements. Keep going—you’re very close!


✨ Please rate this review in the Platform's notification center [1]. I will be grateful for your feedback! ✨

Important

Fix the issues, commit the changes, and then re-request my review using the Re-request review button [2] in the Reviewers tab.

Tip

If you have any comments about this review, please leave a message in the Add a comment section below and mention me with the @mateacademy-ai-mentor tag [3].
I will read your message during my next review of your pull request. Please do not reply directly to my comments, as that feature is not supported yet.

Caution

Do not close the pull request until the review is complete. Closing it early can lead to unexpected behavior and disrupt the review process.

Footnotes

  1. Rate AI review example

  2. Re-request AI review example

  3. Add AI review comments example

Comment on lines 86 to 89
res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(filename)}.${compressionType}`,
);


Incorrect output filename extension mapping. The code composes the filename as filename.${compressionType} (for example file.txt.gzip) in the multipart branch. The task requires explicit mapping: gzip -> .gz, deflate -> .dfl, br -> .br. Compute an ext value from compressionType and use that when setting Content-Disposition. Example: const ext = compressionType === 'gzip' ? '.gz' : compressionType === 'deflate' ? '.dfl' : '.br'; Then set attachment; filename="${path.basename(filename)}${ext}".

Comment on lines 171 to 174
res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(fileField)}.${compressionType}`,
);


Same filename extension issue in the urlencoded branch: the header uses .${compressionType}. Use the same explicit extension mapping and quoted filename here as in the multipart branch so the returned filename matches required .gz, .dfl, .br.

Comment on lines 86 to 89
res.setHeader(
  'Content-Disposition',
  `attachment; filename=${path.basename(filename)}.${compressionType}`,
);


Content-Disposition value is not quoted. If a filename contains spaces or special characters, the header can break. Wrap the filename in quotes: attachment; filename="${path.basename(name)}${ext}". Apply this change in both multipart and urlencoded branches.

Comment on lines 93 to 99
pipeline(pass, transform, res, (err) => {
  if (err && !res.headersSent) {
    res.statusCode = 500;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Compression error');
  }
});


Pipeline error handling improvement. The current pipeline callback sets a 500 only if headers have not been sent. If an error happens after headers were already sent, the response may be left open or truncated. In the pipeline callback or via 'error' listeners on streams, ensure you res.destroy(err) or transform.destroy(err) to close the connection and avoid hanging clients. Consider also logging the error for debugging.

Comment on lines 176 to 182
pipeline(read, transform, res, (err) => {
  if (err && !res.headersSent) {
    res.statusCode = 500;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Compression error');
  }
});


Same pipeline error handling issue in the urlencoded branch — ensure you destroy/end the response on stream errors so the client doesn’t hang or receive a partial file. Use res.destroy(err) or similar cleanup in the pipeline callback when err exists.

let filename = null;
let started = false;

const busboy = Busboy({ headers: req.headers });


Busboy instantiation style: you use const busboy = Busboy({ headers: req.headers });. It is more typical and safer across versions to call new Busboy({ headers: req.headers }). Consider changing to the new form to match common examples.

Comment on lines 158 to 160

const filePath = path.join(__dirname, fileField);
const read = fs.createReadStream(filePath);


The urlencoded fallback treats the posted file field as a server-side file path and does fs.createReadStream(path.join(__dirname, fileField)). This is unnecessary for the upload flow required by the task and could expose server files. Recommend removing this fallback or validating it strictly. Tests generally expect multipart uploads; relying on a posted filename that maps to a disk file is unexpected.

src/index.js Outdated
console.log('Server started! 🚀');
console.log('Available at http://localhost:5700');
});
const server = createServer();


The createServer() call is correct and required — it constructs the server instance returned to index.js. No change needed here unless you want to pass configuration or perform additional setup before listening.

src/index.js Outdated
});
const server = createServer();

server.listen(PORT);


Optional: add a callback or logging when calling server.listen(PORT) to confirm the server started, for example server.listen(PORT, () => console.log(`Listening on ${PORT}`)). This is useful during development but not required by the task.


@mateacademy-ai-mentor mateacademy-ai-mentor left a comment


Review Summary

Educational approval: You’ve got the core flow working — nice job using Node streams, zlib transforms, and multiparty to stream compression without buffering the whole file. That’s a solid foundation.

Critical issues to address next:

  • src/createServer.js (SUPPORTED_TYPES): The extensions don’t match the spec. Requirement says gzip → .gz, deflate → .dfl, br → .br. Using .gzip/.deflate will fail filename checks; update the map accordingly.
  • src/createServer.js (GET / and /index.html): You set Content-Type: text/html but return an empty body. The requirement expects an HTML upload form. Stream public/index.html or let your static file handler serve it by removing the special-case.
  • src/createServer.js (POST /compress header validation): Explicitly check req.headers['content-type'] starts with multipart/form-data; otherwise return 400. This makes behavior predictable per the checklist.
  • src/createServer.js (originalFilename validation): Before using path.basename(file.originalFilename), verify file.originalFilename is present; return 400 if missing to satisfy the checklist.
  • src/createServer.js (Content-Disposition): Quote the filename and use the corrected extension: attachment; filename="${compressedFilename}" to handle spaces/special characters and match tests.
  • src/createServer.js (pipeline error handling): Use 500 for unexpected compression errors, ensure temp file cleanup always runs, and if headers are already sent on error use res.destroy(err) to avoid hanging connections. Also add 'error' listeners on source and compress, and rename erro to err for clarity.

Minor polish: consistent status codes/messages and optional multiparty instantiation with { headers: req.headers }. You’re very close — tighten these details and you should pass the tests. Keep going; the streaming/compression pieces are well-done!


✨ Please rate this review in the Platform's notification center [1]. I will be grateful for your feedback! ✨

Footnotes

  1. Rate AI review example

Comment on lines +10 to +14
const SUPPORTED_TYPES = {
  gzip: { ext: '.gzip', method: zlib.createGzip },
  deflate: { ext: '.deflate', method: zlib.createDeflate },
  br: { ext: '.br', method: zlib.createBrotliCompress },
};


Incorrect extension strings — must match requirements

SUPPORTED_TYPES uses .gzip and .deflate but the task requires exact extensions .gz and .dfl (and .br for brotli). This will produce incorrect response filenames and fail tests. Update the mapping to exactly:

gzip: { ext: '.gz', method: zlib.createGzip },
deflate: { ext: '.dfl', method: zlib.createDeflate },
br: { ext: '.br', method: zlib.createBrotliCompress }

(Requirement: compression mapping: gzip → .gz, deflate → .dfl, br → .br.)

Comment on lines +22 to +31
if (
  req.method === 'GET' &&
  (url.pathname === '/' || url.pathname === '/index.html')
) {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/html');

  res.end();

  return;


GET / and /index.html returns an empty page — serve the HTML form

You set Content-Type: text/html and immediately res.end() with no body. The task requires an HTML form for uploading. Either stream public/index.html here or remove this special-case so the static-serving code later serves index.html. As-is, GET / returns an empty body which does not meet the requirement.

Comment on lines +42 to +44
const form = new multiparty.Form();

form.parse(req, (err, fields, files) => {


No explicit Content-Type validation for multipart/form-data

The requirements mandate returning 400 when POST /compress is not sent as multipart/form-data. Currently you call form.parse(req, ...) and rely on multiparty to fail. Add an explicit check on req.headers['content-type'] (e.g., startsWith 'multipart/form-data') and return 400 early if it doesn't match, so behavior is explicit and predictable.
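A minimal sketch of the explicit guard (isMultipartRequest is an illustrative name):

```javascript
// Reject early unless the request really is multipart/form-data,
// instead of relying on the parser to fail later.
function isMultipartRequest(req) {
  const contentType = req.headers['content-type'] || '';

  return contentType.startsWith('multipart/form-data');
}

// Usage in the handler, before form.parse:
//   if (!isMultipartRequest(req)) {
//     res.statusCode = 400;
//     return res.end('Expected multipart/form-data');
//   }
```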

Comment on lines +71 to +72
const originalFilename = path.basename(file.originalFilename);
const compressedFilename = originalFilename + compression.ext;


Missing validation of originalFilename before using it

You call path.basename(file.originalFilename) without checking that file.originalFilename exists. The checklist requires treating missing original filename as invalid form and responding with 400. Add an explicit check and return 400 if file.originalFilename is falsy before constructing compressedFilename.

Comment on lines +74 to +76
res.writeHead(200, {
  'Content-Type': 'application/octet-stream',
  'Content-Disposition': `attachment; filename=${compressedFilename}`,


Content-Disposition filename must be quoted and use correct extension

The header is set as attachment; filename=${compressedFilename} (unquoted). Filenames containing spaces or special characters require quoting and tests commonly expect a quoted filename. Also ensure compressedFilename uses the corrected extension from SUPPORTED_TYPES. Example:

'Content-Disposition': `attachment; filename="${compressedFilename}"`

Apply this change when you correct the extension mapping.

Comment on lines +82 to +89
pipeline(source, compress, res, (erro) => {
  fs.unlink(file.path, () => {});

  if (erro) {
    res.writeHead(400);
    res.end('Compression error');
  }
});


Pipeline error handling is fragile — close response when headers already sent; use 500 for server errors

In the pipeline callback you attempt to send a 400 and res.end on error. If headers were already sent this will fail and may leave the client hanging. Recommendations:

  • Use if (res.headersSent) { return res.destroy(err); } to close the connection when streaming has started.
  • Use HTTP 500 for unexpected compression/server errors rather than 400.
  • Ensure the temporary uploaded file is unlinked in all cases (success and error).
  • Add 'error' listeners on source and compress to destroy response and streams on error.
  • Rename erro to err for clarity.

Example pattern:

pipeline(source, compress, res, (err) => {
  fs.unlink(file.path, () => {});
  if (err) {
    if (res.headersSent) {
      res.destroy(err);
    } else {
      res.writeHead(500);
      res.end('Compression error');
    }
  }
});

This ensures the connection does not hang and errors are signaled with appropriate status codes.
