@tnull tnull commented Jan 12, 2026

Asked Claude to fix a few minor issues that might allow a malicious counterparty to send us unbounded data.

(cc @TheBlueMatt)

tnull added 2 commits January 12, 2026 12:44
Fix multiple issues in the sync proxy response reading loop:

1. Use extend_from_slice(&buf[..n]) instead of append(&mut buf) to only
   append the bytes that were actually read, not the entire buffer
2. Check for EOF (n == 0) to prevent infinite loops when a proxy
   doesn't close the connection
3. Add MAX_PROXY_RESPONSE_SIZE (16MB) limit to prevent unbounded memory
   allocation from a malicious/misbehaving proxy
4. Use a stack-allocated buffer instead of allocating a new Vec each
   iteration

Co-Authored-By: Claude AI
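The four fixes above can be sketched as a single bounded read loop. This is an illustrative stand-in, not the crate's actual code: the `MAX_PROXY_RESPONSE_SIZE` name follows the commit message, while the error type, buffer size, and omitted end-of-headers detection are assumptions.

```rust
use std::io::Read;

// Cap from the commit message above (16 MiB).
const MAX_PROXY_RESPONSE_SIZE: usize = 16 * 1024 * 1024;

/// Read from `stream` into `response`, stopping on EOF or when the cap
/// would be exceeded. `&'static str` stands in for the crate's error type.
fn read_proxy_response<R: Read>(
    stream: &mut R,
    response: &mut Vec<u8>,
) -> Result<(), &'static str> {
    // Stack-allocated scratch buffer, reused every iteration (fix 4).
    let mut buf = [0u8; 4096];
    loop {
        let n = stream.read(&mut buf).map_err(|_| "io error")?;
        if n == 0 {
            // EOF (fix 2): the peer closed the connection, stop looping.
            return Ok(());
        }
        if response.len() + n > MAX_PROXY_RESPONSE_SIZE {
            // Size cap (fix 3): refuse unbounded memory growth.
            return Err("proxy response too large");
        }
        // Append only the bytes actually read (fix 1), not the whole buffer.
        response.extend_from_slice(&buf[..n]);
        // (The real loop would also stop once the end of the proxy's
        // response headers is seen.)
    }
}
```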
Apply the same fixes as the sync version to the async proxy response
reading loop:

1. Check for EOF (n == 0) to prevent infinite loops
2. Add MAX_PROXY_RESPONSE_SIZE (16MB) limit to prevent unbounded memory
   allocation from a malicious/misbehaving proxy
3. Use a stack-allocated buffer instead of allocating a new Vec each
   iteration

Co-Authored-By: Claude AI
@tnull tnull force-pushed the 2026-01-fix-unlimited-reads branch from 6b932d3 to 73a1fba on January 12, 2026 11:44
@tnull tnull changed the title from "Fix unbounded proxy / body reads" to "[bitreq] Fix unbounded proxy / body reads" Jan 12, 2026
@TheBlueMatt TheBlueMatt left a comment

thanks

Add a new `with_max_body_size` option to `Request` that limits the
maximum response body size. This prevents memory exhaustion attacks
where a malicious server returns an infinite or very large body.

The default is None (unlimited) for backwards compatibility, but users
connecting to untrusted servers should set a limit.

Co-Authored-By: Claude AI
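The semantics described above (an `Option` where `None` means unlimited, erroring with `BodyOverflow` once the limit is exceeded) can be sketched as a small check. The function name is hypothetical, not the crate's actual API.

```rust
/// Illustrative enforcement of an optional body-size cap, with `None`
/// meaning unlimited for backwards compatibility as described above.
fn check_body_size(
    bytes_read: usize,
    max_body_size: Option<usize>,
) -> Result<(), &'static str> {
    match max_body_size {
        // A configured limit has been exceeded: abort the request.
        Some(max) if bytes_read > max => Err("BodyOverflow"),
        // No limit configured, or still under the limit: keep reading.
        _ => Ok(()),
    }
}
```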
@tnull tnull force-pushed the 2026-01-fix-unlimited-reads branch from 73a1fba to 4c45571 on January 12, 2026 12:58
@tnull tnull requested a review from TheBlueMatt January 12, 2026 12:59
/// accept.
///
/// If this limit is passed, the request will close the connection
/// and return an [Error::BodyOverflow] error.
Member

Oops, worth noting this does not apply to lazily-loaded requests (or maybe we should make it?)

@tnull tnull Jan 15, 2026

Addressed by fe9d7ad

@tnull tnull force-pushed the 2026-01-fix-unlimited-reads branch 2 times, most recently from e26c27d to e8b1fa9 on January 15, 2026 12:19
tnull added 3 commits January 15, 2026 13:22
The max_body_size limit was previously only enforced when using
send() or send_async() which fully load the response body. This
change extends the limit to also apply when using send_lazy(),
which returns a ResponseLazy that streams the body byte-by-byte.

- Add max_body_size and bytes_read fields to ResponseLazy
- Pass max_body_size from Connection::send to ResponseLazy::from_stream
- Check body size limit in Iterator::next before returning each byte
- Return Error::BodyOverflow when limit is exceeded

The Read implementation automatically benefits from this since it
uses the Iterator implementation internally.

Co-Authored-By: HAL 9000
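The byte-by-byte check described above can be sketched as an iterator wrapper. The `bytes_read` and `max_body_size` field names follow the commit message; the struct itself and the error type are illustrative stand-ins for `ResponseLazy` and `Error::BodyOverflow`.

```rust
/// Illustrative stand-in for `ResponseLazy`: wraps a byte source and
/// counts bytes against an optional cap, erroring once it is exceeded.
struct LazyBody<I: Iterator<Item = u8>> {
    inner: I,
    bytes_read: usize,
    max_body_size: Option<usize>,
}

impl<I: Iterator<Item = u8>> Iterator for LazyBody<I> {
    type Item = Result<u8, &'static str>;

    fn next(&mut self) -> Option<Self::Item> {
        let byte = self.inner.next()?;
        self.bytes_read += 1;
        if let Some(max) = self.max_body_size {
            if self.bytes_read > max {
                // Limit exceeded: surface an overflow error, not the byte.
                return Some(Err("BodyOverflow"));
            }
        }
        Some(Ok(byte))
    }
}
```

Since the `Read` implementation is layered on the `Iterator` implementation, the one check covers both interfaces, as the commit message notes.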
Use saturating_add instead of += when accumulating content_length in
chunked transfer encoding. This prevents integer overflow on 32-bit
systems where a malicious server could send chunk sizes that cause
the accumulated length to wrap around.

Co-Authored-By: Claude AI
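The difference can be demonstrated with `u32` standing in for a 32-bit `usize` (the function name is illustrative):

```rust
/// Accumulate chunk sizes without wrapping, mirroring the fix above
/// (`u32` models `usize` on a 32-bit target).
fn total_chunked_length(chunk_sizes: &[u32]) -> u32 {
    let mut content_length: u32 = 0;
    for &size in chunk_sizes {
        // `+=` would panic in debug builds and silently wrap in release
        // builds; `saturating_add` pins the sum at `u32::MAX` instead.
        content_length = content_length.saturating_add(size);
    }
    content_length
}
```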
@tnull tnull force-pushed the 2026-01-fix-unlimited-reads branch from e8b1fa9 to 103ed75 on January 15, 2026 12:22
@tnull tnull moved this to Goal: Merge in Weekly Goals Jan 15, 2026
@TheBlueMatt TheBlueMatt requested a review from tcharding January 15, 2026 18:20

tcharding commented Jan 15, 2026

Hey @TheBlueMatt, I use the Bitcoin Core merge script (`github-merge.py`; I can find a link if you need it) to merge PRs in this repo. Can you post an `ACK <commit-hash>` message so it gets included in the merge patch, please? Unless you explicitly didn't want to, which is no problem since this repo basically uses the merge policy ack-merge-yolo.

EDIT: Ah I'm just going to merge so you can rebase #459

@tcharding tcharding left a comment

ACK 103ed75

@tcharding tcharding merged commit 904ed16 into rust-bitcoin:master Jan 15, 2026
30 checks passed
@github-project-automation github-project-automation bot moved this from Goal: Merge to Done in Weekly Goals Jan 15, 2026
@TheBlueMatt

Will do in the future!

@tcharding

Thanks man

tcharding added a commit that referenced this pull request Jan 16, 2026
b9db614 [bitreq] Add a changelog entry for 0.3 (Matt Corallo)
51ffafd [bitreq] Bump crate version to 0.3 (Matt Corallo)
bc4f41a [bitreq] Rename `Proxy::new` to `Proxy::new_http` (Matt Corallo)
99acf70 [bitreq] Drop mention port port 1080 which is for SOCKS, not HTTP (Matt Corallo)
d69be71 [bitreq] Make `Error` `#[non_exhaustive]` (Matt Corallo)

Pull request description:

  Based on #452 I think it's reasonable to release a bitreq 0.3. There are some issues remaining in the proxy code, described in #458, but those should be addressable in a point release without API breakage.

ACKs for top commit:
  tcharding:
    ACK b9db614

Tree-SHA512: 36d4660f2f37104e9e9fa2ded578b9385b23eee966ecf59de630dfb05eddeb4f1421e656bd410666eabe5708b45771ba1aa3ee8d04cc83c142b017c0ce048208