BIP-352: introduce per-group recipient limit K_max (=1000)#2106

Open
theStack wants to merge 3 commits intobitcoin:masterfrom
theStack:bip352_limit_max-k-PR

Conversation

@theStack
Contributor

As previously announced in the libsecp256k1 silentpayments module discussion and on the mailing list ~2 weeks ago, this PR introduces a K_max limit to BIP-352 to mitigate worst-case scanning time for adversarial transactions. So far no objections have been raised against that protocol change, which is backwards-incompatible in theory.

The Python reference implementation is adapted and test vectors for the following two scenarios are added:

  • Sending fails if any recipient group (i.e. recipients that share the same scan public key) exceeds the limit
  • Scanning only detects outputs with k values up to K_max (exclusive, i.e. from k=0..999), even if more would be available
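The two scenarios above can be sketched in Python. This is a simplified illustration, not the actual reference code; the function and variable names (`group_recipients`, `check_k_max`, the `(B_scan, B_m)` tuple representation) are hypothetical:

```python
from collections import defaultdict

K_MAX = 1000  # per-group recipient limit proposed in this PR

def group_recipients(addresses):
    """Group silent payment addresses by their scan public key B_scan.

    `addresses` is a list of (B_scan, B_m) pairs; in the real reference
    code these are decoded from bech32m-encoded silent payment addresses.
    """
    groups = defaultdict(list)
    for b_scan, b_m in addresses:
        groups[b_scan].append(b_m)
    return groups

def check_k_max(addresses):
    """Fail (raise) if any recipient group exceeds K_MAX addresses."""
    for b_scan, b_ms in group_recipients(addresses).items():
        if len(b_ms) > K_MAX:
            raise ValueError(
                f"recipient group for scan key {b_scan!r} has "
                f"{len(b_ms)} addresses, exceeding K_max={K_MAX}"
            )
```

A group of exactly `K_MAX` addresses still passes; only the 1001st address in a group triggers the failure, matching the first test vector scenario.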

Can be tested by running $ ./bip-0352/reference.py ./bip-0352/send_and_receive_test_vectors.json. The test vectors pass the tests in the libsecp256k1 module as well, see commit theStack/secp256k1@7ae84f8. [1]

Note that the value K_max=1000 is somewhat arbitrary and can still be debated; it's simply one that came up relatively early in the discussion and is believed to be fine even for very unusual scenarios like "exchange sends to another exchange" (see bitcoin-core/secp256k1#1799 (comment)). An alternative number that was brought up later was K_max=2324, as this is the theoretical maximum for standard-sized (i.e. <= 100_000 vB) transactions (bitcoin-core/secp256k1#1799 (comment)). Personally I think even a lower limit than K_max=1000 would be fine, like something in the "lower hundreds" range, but with 1000 we are still comfortably in the "seconds" range for the worst case on a reasonably modern machine. See the benchmark results in bitcoin-core/secp256k1#1765 (comment) for more details.

Another suggested BIP change is introducing a warning about wallet interoperability w.r.t. the number of labels created. As I see this as an orthogonal change, I will open a separate PR for this. One slightly related PR is #2087, which would standardize the EC parts of the reference code and make it look a bit nicer. It's not strictly necessary, but I'm happy to rebase this PR if #2087 gets in first.

[1] I haven't included this commit in the secp PR yet as it's unclear to me if we want to bloat the repository with >120k new LOC and several mega-bytes of additional test data, considering a test case for this already exists. Something to be discussed further in bitcoin-core/secp256k1#1765.

@murchandamus
Member

murchandamus commented Feb 19, 2026

I guess I misunderstood so far what kind of limit you were looking to impose. I thought that Silent Payment transactions would generally be limited to 1000 outputs.

If a malicious party were to send 1000 outputs to two recipients, they’d incur a 2×1000² validation time on two recipients, or a 2²×1000² total validation time. If they make an oversized transaction and send 1000 outputs each to 23 recipients, each of the 23 recipients would have 23×1000² total validation time. So the total validation time is still quadratic, but it is distributed across multiple recipients, and it gets worse for each recipient the more recipients there are. Wouldn’t it be better to just limit silent payment transactions to 1000 outputs? It’s not like that would make Silent Payment transactions stick out like a sore thumb. ;)
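A quick back-of-the-envelope check of these numbers, assuming per-recipient scanning cost scales as (total tx outputs) × (k iterations, capped at K_max):

```python
# Assumption for this sketch: per-recipient scanning cost is
# (number of tx outputs) * (number of k iterations), with k capped
# at K_max = 1000 per recipient group.
K_MAX = 1000

def per_recipient_cost(total_outputs, k_iterations):
    return total_outputs * min(k_iterations, K_MAX)

# Two recipients, 1000 outputs each (2000 outputs total):
cost_two = per_recipient_cost(2000, 1000)   # = 2 * 1000**2 per recipient
total_two = 2 * cost_two                    # = 2**2 * 1000**2 overall

# 23 recipients, 1000 outputs each (oversized tx, 23000 outputs total):
cost_23 = per_recipient_cost(23_000, 1000)  # = 23 * 1000**2 per recipient
total_23 = 23 * cost_23                     # = 23**2 * 1000**2 overall
```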

@theStack
Contributor Author

theStack commented Feb 20, 2026

> I guess I misunderstood so far what kind of limit you were looking to impose. I thought that Silent Payment transactions would generally be limited to 1000 outputs.

Limiting the total number of transaction outputs for Silent Payments eligibility might be a possible alternative, but it seems to be more restrictive in terms of composability. There is currently no rule that in a Silent Payments transaction all (taproot) outputs have to be Silent Payments related, users are free to combine it with an arbitrary number of non-Silent-Payments outputs. Considering this, can we be sure that a total tx output limit would never cause unintended problems in any wallet flow? It could e.g. happen that a user first creates exactly N_max [1] outputs with a Silent Payments library, and then some collaborative protocol adds another unrelated taproot output on top, without necessarily being aware that the already existing outputs even have a Silent Payments context (they just look like random taproot outputs). In that scenario the Silent Payments recipients wouldn't get paid, as the resulting transaction isn't considered Silent Payments eligible, due to having N_max+1 outputs and thus exceeding the limit. (Slightly related: #2055 (comment)).

That said, I'm not too familiar with concrete collaboration protocols and flows, so the scenario I painted might also be a bit far-fetched, especially since it must be a single entity having control over all the inputs' private keys (as otherwise it couldn't have created the Silent Payments outputs in the first place). When thinking about simpler solutions, one obvious idea would be to limit the size of Silent Payments eligible transactions to 100 kvB. No matter what protocols are in use, it seems at least very unlikely to me that any of them would in the end not respect the current policy limit and create a tx that would likely not be relayed and need out-of-band miner communication. On the other hand, no policy limit is set in stone, so in some sense even that would potentially restrict flexibility for the future. 🤷‍♂️
I remember having discussed this with @RubenSomsen a while ago, who likely has more convincing arguments for K_max.

[1] using the N_max notion for your "limit total outputs" proposal to differentiate it from the K_max one

@murchandamus added the labels "Proposed BIP modification" and "Pending acceptance" (this BIP modification requires sign-off by the champion of the BIP being modified) on Feb 20, 2026
@murchandamus
Member

cc: @josibake, @RubenSomsen

@Sosthene00

Hi everyone, sorry I just caught up on this.

First, I concept ACK a limit on k.

But as I said when we first talked about this on a call a couple of months ago, I consider this a rather theoretical problem, since afaiu the incentives for the supposed attacker just aren't there at all, i.e. the attacker would spend a lot of money just to slow down transaction scanning for the victim. Even worse, they're actually paying the victim a significant amount. I wish I could be attacked like this more often :)

More seriously, I agree on a reasonable restriction to prevent worst cases. Limiting k to 1000 seems reasonable to me: that's a number of outputs for the same recipient we're very unlikely to ever hit in practice, and it gives us a clear upper bound on how long scanning can take. Unlike other proposals, it doesn't interfere with policy; it's purely a cap on a Silent Payments parameter, and it can easily be bypassed/reversed later if needed, even if I doubt we'll ever need to.

* Let ''input_hash = hash<sub>BIP0352/Inputs</sub>(outpoint<sub>L</sub> || A)'', where ''outpoint<sub>L</sub>'' is the smallest ''outpoint'' lexicographically used in the transaction<ref name="why_smallest_outpoint"></ref> and ''A = a·G''
** If ''input_hash'' is not a valid scalar, i.e., if ''input_hash = 0'' or ''input_hash'' is larger or equal to the secp256k1 group order, fail
* Group receiver silent payment addresses by ''B<sub>scan</sub>'' (e.g. each group consists of one ''B<sub>scan</sub>'' and one or more ''B<sub>m</sub>'')
* If any of the groups exceed the limit of ''K<sub>max</sub>'' (=1000) silent payment addresses, fail.<ref name="why_limit_k">'''Why is the size of groups (i.e. silent payment addresses sharing the same scan public key) limited by ''K<sub>max</sub>''?''' An adversary could construct a large full-block-sized transaction consisting of N=23255 outputs (that's the theoretical maximum under current consensus rules, w.r.t. the block weight limit) that all target the same entity, consisting of one large group. Without a limit on the group size, scanning such a transaction with the algorithm described in this document would have a complexity of ''O(N<sup>2</sup>)'' for that entity, taking several minutes on modern systems. By limiting the number of "k" iterations in the inner loop, the complexity is reduced to ''O(N·K<sub>max</sub>)'', cutting down the scanning cost to the order of tens of seconds instead.</ref>
Contributor

Consider:
...By limiting the group size to 'Kmax', the number of iterations of the inner loop is reduced to 'Kmax', and the complexity is reduced to...

Contributor Author


Thanks, restructured that part based on your suggestion.
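For context, the capped inner loop the footnote describes could be sketched as follows. This is a simplified illustration, not the reference implementation; `compute_candidate(k)` is a hypothetical stand-in for deriving the candidate output key P_k = B_spend + hash(shared_secret || k)·G:

```python
K_MAX = 1000  # the proposed per-group limit

def scan_outputs(outputs, compute_candidate):
    """Sketch of the BIP-352 scanning inner loop with the K_max cap.

    `outputs` is the set of taproot output keys in the transaction.
    Without the cap, the loop runs until no match is found, which an
    adversarial transaction can stretch to O(N^2); with the cap, at
    most K_MAX candidates are derived per recipient group.
    """
    found = []
    k = 0
    while k < K_MAX:  # the new rule: never iterate past K_max
        candidate = compute_candidate(k)
        if candidate in outputs:
            found.append(candidate)
            k += 1     # a match means an output for k+1 may also exist
        else:
            break      # no match at this k: scanning stops
    return found
```

With 1001 matching outputs present, this loop finds exactly 1000 of them, which is the behavior exercised by the second test vector scenario.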

In theory this is a backwards incompatible protocol change.
Practically, no existing Silent Payments wallet out there supports
sending to such a high quantity of recipients (not even in terms of
_total_ number of recipients), so the K_max limit should be safe to
introduce without any negative effects in the wallet ecosystem.
Test case: as the (only) recipient group contains 1001 addresses and
thus exceeds the K_max limit by one, sending fails.

Can be tested by
`$ ./bip-0352/reference.py ./bip-0352/send_and_receive_test_vectors.json`
Test case: even though there are 1001 outputs targeted to the recipient,
only 1000 are found due to the introduced K_max limit. Any
implementation following the new BIP protocol rule wouldn't create such
a transaction in the first place, but an attacker might.

Can be tested by
`$ ./bip-0352/reference.py ./bip-0352/send_and_receive_test_vectors.json`
@theStack force-pushed the bip352_limit_max-k-PR branch from 21a297e to d1f061b on February 24, 2026 14:11
@theStack
Contributor Author

@Sosthene00: Thanks for weighing in! I agree that it's a very unusual attack with relatively weak incentives, probably even calling it "attack" in the first place seems a bit exaggerated. Note though that the recipient doesn't necessarily get paid, as there is currently nothing in the protocol mandating a minimum output amount. An attacker could include e.g. one large non-Silent-Payments output as change to themselves, and all the other Silent-Payments outputs that match for the recipient have 0 sats (being below the dust limit, that's non-standard, but still consensus valid). Silent Payments wallets would still want to scan such a transaction, considering that the single high-amount output could be a match.

@RubenSomsen
Contributor

@murchandamus raises a good point that ~23 users can be targeted simultaneously every 10 minutes, rather than one (or e.g., at k=500, ~46). This mainly makes it cheaper for an attacker because it only requires 1/23rd of a full block worth of fees per victim.

That said, I don't think this is a major concern. Limiting k is mainly making it computationally feasible for an individual user to deal with being targeted regardless of the whims of the block space market. Furthermore, it remains the case that for a targeted attack to occur (regardless whether 1 or 23 people are targeted), it requires outbidding everyone else for the entire block.

If we're looking at the effect on the total set of users, during a targeted attack the combined scanning burden actually goes down. A block with a single targeted worst-case transaction is ~5x cheaper to scan for non-targeted people (expensive EC mults are only done per TX, so fewer TXs = faster). The most effective way to maximize the combined scanning burden is not with a targeted attack, but by filling the block with 1-input 1-output taproot transactions.

The reasons why we'd prefer limiting K have already been mentioned, but my generalized take is that we want to minimize the number of constraints we put on SP transactions to ensure no incompatibilities occur with other protocols (including future ones).

And for those unaware, @theStack was appointed co-author of BIP352, so his voice carries no less weight than my own.

@murchandamus
Member

Understood, thanks for the reminder. I opened a PR to add theStack to BIP352 in #2107.
Regarding this change, please let me know whenever it’s ready to merge, @theStack.

@Sosthene00

> @Sosthene00: Thanks for weighing in! I agree that it's a very unusual attack with relatively weak incentives, probably even calling it "attack" in the first place seems a bit exaggerated. Note though that the recipient doesn't necessarily get paid, as there is currently nothing in the protocol mandating a minimum output amount. An attacker could include e.g. one large non-Silent-Payments output as change to themselves, and all the other Silent-Payments outputs that match for the recipient have 0 sats (being below the dust limit, that's non-standard, but still consensus valid). Silent Payments wallets would still want to scan such a transaction, considering that the single high-amount output could be a match.

OK, the 0-sats output attack is interesting, but I don't think it changes the dynamic here.

First, as you mentioned, that makes the transaction non-standard; it raises the bar, but I agree it's not enough of a deterrent nowadays.

Second, wallets and scanners like blindbit would be very likely to ignore such outputs. For example, with Dana we're still experimenting: we used to ignore all outputs barely above the dust limit and even raised the bar to 1000 sats for a time, but 0-sats outputs will definitely be ignored.

Actually, it makes me think of a problem with that which we haven't talked about yet, as far as I know at least: if a transaction contains more than one output for a recipient, and the output derived with k == 0 is below the dust value that the wallet and/or blindbit chooses to ignore, but, say, the output at k == 1 is above it, I'm not sure that the latter will be caught. I'd rather check that.

Probably not an attack, but it could definitely happen by mistake if wallets are not aware of it, resulting in a failure to correctly identify our outputs.
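The failure mode described here can be illustrated with a toy model of the scanning loop (hypothetical names, purely for illustration; it assumes k only advances after a match is found, as in the BIP scanning algorithm):

```python
K_MAX = 1000

def scan(candidate_outputs, derive):
    """Toy model of BIP-352 scanning: k only advances after a match,
    so removing an output *before* scanning breaks the k chain."""
    found = []
    k = 0
    while k < K_MAX and derive(k) in candidate_outputs:
        found.append(derive(k))
        k += 1
    return found

# Transaction pays the recipient at k=0 (0 sats, dust) and k=1 (real amount).
derive = lambda k: f"P{k}"           # hypothetical output key derivation
tx_outputs = {"P0": 0, "P1": 50_000}  # output key -> amount in sats

# Scanning all outputs finds both:
assert scan(set(tx_outputs), derive) == ["P0", "P1"]

# A wallet that drops dust outputs before scanning never matches k=0,
# so it also misses the perfectly good output at k=1:
non_dust = {o for o, amount in tx_outputs.items() if amount > 0}
assert scan(non_dust, derive) == []
```

In this model, dust filtering is only safe if applied after scanning (i.e. on the found outputs), not on the candidate set fed into the scanner.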

@Eunovo
Contributor

Eunovo commented Feb 25, 2026

> Second wallets and scanners like blindbit would be very likely to ignore such outputs. For example with Dana we're still experimenting, we used to ignore all outputs barely above dust limit and even raised the bar to 1000 sats for a time, but 0 sats outputs will definitely be ignored.

@Sosthene00: @RubenSomsen and I discussed this idea previously, and it comes with some drawbacks. See bitcoin-core/secp256k1#1799 (comment)

@real-or-random
Contributor

> An alternative number that was brought up later was K_max=2324, as this is the theoretical maximum for standard-sized (i.e. <= 100_000 vB) transactions (bitcoin-core/secp256k1#1799 (comment)).

If applications ever start hitting the K_max=1000 limit, then seeing a tx with ~1000 outputs (e.g., 1001 with one change output) means chances are high that it's an SP transaction. I'm not sure how real this concern is, but a limit of K_max = 2324 would mitigate it, and IMO an additional factor of ~2x doesn't matter that much given that we're already in the order of seconds. (Plus, some of @w0xlt's suggested optimizations could bring scanning time down again.)

@real-or-random
Contributor

> [1] I haven't included this commit in the secp PR yet as it's unclear to me if we want to bloat the repository with >120k new LOC and several mega-bytes of additional test data, considering a test case for this already exists. Something to be discussed further in bitcoin-core/secp256k1#1765.

I tend to think that part of this discussion belongs here. If the BIP has test vectors, the simplest thing for implementations (libsecp256k1 or others) is to use them all. Treating a handful of vectors differently creates friction. But having multi-MB files in the repo may also create friction.

Perhaps a better encoding can be found? On the sending side, a "count" arg could be added. On the recipient side, we'll need the K+1 outputs but maybe it's enough to "expect" only the number of found outputs (instead of their data). I am not sure if the added implementation complexity is worth the hassle.
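One possible shape for such a compact encoding (purely illustrative; the field names and digest scheme are made up for this sketch, not the existing test vector format):

```python
import hashlib
import json

def compact_vector(outputs, expected_found):
    """Hypothetical compact test vector: instead of listing 1001
    expected output keys verbatim, store how many were generated, a
    digest over them for cross-checking, and how many the scanner is
    expected to find under the K_max rule."""
    digest = hashlib.sha256("".join(outputs).encode()).hexdigest()
    return {
        "num_outputs": len(outputs),
        "outputs_sha256": digest,       # implementations recompute and compare
        "expected_num_found": expected_found,
    }

# 1001 dummy 32-byte hex output keys; a scanner honoring K_max finds 1000.
vec = compact_vector([f"{k:064x}" for k in range(1001)], expected_found=1000)
print(json.dumps(vec, indent=2))
```

This keeps the vector file small at the cost of some extra logic in every consumer, which is exactly the implementation-complexity trade-off mentioned above.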
