feat: add bmt from account info #2283

Open
ananas-block wants to merge 5 commits into main from jorrit/feat-add-bmt-from-account-info

Conversation

@ananas-block (Contributor) commented Feb 12, 2026

Summary by CodeRabbit

  • New Features

    • Read-only reference types for Merkle tree and queue accounts for safe immutable access.
    • Immutable Bloom filter view enabling non-inclusion checks without mutating state.
    • Generic read-only cyclic vector support (immutable backings) and new accessors.
  • Refactor

    • Verification and account-access paths updated to use read-only references and immutable slices.
  • Tests

    • Extensive new tests and test helpers covering deserialization, inclusion/non-inclusion, root history, and builders.

…ueRef

Add test helpers (builders and assertion utilities) and 29 new tests
covering deserialization, error handling, root history, bloom filters,
queue inclusion proofs, and configuration variants. Migrate manual
account construction to use builders where possible and remove dead code.
@coderabbitai bot (Contributor) commented Feb 12, 2026

📝 Walkthrough

Walkthrough

Adds immutable read-only "Ref" types (BatchedMerkleTreeRef, BatchedQueueRef, BloomFilterRef), a BloomFilterRef-backed non-inclusion check, generic immutable/mutable ZeroCopy cyclic-vector backing, many tests and builders, and updates consumers and several APIs to use immutable slices and the new Ref types.
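The "Ref" types described above follow a common pattern: a read-only view that holds shared borrows of the account's parsed sections and derefs to its metadata. A minimal sketch of that pattern (names like `TreeMetadata`/`TreeRef` are illustrative stand-ins, not the crate's actual types):

```rust
use std::ops::Deref;

#[repr(C)]
#[derive(Debug)]
struct TreeMetadata {
    next_index: u64,
    root_history_len: u64,
}

// Read-only view: every field is a shared borrow, so no &mut access to the
// underlying account data is ever required.
struct TreeRef<'a> {
    metadata: &'a TreeMetadata,
    roots: &'a [[u8; 32]],
}

// Deref to metadata lets callers read metadata fields directly on the ref,
// mirroring the "Deref to metadata" design in the PR.
impl<'a> Deref for TreeRef<'a> {
    type Target = TreeMetadata;
    fn deref(&self) -> &TreeMetadata {
        self.metadata
    }
}

impl<'a> TreeRef<'a> {
    /// Fetch a root by index without any mutable access.
    fn get_root_by_index(&self, index: usize) -> Option<&[u8; 32]> {
        self.roots.get(index)
    }
}

fn main() {
    let metadata = TreeMetadata { next_index: 42, root_history_len: 2 };
    let roots = [[1u8; 32], [2u8; 32]];
    let tree = TreeRef { metadata: &metadata, roots: &roots };
    // Deref in action: metadata fields read directly on the ref.
    assert_eq!(tree.next_index, 42);
    assert_eq!(tree.get_root_by_index(1), Some(&[2u8; 32]));
    assert!(tree.get_root_by_index(5).is_none());
}
```

Because the view only ever takes shared borrows, several such refs can coexist over the same account data, which is what enables the immutable `&[AcpAccount]` signatures mentioned below.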

Changes

Cohort / File(s) Summary
Read-only Merkle & Queue refs
program-libs/batched-merkle-tree/src/merkle_tree_ref.rs, program-libs/batched-merkle-tree/src/queue_ref.rs, program-libs/batched-merkle-tree/src/lib.rs
Add BatchedMerkleTreeRef<'a> and BatchedQueueRef<'a>: discriminators, constructors from account info/bytes, Deref to metadata, pubkey/root accessors, and high-level inclusion/non-inclusion helpers that read stores without mutating.
Bloom filter ref
program-libs/bloom-filter/src/lib.rs
Add BloomFilterRef<'a> with new() and contains() to perform membership checks over immutable store slices; add unit test verifying read-only view against mutable inserts.
Batch non-inclusion (ref)
program-libs/batched-merkle-tree/src/batch.rs
Add Batch::check_non_inclusion_ref(num_iters, bloom_filter_capacity, value, store) that uses BloomFilterRef on immutable store slices and returns NonInclusionCheckFailed or Ok(()).
ZeroCopy cyclic vec generic backing
program-libs/zero-copy/src/cyclic_vec.rs, program-libs/zero-copy/tests/*
Make ZeroCopyCyclicVec generic over backing B to support both &[u8] and &mut [u8]; add ZeroCopyCyclicVecRefU64 alias, read-only iterator APIs, and mutable construction/access impls; update tests for new type parameter.
Tests and test helpers
program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs, program-libs/batched-merkle-tree/tests/queue_ref.rs, program-libs/batched-merkle-tree/tests/test_helpers/*
Add extensive tests for ref types (deserialization, errors, root history, bloom checks), plus MerkleTreeAccountBuilder and QueueAccountBuilder and assertion helpers to build valid/invalid test accounts.
Consumer updates (imports & usage)
programs/registry/src/lib.rs, programs/system/src/accounts/remaining_account_checks.rs, programs/system/src/cpi_context/process_cpi_context.rs, sdk-libs/token-sdk/src/instruction/create_mints.rs
Replace imports/usages of BatchedMerkleTreeAccount/BatchedQueueAccount with BatchedMerkleTreeRef/BatchedQueueRef and update call sites to use the new ref constructors.
Immutability-driven signature changes
programs/system/src/processor/process.rs, programs/system/src/processor/read_only_account.rs, programs/system/src/processor/read_only_address.rs, programs/system/src/utils.rs, programs/system/src/processor/verify_proof.rs
Change various functions and local bindings to take/return immutable &[AcpAccount] and immutable references; update helpers to use immutable slices and use get_root_by_index for root-history access with Ref APIs.
Misc small runtime checks
program-libs/batched-merkle-tree/src/merkle_tree.rs
Add discriminator validation in an address_from_bytes path for additional runtime validation.
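The cyclic-vector change above (making the container generic over its backing `B` so it works for both `&[u8]` and `&mut [u8]`) can be sketched as follows. This is an illustrative stand-in, not the crate's actual `ZeroCopyCyclicVec` API:

```rust
// A ring buffer generic over its backing storage B.
struct CyclicVec<B> {
    backing: B,
    first: usize, // index of the logically first element
    len: usize,
}

// Read-only API: available for any backing that exposes bytes.
impl<B: AsRef<[u8]>> CyclicVec<B> {
    fn get(&self, i: usize) -> Option<u8> {
        if i >= self.len {
            return None;
        }
        let bytes = self.backing.as_ref();
        Some(bytes[(self.first + i) % bytes.len()])
    }

    fn iter(&self) -> impl Iterator<Item = u8> + '_ {
        (0..self.len).filter_map(move |i| self.get(i))
    }
}

// Mutable API: only compiles for backings that also grant mutable access,
// so a &[u8]-backed instance is read-only by construction.
impl<B: AsRef<[u8]> + AsMut<[u8]>> CyclicVec<B> {
    fn push(&mut self, v: u8) {
        let cap = self.backing.as_ref().len();
        let idx = (self.first + self.len) % cap;
        self.backing.as_mut()[idx] = v;
        if self.len < cap {
            self.len += 1;
        } else {
            self.first = (self.first + 1) % cap; // overwrite oldest
        }
    }
}

fn main() {
    let mut buf = [0u8; 4];
    {
        let mut v = CyclicVec { backing: &mut buf[..], first: 0, len: 0 };
        for x in 1..=5 {
            v.push(x); // 5 overwrites 1 in a 4-slot ring
        }
        assert_eq!(v.iter().collect::<Vec<_>>(), vec![2, 3, 4, 5]);
    }
    // Same type, instantiated with an immutable backing: a read-only view.
    let r = CyclicVec { backing: &buf[..], first: 1, len: 4 };
    assert_eq!(r.get(0), Some(2));
}
```

The `ZeroCopyCyclicVecRefU64` alias in the PR presumably fixes `B = &[u8]` in the same way the read-only instantiation does here.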

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested labels

ai-review

Suggested reviewers

  • sergeytimoshin

Poem

🌲 Read-only eyes upon the tree,
refs that borrow, calm and free,
bloom filters whisper "not inside",
cyclic buffers stretch both sides,
tests cheer quietly for harmony.

🚥 Pre-merge checks | ✅ 3 | ❌ 1
❌ Failed checks (1 warning)
Check name Status Explanation Resolution
Merge Conflict Detection ⚠️ Warning ❌ Merge conflicts detected (67 files):

⚔️ Cargo.lock (content)
⚔️ js/compressed-token/src/v3/instructions/mint-to.ts (content)
⚔️ program-libs/batched-merkle-tree/src/batch.rs (content)
⚔️ program-libs/batched-merkle-tree/src/lib.rs (content)
⚔️ program-libs/batched-merkle-tree/src/merkle_tree.rs (content)
⚔️ program-libs/bloom-filter/src/lib.rs (content)
⚔️ program-libs/compressed-account/src/instruction_data/zero_copy.rs (content)
⚔️ program-libs/token-interface/src/instructions/mint_action/builder.rs (content)
⚔️ program-libs/token-interface/src/instructions/mint_action/instruction_data.rs (content)
⚔️ program-libs/token-interface/src/instructions/transfer2/instruction_data.rs (content)
⚔️ program-libs/zero-copy/src/cyclic_vec.rs (content)
⚔️ program-libs/zero-copy/tests/cyclic_vec_tests.rs (content)
⚔️ program-tests/compressed-token-test/tests/compress_only/ata_decompress.rs (content)
⚔️ program-tests/compressed-token-test/tests/light_token/extensions_failing.rs (content)
⚔️ program-tests/compressed-token-test/tests/light_token/transfer_checked.rs (content)
⚔️ program-tests/compressed-token-test/tests/token_pool.rs (content)
⚔️ program-tests/compressed-token-test/tests/transfer2/no_system_program_cpi_failing.rs (content)
⚔️ program-tests/justfile (content)
⚔️ program-tests/system-test/Cargo.toml (content)
⚔️ programs/compressed-token/program/docs/compressed_token/MINT_ACTION.md (content)
⚔️ programs/compressed-token/program/docs/compressed_token/TRANSFER2.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/APPROVE.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/BURN.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/BURN_CHECKED.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/MINT_TO.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/MINT_TO_CHECKED.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/REVOKE.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/TRANSFER.md (content)
⚔️ programs/compressed-token/program/docs/ctoken/TRANSFER_CHECKED.md (content)
⚔️ programs/compressed-token/program/src/compressed_token/mint_action/actions/process_actions.rs (content)
⚔️ programs/compressed-token/program/src/compressed_token/transfer2/compression/mod.rs (content)
⚔️ programs/compressed-token/program/src/ctoken/approve_revoke.rs (content)
⚔️ programs/compressed-token/program/src/ctoken/burn.rs (content)
⚔️ programs/compressed-token/program/src/ctoken/mint_to.rs (content)
⚔️ programs/compressed-token/program/src/ctoken/transfer/checked.rs (content)
⚔️ programs/compressed-token/program/src/ctoken/transfer/default.rs (content)
⚔️ programs/compressed-token/program/src/ctoken/transfer/shared.rs (content)
⚔️ programs/compressed-token/program/src/shared/compressible_top_up.rs (content)
⚔️ programs/compressed-token/program/tests/check_extensions.rs (content)
⚔️ programs/compressed-token/program/tests/extensions_metadata.rs (content)
⚔️ programs/compressed-token/program/tests/mint.rs (content)
⚔️ programs/registry/src/compressible/compressed_token/compress_and_close.rs (content)
⚔️ programs/registry/src/lib.rs (content)
⚔️ programs/system/src/accounts/remaining_account_checks.rs (content)
⚔️ programs/system/src/cpi_context/process_cpi_context.rs (content)
⚔️ programs/system/src/cpi_context/state.rs (content)
⚔️ programs/system/src/invoke_cpi/processor.rs (content)
⚔️ programs/system/src/processor/process.rs (content)
⚔️ programs/system/src/processor/read_only_account.rs (content)
⚔️ programs/system/src/processor/read_only_address.rs (content)
⚔️ programs/system/src/processor/verify_proof.rs (content)
⚔️ programs/system/src/utils.rs (content)
⚔️ programs/system/tests/cpi_context.rs (content)
⚔️ sdk-libs/compressed-token-sdk/src/compressed_token/v2/transfer2/instruction.rs (content)
⚔️ sdk-libs/event/tests/parse_test.rs (content)
⚔️ sdk-libs/instruction-decoder/src/programs/ctoken.rs (content)
⚔️ sdk-libs/sdk-types/src/interface/cpi/create_mints.rs (content)
⚔️ sdk-libs/sdk-types/src/interface/program/decompression/processor.rs (content)
⚔️ sdk-libs/token-pinocchio/src/instruction/transfer_from_spl.rs (content)
⚔️ sdk-libs/token-pinocchio/src/instruction/transfer_to_spl.rs (content)
⚔️ sdk-libs/token-sdk/src/instruction/burn.rs (content)
⚔️ sdk-libs/token-sdk/src/instruction/burn_checked.rs (content)
⚔️ sdk-libs/token-sdk/src/instruction/create_mints.rs (content)
⚔️ sdk-libs/token-sdk/src/instruction/mint_to.rs (content)
⚔️ sdk-libs/token-sdk/src/instruction/mint_to_checked.rs (content)
⚔️ sdk-libs/token-sdk/src/instruction/transfer.rs (content)
⚔️ sdk-libs/token-sdk/src/instruction/transfer_checked.rs (content)

These conflicts must be resolved before merging into main.
Resolve conflicts locally and push changes to this branch.
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title 'feat: add bmt from account info' directly reflects the PR's core contribution: introducing reference-based types (BatchedMerkleTreeRef and BatchedQueueRef) that enable reading batched Merkle tree and queue state from account info without mutability.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 70.00%.


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 7

🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/src/lib.rs`:
- Around line 181-182: Reorder the pub mod declarations in lib.rs to satisfy the
project's alphabetical module order: move pub mod merkle_tree_ref and pub mod
queue_ref so they appear in alphabetical sequence relative to the other module
declarations (e.g., ensure queue, merkle_tree_ref, queue_ref are ordered
alphabetically or adjust surrounding modules so merkle_tree_ref and queue_ref
are placed correctly); update the module block containing merkle_tree_ref and
queue_ref to match the lint's expected ordering.

In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs`:
- Around line 145-159: The docstring for check_input_queue_non_inclusion is
misleading: the function iterates all batches unconditionally and calls
Batch::check_non_inclusion_ref for each bloom filter, it does not skip zeroed
filters; update the comment on check_input_queue_non_inclusion to state it
checks every bloom filter (including zeroed ones) or remove the "which are not
zeroed" phrase and optionally reference that zeroed filters trivially pass
check_non_inclusion_ref rather than implying a conditional skip via
bloom_filter_is_zeroed().

In `@program-libs/batched-merkle-tree/src/queue_ref.rs`:
- Around line 40-62: Change BatchedQueueRef::from_account_info to accept
program_id: &[u8; 32] (matching BatchedMerkleTreeRef::from_account_info) instead
of &Pubkey, update its use of check_account_info to pass program_id directly
(remove program_id.to_bytes()), and update callers such as
BatchedQueueRef::output_from_account_info to pass
&ACCOUNT_COMPRESSION_PROGRAM_ID (or the &[u8;32] constant) instead of creating a
Pubkey; refer to the function name BatchedQueueRef::from_account_info, the
caller output_from_account_info, and the
check_account_info::<BatchedQueueAccount, A> call when making these edits.

In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs`:
- Around line 10-34: The test test_merkle_tree_ref_matches_mutable uses a
hard-coded account_data size (3376) which is fragile; replace that with a call
to the existing MerkleTreeAccountBuilder to compute the correct size (e.g., use
MerkleTreeAccountBuilder::build() or a builder method that returns the required
bytes) and pass that buffer into BatchedMerkleTreeAccount::init, or if you must
keep a literal add a comment explaining how 3376 was derived; update the
account_data initialization to use MerkleTreeAccountBuilder so future layout
changes won’t break BatchedMerkleTreeAccount::init.

In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 284-297: The test test_queue_ref_pubkey correctly asserts that
BatchedQueueRef::output_from_bytes returns Pubkey::default(), but add a
complementary test that constructs an AccountInfo (or uses
QueueAccountBuilder::output_queue().build() account info) and calls
BatchedQueueRef::output_from_account_info (or from_account_info) to ensure the
pubkey is extracted from the account info; locate and update tests around
QueueAccountBuilder::output_queue, BatchedQueueRef::output_from_bytes and the
from_account_info method to create an account with a unique pubkey and assert
*queue_ref.pubkey() == that unique pubkey.

In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`:
- Around line 209-219: The test currently hardcodes byte offsets in
build_with_wrong_queue_type (and similarly build_with_wrong_tree_type) which is
brittle; instead compute the offsets programmatically: add/import
memoffset::offset_of!, compute the queue_type offset as discriminator_size (8) +
offset_of!(BatchedQueueMetadata, metadata) + offset_of!(QueueMetadata,
queue_type), and write wrong_type at that computed index; for tree_type compute
discriminator_size + offset_of!(YourAccountStruct, tree_type) (or the correct
nested path) similarly. Use the exact struct names seen in the code
(build_with_wrong_queue_type, build_with_wrong_tree_type, BatchedQueueMetadata,
QueueMetadata, tree_type) so the test locates fields reliably rather than using
magic numbers.

In `@program-libs/bloom-filter/src/lib.rs`:
- Around line 132-169: Add rustdoc comments to the public BloomFilterRef type
and its public methods (BloomFilterRef, BloomFilterRef::new, and
BloomFilterRef::contains) describing their purpose, parameters, return values,
and any invariants (e.g., that store.len()*8 == capacity). Follow the existing
project's doc style used by BloomFilter::calculate_bloom_filter_size for
phrasing and include examples or notes on read-only semantics and expected input
(32-byte value for contains) where appropriate.

Comment on lines +181 to +182
pub mod merkle_tree_ref;
pub mod queue_ref;

⚠️ Potential issue | 🟡 Minor

Module declarations should follow alphabetical ordering per lint.

The pipeline lint flags a module ordering issue. Currently merkle_tree_ref and queue_ref are grouped together after queue, but alphabetical ordering (which the project enforces) would place them differently.

🔧 Proposed fix for alphabetical ordering
 pub mod merkle_tree;
 pub mod merkle_tree_metadata;
+pub mod merkle_tree_ref;
 pub mod queue;
-pub mod merkle_tree_ref;
-pub mod queue_ref;
 pub mod queue_batch_metadata;
+pub mod queue_ref;
 pub mod rollover_address_tree;

Comment on lines +145 to +159
    /// Check non-inclusion in all bloom filters which are not zeroed.
    pub fn check_input_queue_non_inclusion(
        &self,
        value: &[u8; 32],
    ) -> Result<(), BatchedMerkleTreeError> {
        for i in 0..self.queue_batches.num_batches as usize {
            Batch::check_non_inclusion_ref(
                self.queue_batches.batches[i].num_iters as usize,
                self.queue_batches.batches[i].bloom_filter_capacity,
                value,
                self.bloom_filter_stores[i],
            )?;
        }
        Ok(())
    }

⚠️ Potential issue | 🟡 Minor

Doc comment is slightly misleading — the code checks all bloom filters, not just "not zeroed" ones.

The comment says "Check non-inclusion in all bloom filters which are not zeroed," but the implementation unconditionally iterates all num_batches bloom filters without checking bloom_filter_is_zeroed(). This is functionally correct — a zeroed bloom filter has no bits set, so check_non_inclusion_ref will always pass on it. But the comment creates a false expectation that there's a conditional skip happening.

📝 Suggested doc fix
-    /// Check non-inclusion in all bloom filters which are not zeroed.
+    /// Check non-inclusion across all bloom filters.
+    /// Zeroed bloom filters pass trivially (no bits set).

Comment on lines +40 to +62
impl<'a> BatchedQueueRef<'a> {
    /// Deserialize an output queue (immutable) from account info.
    pub fn output_from_account_info<A: AccountInfoTrait>(
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        Self::from_account_info::<OUTPUT_STATE_QUEUE_TYPE_V2, A>(
            &Pubkey::new_from_array(ACCOUNT_COMPRESSION_PROGRAM_ID),
            account_info,
        )
    }

    pub(crate) fn from_account_info<const QUEUE_TYPE: u64, A: AccountInfoTrait>(
        program_id: &Pubkey,
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        check_account_info::<BatchedQueueAccount, A>(&program_id.to_bytes(), account_info)?;
        let data = account_info.try_borrow_data()?;
        // SAFETY: We extend the lifetime of the borrowed data to 'a.
        // The borrow is shared (immutable), so dropping the Ref guard
        // restores pinocchio's borrow state correctly for shared borrows.
        let data_slice: &'a [u8] = unsafe { std::slice::from_raw_parts(data.as_ptr(), data.len()) };
        Self::from_bytes::<QUEUE_TYPE>(data_slice, account_info.key().into())
    }

🧹 Nitpick | 🔵 Trivial

Minor API inconsistency: program_id parameter type differs from merkle_tree_ref.rs.

In merkle_tree_ref.rs, from_account_info takes program_id: &[u8; 32], while here it takes program_id: &Pubkey. Both ultimately pass &[u8; 32] to check_account_info. This works, but the divergent signatures between sibling types can be surprising for consumers.

Consider aligning the parameter types across BatchedMerkleTreeRef::from_account_info and BatchedQueueRef::from_account_info — both are pub(crate), so the blast radius is small.


Comment on lines 284 to 297
#[test]
fn test_queue_ref_pubkey() {
    let (data, _expected_pubkey) = QueueAccountBuilder::output_queue().build();
    let queue_ref = BatchedQueueRef::output_from_bytes(&data).unwrap();

    // pubkey() should return the default pubkey (since output_from_bytes uses Pubkey::default())
    // Note: The builder creates the account with a unique pubkey but output_from_bytes
    // passes Pubkey::default() in the from_bytes call
    assert_eq!(
        *queue_ref.pubkey(),
        Pubkey::default(),
        "pubkey() should return default pubkey"
    );
}

🧹 Nitpick | 🔵 Trivial

The pubkey() returning Pubkey::default() is worth a closer look.

The comment on Lines 289-291 explains that output_from_bytes passes Pubkey::default() internally, which is why the builder's unique pubkey is ignored. This is the expected behavior for the from_bytes path (as opposed to from_account_info which extracts the pubkey from the account info). The test documents this correctly, but consider whether a complementary test exercising the from_account_info path (where pubkey is extracted from the actual account) would be valuable for completeness.


Comment on lines +209 to +219
    pub fn build_with_wrong_queue_type(self, wrong_type: u64) -> (Vec<u8>, Pubkey) {
        let (mut data, pubkey) = self.build();
        // In BatchedQueueMetadata, metadata is QueueMetadata which has:
        // AccessMetadata (3 pubkeys = 96 bytes) + RolloverMetadata (7*u64 = 56 bytes) +
        // associated_merkle_tree (32 bytes) + next_queue (32 bytes) + queue_type (8 bytes)
        // Total offset from start of metadata = 96 + 56 + 32 + 32 = 216
        // Plus 8 for discriminator = 224
        let queue_type_offset = 8 + 96 + 56 + 32 + 32;
        data[queue_type_offset..queue_type_offset + 8].copy_from_slice(&wrong_type.to_le_bytes());
        (data, pubkey)
    }

🧹 Nitpick | 🔵 Trivial

Hardcoded offset for queue_type is fragile.

The offset calculation 8 + 96 + 56 + 32 + 32 = 224 depends on the exact memory layout of QueueMetadata. If any field is added, reordered, or padded differently, this will silently corrupt the wrong bytes. Consider computing the offset programmatically, e.g., by finding the byte position of queue_type via memoffset::offset_of! or by serializing, modifying, and re-serializing.

The same concern applies to build_with_wrong_tree_type (line 116) which hardcodes tree_type_offset = 8.
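A standalone sketch of the programmatic-offset approach the comment suggests. The structs below are stand-ins for `BatchedQueueMetadata`/`QueueMetadata` (same field sizes, assumed `#[repr(C)]`); `std::mem::offset_of!` (stable since Rust 1.77) resolves the offsets at compile time, so field reordering or padding changes cannot silently corrupt the wrong bytes:

```rust
use std::mem::offset_of;

#[repr(C)]
struct QueueMeta {
    access: [u8; 96],   // stand-in for AccessMetadata (3 pubkeys)
    rollover: [u64; 7], // stand-in for RolloverMetadata (7 * u64)
    associated_merkle_tree: [u8; 32],
    next_queue: [u8; 32],
    queue_type: u64,
}

#[repr(C)]
struct BatchedQueueMeta {
    metadata: QueueMeta,
    // ... further fields omitted in this sketch
}

const DISCRIMINATOR_SIZE: usize = 8;

// Compose single-field offsets instead of hardcoding 8 + 96 + 56 + 32 + 32.
fn queue_type_offset() -> usize {
    DISCRIMINATOR_SIZE
        + offset_of!(BatchedQueueMeta, metadata)
        + offset_of!(QueueMeta, queue_type)
}

fn main() {
    // Matches the hand-derived 224 for this layout, but stays correct if
    // the layout ever changes.
    assert_eq!(queue_type_offset(), 224);
    println!("queue_type offset: {}", queue_type_offset());
}
```

Note that nested paths like `offset_of!(BatchedQueueMeta, metadata.queue_type)` only stabilized later (Rust 1.82), so summing single-field offsets as above is the more portable form.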

🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`
around lines 209 - 219, The test currently hardcodes byte offsets in
build_with_wrong_queue_type (and similarly build_with_wrong_tree_type) which is
brittle; instead compute the offsets programmatically: add/import
memoffset::offset_of!, compute the queue_type offset as discriminator_size (8) +
offset_of!(BatchedQueueMetadata, metadata) + offset_of!(QueueMetadata,
queue_type), and write wrong_type at that computed index; for tree_type compute
discriminator_size + offset_of!(YourAccountStruct, tree_type) (or the correct
nested path) similarly. Use the exact struct names seen in the code
(build_with_wrong_queue_type, build_with_wrong_tree_type, BatchedQueueMetadata,
QueueMetadata, tree_type) so the test locates fields reliably rather than using
magic numbers.

Comment on lines +132 to +169
/// Immutable bloom filter reference for read-only access.
///
/// Uses `&'a [u8]` instead of `&'a mut [u8]` for the store,
/// enabling shared borrows of account data.
pub struct BloomFilterRef<'a> {
    pub num_iters: usize,
    pub capacity: u64,
    pub store: &'a [u8],
}

impl<'a> BloomFilterRef<'a> {
    pub fn new(
        num_iters: usize,
        capacity: u64,
        store: &'a [u8],
    ) -> Result<Self, BloomFilterError> {
        if store.len() * 8 != capacity as usize {
            return Err(BloomFilterError::InvalidStoreCapacity);
        }
        Ok(Self {
            num_iters,
            capacity,
            store,
        })
    }

    pub fn contains(&self, value: &[u8; 32]) -> bool {
        use bitvec::prelude::*;
        let bits = BitSlice::<u8, Msb0>::from_slice(self.store);
        for i in 0..self.num_iters {
            let probe_index = BloomFilter::probe_index_keccak(value, i, &self.capacity);
            if !bits[probe_index] {
                return false;
            }
        }
        true
    }
}

🧹 Nitpick | 🔵 Trivial

Well-designed immutable view — clean separation of concerns.

The BloomFilterRef struct and its contains implementation correctly mirror the read-only semantics of BloomFilter::contains while avoiding the unnecessary &mut self requirement. The membership check logic is consistent with the mutable counterpart, and the from_slice usage (vs. from_slice_mut) properly reflects the immutable borrow.

One small gap against the coding guidelines: the public methods new and contains on BloomFilterRef lack rustdoc comments. As per coding guidelines, program-libs/bloom-filter/**/*.rs should "Document public API types and methods using rustdoc comments, including… BloomFilter::calculate_bloom_filter_size (optimal bit count calculation)…" — the same standard applies to the new BloomFilterRef methods.
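As a hedged sketch of what the requested rustdoc and the read-only semantics could look like: the type below is an illustrative stand-in for `BloomFilterRef`, using plain byte indexing instead of `bitvec` and a toy probe function in place of the crate's keccak-based `probe_index_keccak`:

```rust
/// Immutable view over a bloom filter's bit store (illustrative stand-in).
pub struct BloomFilterView<'a> {
    /// Number of probe iterations per membership check.
    pub num_iters: usize,
    /// Filter capacity in bits. Invariant: `store.len() * 8 == capacity`.
    pub capacity: u64,
    /// Shared borrow of the bit store; no `&mut` is needed to query.
    pub store: &'a [u8],
}

impl<'a> BloomFilterView<'a> {
    /// Creates a read-only view, validating that the store holds exactly
    /// `capacity` bits. Returns `None` on a capacity mismatch.
    pub fn new(num_iters: usize, capacity: u64, store: &'a [u8]) -> Option<Self> {
        if store.len() * 8 != capacity as usize {
            return None;
        }
        Some(Self { num_iters, capacity, store })
    }

    /// Returns `true` if all probed bits for the 32-byte `value` are set.
    /// False positives are possible; false negatives are not.
    pub fn contains(&self, value: &[u8; 32]) -> bool {
        for i in 0..self.num_iters {
            let idx = Self::probe_index(value, i, self.capacity);
            // MSB-first bit addressing, matching bitvec's Msb0 ordering.
            if self.store[idx / 8] & (0x80 >> (idx % 8)) == 0 {
                return false;
            }
        }
        true
    }

    /// Toy probe: a deterministic mix of `value` and the iteration index,
    /// standing in for the keccak-based probe.
    fn probe_index(value: &[u8; 32], iter: usize, capacity: u64) -> usize {
        let mut acc = iter as u64;
        for &b in value {
            acc = acc.wrapping_mul(31).wrapping_add(b as u64);
        }
        (acc % capacity) as usize
    }
}

fn main() {
    let mut store = vec![0u8; 16]; // 128-bit filter
    let value = [7u8; 32];
    // Insert by setting probed bits mutably, then query through the view.
    for i in 0..3 {
        let idx = BloomFilterView::probe_index(&value, i, 128);
        store[idx / 8] |= 0x80 >> (idx % 8);
    }
    let view = BloomFilterView::new(3, 128, &store).unwrap();
    assert!(view.contains(&value));
    assert!(BloomFilterView::new(3, 64, &store).is_none());
}
```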


@coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs`:
- Around line 119-127: The bloom filter parsing currently calls
account_data.split_at(bloom_filter_size) without validating buffer length, which
can panic on truncated input; update the code around
ZeroCopyCyclicVecRefU64::from_bytes_at and the subsequent bf_store_0/bf_store_1
parsing to first check that account_data.len() >=
bloom_filter_size.checked_mul(2).ok_or(ZeroCopyError::ShortBuffer)? (or
equivalent) and if not return Err(ZeroCopyError::ShortBuffer), then safely
slice/split the two bloom filter stores (bf_store_0, bf_store_1); reference the
ZeroCopyCyclicVecRefU64::from_bytes_at call and the bf_store_0/bf_store_1
variables to locate the fix.

In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 10-23: Replace the hard‑coded account buffer length (vec![0u8;
1000]) with a size derived from the QueueAccountBuilder (or an explicit helper)
so the test uses the same serialized layout as production; call the builder’s
size/serialized_len helper (or build a default QueueAccount via
QueueAccountBuilder and use its serialized size) and allocate account_data using
that derived length before initializing the queue structures like QueueMetadata
and queue init.

Comment on lines +119 to +127
        // 3. Parse root history (cyclic vec).
        let (root_history, account_data) =
            ZeroCopyCyclicVecRefU64::<[u8; 32]>::from_bytes_at(account_data)?;

        // 4. Parse bloom filter stores (immutable).
        let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
        let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
        let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);


⚠️ Potential issue | 🟠 Major

Guard bloom filter parsing against short buffers.
Line 125 uses split_at without a length check; malformed or truncated input can panic instead of returning a structured error. Prefer an explicit size check and return a ZeroCopyError.

🛠️ Suggested fix
-        let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
-        let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
-        let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
+        let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
+        let total_bf_bytes = bloom_filter_size
+            .checked_mul(2)
+            .ok_or(ZeroCopyError::InvalidConversion)?;
+        if account_data.len() < total_bf_bytes {
+            return Err(
+                ZeroCopyError::InsufficientMemoryAllocated(account_data.len(), total_bf_bytes)
+                    .into(),
+            );
+        }
+        let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
+        let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs` around lines 119 -
127, The bloom filter parsing currently calls
account_data.split_at(bloom_filter_size) without validating buffer length, which
can panic on truncated input; update the code around
ZeroCopyCyclicVecRefU64::from_bytes_at and the subsequent bf_store_0/bf_store_1
parsing to first check that account_data.len() >=
bloom_filter_size.checked_mul(2).ok_or(ZeroCopyError::ShortBuffer)? (or
equivalent) and if not return Err(ZeroCopyError::ShortBuffer), then safely
slice/split the two bloom filter stores (bf_store_0, bf_store_1); reference the
ZeroCopyCyclicVecRefU64::from_bytes_at call and the bf_store_0/bf_store_1
variables to locate the fix.
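A standalone sketch of the guard, with a hypothetical free function and plain `String` errors standing in for the crate's `ZeroCopyError` variants: validating the total length up front turns a would-be panic on truncated input into a recoverable `Err`.

```rust
/// Split two equally sized bloom filter stores off the front of `account_data`,
/// returning an error (instead of panicking) when the buffer is too short.
/// Hypothetical helper; the real code would return the crate's ZeroCopyError.
fn split_bloom_filter_stores(
    account_data: &[u8],
    bloom_filter_size: usize,
) -> Result<(&[u8], &[u8]), String> {
    // Guard the multiplication itself: 2 * size must not overflow usize.
    let total = bloom_filter_size
        .checked_mul(2)
        .ok_or_else(|| "bloom filter size overflows usize".to_string())?;
    // Guard the buffer length before any split_at call.
    if account_data.len() < total {
        return Err(format!(
            "insufficient bytes: have {}, need {}",
            account_data.len(),
            total
        ));
    }
    // Both splits are now guaranteed to be in bounds.
    let (bf_store_0, rest) = account_data.split_at(bloom_filter_size);
    let (bf_store_1, _rest) = rest.split_at(bloom_filter_size);
    Ok((bf_store_0, bf_store_1))
}

fn main() {
    let data = vec![0u8; 10];
    // Exactly enough bytes for two 5-byte stores: Ok.
    assert!(split_bloom_filter_stores(&data, 5).is_ok());
    // Truncated buffer: structured error, not a panic.
    assert!(split_bloom_filter_stores(&data, 6).is_err());
    println!("ok");
}
```

On Rust 1.80+, `slice::split_at_checked` gives the same per-call protection by returning `None` instead of panicking, which could replace the explicit length check.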

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs`:
- Line 6: The import statement brings in pubkey::Pubkey but Pubkey is unused and
causes a build error; update the import to remove the unused symbol (keep
TreeType) so the file imports only what it uses (i.e., remove Pubkey from the
use light_compressed_account::{...} declaration).
- Around line 185-221: Run rustfmt (cargo fmt) on the test file to fix
formatting issues (trailing whitespace/indentation) reported around the match
arms manipulating BatchedMerkleTreeAccount and BloomFilter; specifically
reformat the match block that contains the
BatchedMerkleTreeAccount::state_from_bytes calls, the BloomFilter::new/insert
sequence, and the sequence_number increment so rustfmt normalizes whitespace and
indentation and resolves the lint warnings.

In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 74-78: Replace the boolean equality assertions with idiomatic
boolean assertions: change uses of
assert_eq!(queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(), true,
"...") and the similar assertions on lines 79-83 to
assert!(queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(), "...")
(and corresponding calls for other test hashes), keeping the same message but
using assert! for conciseness; locate the calls to
queue_ref.prove_inclusion_by_index and update them accordingly.

use light_batched_merkle_tree::{
merkle_tree::BatchedMerkleTreeAccount, merkle_tree_ref::BatchedMerkleTreeRef,
};
use light_compressed_account::{pubkey::Pubkey, TreeType};

⚠️ Potential issue | 🔴 Critical

Unused import pubkey::Pubkey breaks the build.

The CI fails with `unused import: pubkey::Pubkey`. This import isn't used anywhere in the file; all `Pubkey` values come from builder return types. Since the CI build escalates unused-import warnings to errors (likely a crate-level `#![deny(unused_imports)]` or `-D warnings`), this blocks the pipeline.

🔧 Fix: remove the unused import
-use light_compressed_account::{pubkey::Pubkey, TreeType};
+use light_compressed_account::TreeType;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
use light_compressed_account::{pubkey::Pubkey, TreeType};
use light_compressed_account::TreeType;
🧰 Tools
🪛 GitHub Actions: rust

[error] 6-6: unused import: pubkey::Pubkey

🪛 GitHub Check: Test batched-merkle-tree-simulate

[failure] 6-6:
unused import: pubkey::Pubkey

🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs` at line 6, The
import statement brings in pubkey::Pubkey but Pubkey is unused and causes a
build error; update the import to remove the unused symbol (keep TreeType) so
the file imports only what it uses (i.e., remove Pubkey from the use
light_compressed_account::{...} declaration).

Comment on lines +185 to +221
0 => {
// Push random root.
let mut tree_mut =
BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
.unwrap();
tree_mut.root_history.push(rng.gen());
}
1 => {
// Insert into bloom filter of a random batch.
let mut tree_mut =
BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
.unwrap();
let batch_idx = rng.gen_range(0..2usize);
let num_iters =
tree_mut.queue_batches.batches[batch_idx].num_iters as usize;
let capacity =
tree_mut.queue_batches.batches[batch_idx].bloom_filter_capacity;
let value: [u8; 32] = rng.gen();
let mut bf = BloomFilter::new(
num_iters,
capacity,
&mut tree_mut.bloom_filter_stores[batch_idx],
)
.unwrap();
bf.insert(&value).unwrap();
}
2 => {
// Increment sequence number.
let mut tree_mut =
BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
.unwrap();
tree_mut.sequence_number += 1;
}
_ => unreachable!(),
}

// Clone data so we can deserialize both paths independently.

⚠️ Potential issue | 🟡 Minor

Formatting lint warnings on these lines.

The static analysis (lint check) reports formatting diffs on lines 185, 193, 211, and 221. These are likely trailing whitespace or indentation mismatches that rustfmt would fix. Run cargo fmt on this file to resolve.

🧰 Tools
🪛 GitHub Check: lint

[warning] 221-221:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs


[warning] 211-211:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs


[warning] 193-193:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs


[warning] 185-185:
Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs

🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs` around lines 185 -
221, Run rustfmt (cargo fmt) on the test file to fix formatting issues (trailing
whitespace/indentation) reported around the match arms manipulating
BatchedMerkleTreeAccount and BloomFilter; specifically reformat the match block
that contains the BatchedMerkleTreeAccount::state_from_bytes calls, the
BloomFilter::new/insert sequence, and the sequence_number increment so rustfmt
normalizes whitespace and indentation and resolves the lint warnings.

Comment on lines +74 to +78
assert_eq!(
queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
true,
"Valid index with matching hash should return true"
);

🧹 Nitpick | 🔵 Trivial

Prefer assert! over assert_eq!(…, true, …) for boolean results.

Minor style point — assert_eq!(expr, true, msg) is functionally correct but slightly less idiomatic Rust. assert!(expr, msg) is more concise and produces the same panic on failure.

Same applies to lines 79-83.

✏️ Optional cleanup
-    assert_eq!(
-        queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
-        true,
-        "Valid index with matching hash should return true"
-    );
-    assert_eq!(
-        queue_ref.prove_inclusion_by_index(1, &test_hash_2).unwrap(),
-        true,
-        "Second element with matching hash should return true"
-    );
+    assert!(
+        queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
+        "Valid index with matching hash should return true"
+    );
+    assert!(
+        queue_ref.prove_inclusion_by_index(1, &test_hash_2).unwrap(),
+        "Second element with matching hash should return true"
+    );
📝 Committable suggestion


Suggested change
assert_eq!(
queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
true,
"Valid index with matching hash should return true"
);
assert!(
queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
"Valid index with matching hash should return true"
);
assert!(
queue_ref.prove_inclusion_by_index(1, &test_hash_2).unwrap(),
"Second element with matching hash should return true"
);
🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs` around lines 74 - 78,
Replace the boolean equality assertions with idiomatic boolean assertions:
change uses of assert_eq!(queue_ref.prove_inclusion_by_index(0,
&test_hash_1).unwrap(), true, "...") and the similar assertions on lines 79-83
to assert!(queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(), "...")
(and corresponding calls for other test hashes), keeping the same message but
using assert! for conciseness; locate the calls to
queue_ref.prove_inclusion_by_index and update them accordingly.

