Conversation
…ueRef Add test helpers (builders and assertion utilities) and 29 new tests covering deserialization, error handling, root history, bloom filters, queue inclusion proofs, and configuration variants. Migrate manual account construction to use builders where possible and remove dead code.
📝 Walkthrough: Adds immutable read-only "Ref" types (BatchedMerkleTreeRef, BatchedQueueRef, BloomFilterRef), a BloomFilterRef-backed non-inclusion check, generic immutable/mutable ZeroCopy cyclic-vector backing, many tests and builders, and updates consumers and several APIs to use immutable slices and the new Ref types.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/src/lib.rs`:
- Around line 181-182: Reorder the pub mod declarations in lib.rs to satisfy the
project's alphabetical module order: move pub mod merkle_tree_ref and pub mod
queue_ref so they appear in alphabetical sequence relative to the other module
declarations (e.g., ensure queue, merkle_tree_ref, queue_ref are ordered
alphabetically or adjust surrounding modules so merkle_tree_ref and queue_ref
are placed correctly); update the module block containing merkle_tree_ref and
queue_ref to match the lint's expected ordering.
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs`:
- Around line 145-159: The docstring for check_input_queue_non_inclusion is
misleading: the function iterates all batches unconditionally and calls
Batch::check_non_inclusion_ref for each bloom filter, it does not skip zeroed
filters; update the comment on check_input_queue_non_inclusion to state it
checks every bloom filter (including zeroed ones) or remove the "which are not
zeroed" phrase and optionally reference that zeroed filters trivially pass
check_non_inclusion_ref rather than implying a conditional skip via
bloom_filter_is_zeroed().
In `@program-libs/batched-merkle-tree/src/queue_ref.rs`:
- Around line 40-62: Change BatchedQueueRef::from_account_info to accept
program_id: &[u8; 32] (matching BatchedMerkleTreeRef::from_account_info) instead
of &Pubkey, update its use of check_account_info to pass program_id directly
(remove program_id.to_bytes()), and update callers such as
BatchedQueueRef::output_from_account_info to pass
&ACCOUNT_COMPRESSION_PROGRAM_ID (or the &[u8;32] constant) instead of creating a
Pubkey; refer to the function name BatchedQueueRef::from_account_info, the
caller output_from_account_info, and the
check_account_info::<BatchedQueueAccount, A> call when making these edits.
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs`:
- Around line 10-34: The test test_merkle_tree_ref_matches_mutable uses a
hard-coded account_data size (3376) which is fragile; replace that with a call
to the existing MerkleTreeAccountBuilder to compute the correct size (e.g., use
MerkleTreeAccountBuilder::build() or a builder method that returns the required
bytes) and pass that buffer into BatchedMerkleTreeAccount::init, or if you must
keep a literal add a comment explaining how 3376 was derived; update the
account_data initialization to use MerkleTreeAccountBuilder so future layout
changes won’t break BatchedMerkleTreeAccount::init.
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 284-297: The test test_queue_ref_pubkey correctly asserts that
BatchedQueueRef::output_from_bytes returns Pubkey::default(), but add a
complementary test that constructs an AccountInfo (or uses
QueueAccountBuilder::output_queue().build() account info) and calls
BatchedQueueRef::output_from_account_info (or from_account_info) to ensure the
pubkey is extracted from the account info; locate and update tests around
QueueAccountBuilder::output_queue, BatchedQueueRef::output_from_bytes and the
from_account_info method to create an account with a unique pubkey and assert
*queue_ref.pubkey() == that unique pubkey.
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`:
- Around line 209-219: The test currently hardcodes byte offsets in
build_with_wrong_queue_type (and similarly build_with_wrong_tree_type) which is
brittle; instead compute the offsets programmatically: add/import
memoffset::offset_of!, compute the queue_type offset as discriminator_size (8) +
offset_of!(BatchedQueueMetadata, metadata) + offset_of!(QueueMetadata,
queue_type), and write wrong_type at that computed index; for tree_type compute
discriminator_size + offset_of!(YourAccountStruct, tree_type) (or the correct
nested path) similarly. Use the exact struct names seen in the code
(build_with_wrong_queue_type, build_with_wrong_tree_type, BatchedQueueMetadata,
QueueMetadata, tree_type) so the test locates fields reliably rather than using
magic numbers.
In `@program-libs/bloom-filter/src/lib.rs`:
- Around line 132-169: Add rustdoc comments to the public BloomFilterRef type
and its public methods (BloomFilterRef, BloomFilterRef::new, and
BloomFilterRef::contains) describing their purpose, parameters, return values,
and any invariants (e.g., that store.len()*8 == capacity). Follow the existing
project's doc style used by BloomFilter::calculate_bloom_filter_size for
phrasing and include examples or notes on read-only semantics and expected input
(32-byte value for contains) where appropriate.
```rust
pub mod merkle_tree_ref;
pub mod queue_ref;
```
Module declarations should follow alphabetical ordering per lint.
The pipeline lint flags a module ordering issue. Currently merkle_tree_ref and queue_ref are grouped together after queue, but alphabetical ordering (which the project enforces) would place them differently.
🔧 Proposed fix for alphabetical ordering

```diff
 pub mod merkle_tree;
 pub mod merkle_tree_metadata;
+pub mod merkle_tree_ref;
 pub mod queue;
-pub mod merkle_tree_ref;
-pub mod queue_ref;
 pub mod queue_batch_metadata;
+pub mod queue_ref;
 pub mod rollover_address_tree;
```
```rust
/// Check non-inclusion in all bloom filters which are not zeroed.
pub fn check_input_queue_non_inclusion(
    &self,
    value: &[u8; 32],
) -> Result<(), BatchedMerkleTreeError> {
    for i in 0..self.queue_batches.num_batches as usize {
        Batch::check_non_inclusion_ref(
            self.queue_batches.batches[i].num_iters as usize,
            self.queue_batches.batches[i].bloom_filter_capacity,
            value,
            self.bloom_filter_stores[i],
        )?;
    }
    Ok(())
}
```
Doc comment is slightly misleading — the code checks all bloom filters, not just "not zeroed" ones.
The comment says "Check non-inclusion in all bloom filters which are not zeroed," but the implementation unconditionally iterates all num_batches bloom filters without checking bloom_filter_is_zeroed(). This is functionally correct — a zeroed bloom filter has no bits set, so check_non_inclusion_ref will always pass on it. But the comment creates a false expectation that there's a conditional skip happening.
📝 Suggested doc fix

```diff
-    /// Check non-inclusion in all bloom filters which are not zeroed.
+    /// Check non-inclusion across all bloom filters.
+    /// Zeroed bloom filters pass trivially (no bits set).
```
```rust
impl<'a> BatchedQueueRef<'a> {
    /// Deserialize an output queue (immutable) from account info.
    pub fn output_from_account_info<A: AccountInfoTrait>(
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        Self::from_account_info::<OUTPUT_STATE_QUEUE_TYPE_V2, A>(
            &Pubkey::new_from_array(ACCOUNT_COMPRESSION_PROGRAM_ID),
            account_info,
        )
    }

    pub(crate) fn from_account_info<const QUEUE_TYPE: u64, A: AccountInfoTrait>(
        program_id: &Pubkey,
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        check_account_info::<BatchedQueueAccount, A>(&program_id.to_bytes(), account_info)?;
        let data = account_info.try_borrow_data()?;
        // SAFETY: We extend the lifetime of the borrowed data to 'a.
        // The borrow is shared (immutable), so dropping the Ref guard
        // restores pinocchio's borrow state correctly for shared borrows.
        let data_slice: &'a [u8] = unsafe { std::slice::from_raw_parts(data.as_ptr(), data.len()) };
        Self::from_bytes::<QUEUE_TYPE>(data_slice, account_info.key().into())
    }
```
🧹 Nitpick | 🔵 Trivial
Minor API inconsistency: program_id parameter type differs from merkle_tree_ref.rs.
In merkle_tree_ref.rs, from_account_info takes program_id: &[u8; 32], while here it takes program_id: &Pubkey. Both ultimately pass &[u8; 32] to check_account_info. This works, but the divergent signatures between sibling types can be surprising for consumers.
Consider aligning the parameter types across BatchedMerkleTreeRef::from_account_info and BatchedQueueRef::from_account_info — both are pub(crate), so the blast radius is small.
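For concreteness, a minimal sketch of the aligned API, assembled only from the excerpts quoted in this review (trait bounds, helper functions, and the `ACCOUNT_COMPRESSION_PROGRAM_ID` byte-array constant are taken from the snippets above, not verified against the full module):

```rust
// Sketch only — mirrors the &[u8; 32] signature used by
// BatchedMerkleTreeRef::from_account_info, per this comment.
impl<'a> BatchedQueueRef<'a> {
    /// Deserialize an output queue (immutable) from account info.
    pub fn output_from_account_info<A: AccountInfoTrait>(
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        // Pass the program id constant directly; no Pubkey round-trip.
        Self::from_account_info::<OUTPUT_STATE_QUEUE_TYPE_V2, A>(
            &ACCOUNT_COMPRESSION_PROGRAM_ID,
            account_info,
        )
    }

    pub(crate) fn from_account_info<const QUEUE_TYPE: u64, A: AccountInfoTrait>(
        program_id: &[u8; 32], // was &Pubkey
        account_info: &A,
    ) -> Result<BatchedQueueRef<'a>, BatchedMerkleTreeError> {
        // The byte array now passes straight through.
        check_account_info::<BatchedQueueAccount, A>(program_id, account_info)?;
        let data = account_info.try_borrow_data()?;
        // SAFETY: unchanged from the current implementation.
        let data_slice: &'a [u8] =
            unsafe { std::slice::from_raw_parts(data.as_ptr(), data.len()) };
        Self::from_bytes::<QUEUE_TYPE>(data_slice, account_info.key().into())
    }
}
```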
```rust
#[test]
fn test_queue_ref_pubkey() {
    let (data, _expected_pubkey) = QueueAccountBuilder::output_queue().build();
    let queue_ref = BatchedQueueRef::output_from_bytes(&data).unwrap();

    // pubkey() should return the default pubkey (since output_from_bytes uses Pubkey::default())
    // Note: The builder creates the account with a unique pubkey but output_from_bytes
    // passes Pubkey::default() in the from_bytes call
    assert_eq!(
        *queue_ref.pubkey(),
        Pubkey::default(),
        "pubkey() should return default pubkey"
    );
}
```
🧹 Nitpick | 🔵 Trivial
The pubkey() returning Pubkey::default() is worth a closer look.
The comment on Lines 289-291 explains that output_from_bytes passes Pubkey::default() internally, which is why the builder's unique pubkey is ignored. This is the expected behavior for the from_bytes path (as opposed to from_account_info which extracts the pubkey from the account info). The test documents this correctly, but consider whether a complementary test exercising the from_account_info path (where pubkey is extracted from the actual account) would be valuable for completeness.
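Such a complementary test could look roughly like the sketch below. Note that `make_account_info` is a hypothetical helper invented for illustration; substitute whatever AccountInfo test constructor this repo actually provides.

```rust
// Hypothetical sketch — `make_account_info` is not a real helper here.
#[test]
fn test_queue_ref_pubkey_from_account_info() {
    let (data, expected_pubkey) = QueueAccountBuilder::output_queue().build();
    // Wrap the serialized account bytes under the builder's unique pubkey.
    let account_info = make_account_info(&expected_pubkey, data);
    let queue_ref = BatchedQueueRef::output_from_account_info(&account_info).unwrap();
    // Unlike output_from_bytes, this path reads the pubkey from the account info.
    assert_eq!(*queue_ref.pubkey(), expected_pubkey);
}
```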
```rust
pub fn build_with_wrong_queue_type(self, wrong_type: u64) -> (Vec<u8>, Pubkey) {
    let (mut data, pubkey) = self.build();
    // In BatchedQueueMetadata, metadata is QueueMetadata which has:
    // AccessMetadata (3 pubkeys = 96 bytes) + RolloverMetadata (7*u64 = 56 bytes) +
    // associated_merkle_tree (32 bytes) + next_queue (32 bytes) + queue_type (8 bytes)
    // Total offset from start of metadata = 96 + 56 + 32 + 32 = 216
    // Plus 8 for discriminator = 224
    let queue_type_offset = 8 + 96 + 56 + 32 + 32;
    data[queue_type_offset..queue_type_offset + 8].copy_from_slice(&wrong_type.to_le_bytes());
    (data, pubkey)
}
```
🧹 Nitpick | 🔵 Trivial
Hardcoded offset for queue_type is fragile.
The offset calculation 8 + 96 + 56 + 32 + 32 = 224 depends on the exact memory layout of QueueMetadata. If any field is added, reordered, or padded differently, this will silently corrupt the wrong bytes. Consider computing the offset programmatically, e.g., by finding the byte position of queue_type via memoffset::offset_of! or by serializing, modifying, and re-serializing.
The same concern applies to build_with_wrong_tree_type (line 116) which hardcodes tree_type_offset = 8.
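A sketch of the programmatic-offset approach with `memoffset::offset_of!` (on recent toolchains, `core::mem::offset_of!` works too). The field paths are taken from this comment, not verified against the actual struct definitions, and the approach assumes the serialized layout matches the Rust layout (i.e., these are repr(C)/zero-copy types):

```rust
// Sketch only — field paths (BatchedQueueMetadata.metadata,
// QueueMetadata.queue_type) are assumptions from the review comment.
use memoffset::offset_of;

pub fn build_with_wrong_queue_type(self, wrong_type: u64) -> (Vec<u8>, Pubkey) {
    let (mut data, pubkey) = self.build();
    const DISCRIMINATOR_SIZE: usize = 8;
    // Derived from the type layout, so it survives field additions,
    // reordering, or padding changes.
    let queue_type_offset = DISCRIMINATOR_SIZE
        + offset_of!(BatchedQueueMetadata, metadata)
        + offset_of!(QueueMetadata, queue_type);
    data[queue_type_offset..queue_type_offset + 8]
        .copy_from_slice(&wrong_type.to_le_bytes());
    (data, pubkey)
}
```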
🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/tests/test_helpers/account_builders.rs`
around lines 209 - 219, The test currently hardcodes byte offsets in
build_with_wrong_queue_type (and similarly build_with_wrong_tree_type) which is
brittle; instead compute the offsets programmatically: add/import
memoffset::offset_of!, compute the queue_type offset as discriminator_size (8) +
offset_of!(BatchedQueueMetadata, metadata) + offset_of!(QueueMetadata,
queue_type), and write wrong_type at that computed index; for tree_type compute
discriminator_size + offset_of!(YourAccountStruct, tree_type) (or the correct
nested path) similarly. Use the exact struct names seen in the code
(build_with_wrong_queue_type, build_with_wrong_tree_type, BatchedQueueMetadata,
QueueMetadata, tree_type) so the test locates fields reliably rather than using
magic numbers.
```rust
/// Immutable bloom filter reference for read-only access.
///
/// Uses `&'a [u8]` instead of `&'a mut [u8]` for the store,
/// enabling shared borrows of account data.
pub struct BloomFilterRef<'a> {
    pub num_iters: usize,
    pub capacity: u64,
    pub store: &'a [u8],
}

impl<'a> BloomFilterRef<'a> {
    pub fn new(
        num_iters: usize,
        capacity: u64,
        store: &'a [u8],
    ) -> Result<Self, BloomFilterError> {
        if store.len() * 8 != capacity as usize {
            return Err(BloomFilterError::InvalidStoreCapacity);
        }
        Ok(Self {
            num_iters,
            capacity,
            store,
        })
    }

    pub fn contains(&self, value: &[u8; 32]) -> bool {
        use bitvec::prelude::*;
        let bits = BitSlice::<u8, Msb0>::from_slice(self.store);
        for i in 0..self.num_iters {
            let probe_index = BloomFilter::probe_index_keccak(value, i, &self.capacity);
            if !bits[probe_index] {
                return false;
            }
        }
        true
    }
}
```
🧹 Nitpick | 🔵 Trivial
Well-designed immutable view — clean separation of concerns.
The BloomFilterRef struct and its contains implementation correctly mirror the read-only semantics of BloomFilter::contains while avoiding the unnecessary &mut self requirement. The membership check logic is consistent with the mutable counterpart, and the from_slice usage (vs. from_slice_mut) properly reflects the immutable borrow.
One small gap against the coding guidelines: the public methods new and contains on BloomFilterRef lack rustdoc comments. As per coding guidelines, program-libs/bloom-filter/**/*.rs should "Document public API types and methods using rustdoc comments, including… BloomFilter::calculate_bloom_filter_size (optimal bit count calculation)…" — the same standard applies to the new BloomFilterRef methods.
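One possible shape for those docs, shown over the signatures and bodies from the excerpt above. The doc wording is the reviewer's suggestion, not existing project text, and should be adjusted to match the phrasing style of the existing BloomFilter docs:

```rust
/// Read-only view over a bloom filter's backing store.
///
/// Mirrors `BloomFilter`, but borrows the store as `&'a [u8]`, so it can
/// be created from shared borrows of account data. No method mutates the
/// store.
pub struct BloomFilterRef<'a> {
    /// Number of hash probes performed per value.
    pub num_iters: usize,
    /// Filter capacity in bits. Invariant: `store.len() * 8 == capacity`.
    pub capacity: u64,
    /// Shared borrow of the backing bytes.
    pub store: &'a [u8],
}

impl<'a> BloomFilterRef<'a> {
    /// Creates a read-only bloom filter view over `store`.
    ///
    /// Returns `BloomFilterError::InvalidStoreCapacity` if
    /// `store.len() * 8 != capacity`.
    pub fn new(
        num_iters: usize,
        capacity: u64,
        store: &'a [u8],
    ) -> Result<Self, BloomFilterError> {
        if store.len() * 8 != capacity as usize {
            return Err(BloomFilterError::InvalidStoreCapacity);
        }
        Ok(Self {
            num_iters,
            capacity,
            store,
        })
    }

    /// Probabilistic membership check for a 32-byte `value`.
    ///
    /// Returns `true` if every probe bit is set (the value *may* have been
    /// inserted); `false` guarantees the value was never inserted.
    pub fn contains(&self, value: &[u8; 32]) -> bool {
        use bitvec::prelude::*;
        let bits = BitSlice::<u8, Msb0>::from_slice(self.store);
        for i in 0..self.num_iters {
            let probe_index = BloomFilter::probe_index_keccak(value, i, &self.capacity);
            if !bits[probe_index] {
                return false;
            }
        }
        true
    }
}
```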
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/src/merkle_tree_ref.rs`:
- Around line 119-127: The bloom filter parsing currently calls
account_data.split_at(bloom_filter_size) without validating buffer length, which
can panic on truncated input; update the code around
ZeroCopyCyclicVecRefU64::from_bytes_at and the subsequent bf_store_0/bf_store_1
parsing to first check that account_data.len() >=
bloom_filter_size.checked_mul(2).ok_or(ZeroCopyError::ShortBuffer)? (or
equivalent) and if not return Err(ZeroCopyError::ShortBuffer), then safely
slice/split the two bloom filter stores (bf_store_0, bf_store_1); reference the
ZeroCopyCyclicVecRefU64::from_bytes_at call and the bf_store_0/bf_store_1
variables to locate the fix.
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 10-23: Replace the hard‑coded account buffer length (vec![0u8;
1000]) with a size derived from the QueueAccountBuilder (or an explicit helper)
so the test uses the same serialized layout as production; call the builder’s
size/serialized_len helper (or build a default QueueAccount via
QueueAccountBuilder and use its serialized size) and allocate account_data using
that derived length before initializing the queue structures like QueueMetadata
and queue init.
```rust
// 3. Parse root history (cyclic vec).
let (root_history, account_data) =
    ZeroCopyCyclicVecRefU64::<[u8; 32]>::from_bytes_at(account_data)?;

// 4. Parse bloom filter stores (immutable).
let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
```
Guard bloom filter parsing against short buffers.
Line 125 uses split_at without a length check; malformed or truncated input can panic instead of returning a structured error. Prefer an explicit size check and return a ZeroCopyError.
🛠️ Suggested fix

```diff
-    let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
-    let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
-    let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
+    let bloom_filter_size = metadata.queue_batches.get_bloomfilter_size_bytes();
+    let total_bf_bytes = bloom_filter_size
+        .checked_mul(2)
+        .ok_or(ZeroCopyError::InvalidConversion)?;
+    if account_data.len() < total_bf_bytes {
+        return Err(
+            ZeroCopyError::InsufficientMemoryAllocated(account_data.len(), total_bf_bytes)
+                .into(),
+        );
+    }
+    let (bf_store_0, account_data) = account_data.split_at(bloom_filter_size);
+    let (bf_store_1, _account_data) = account_data.split_at(bloom_filter_size);
```
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs`:
- Line 6: The import statement brings in pubkey::Pubkey but Pubkey is unused and
causes a build error; update the import to remove the unused symbol (keep
TreeType) so the file imports only what it uses (i.e., remove Pubkey from the
use light_compressed_account::{...} declaration).
- Around line 185-221: Run rustfmt (cargo fmt) on the test file to fix
formatting issues (trailing whitespace/indentation) reported around the match
arms manipulating BatchedMerkleTreeAccount and BloomFilter; specifically
reformat the match block that contains the
BatchedMerkleTreeAccount::state_from_bytes calls, the BloomFilter::new/insert
sequence, and the sequence_number increment so rustfmt normalizes whitespace and
indentation and resolves the lint warnings.
In `@program-libs/batched-merkle-tree/tests/queue_ref.rs`:
- Around line 74-78: Replace the boolean equality assertions with idiomatic
boolean assertions: change uses of
assert_eq!(queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(), true,
"...") and the similar assertions on lines 79-83 to
assert!(queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(), "...")
(and corresponding calls for other test hashes), keeping the same message but
using assert! for conciseness; locate the calls to
queue_ref.prove_inclusion_by_index and update them accordingly.
```rust
use light_batched_merkle_tree::{
    merkle_tree::BatchedMerkleTreeAccount, merkle_tree_ref::BatchedMerkleTreeRef,
};
use light_compressed_account::{pubkey::Pubkey, TreeType};
```
Unused import pubkey::Pubkey breaks the build.
The CI fails with unused import: pubkey::Pubkey. This import isn't used anywhere in the file — all Pubkey values come from builder return types. Since Rust treats unused imports as errors in CI (likely #[deny(unused_imports)] or -D warnings), this blocks the pipeline.
🔧 Fix: remove the unused import

```diff
-use light_compressed_account::{pubkey::Pubkey, TreeType};
+use light_compressed_account::TreeType;
```
🧰 Tools
🪛 GitHub Actions: rust
[error] 6-6: unused import: pubkey::Pubkey
🪛 GitHub Check: Test batched-merkle-tree-simulate
[failure] 6-6: unused import: pubkey::Pubkey
🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs` at line 6, The
import statement brings in pubkey::Pubkey but Pubkey is unused and causes a
build error; update the import to remove the unused symbol (keep TreeType) so
the file imports only what it uses (i.e., remove Pubkey from the use
light_compressed_account::{...} declaration).
```rust
0 => {
    // Push random root.
    let mut tree_mut =
        BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
            .unwrap();
    tree_mut.root_history.push(rng.gen());
}
1 => {
    // Insert into bloom filter of a random batch.
    let mut tree_mut =
        BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
            .unwrap();
    let batch_idx = rng.gen_range(0..2usize);
    let num_iters =
        tree_mut.queue_batches.batches[batch_idx].num_iters as usize;
    let capacity =
        tree_mut.queue_batches.batches[batch_idx].bloom_filter_capacity;
    let value: [u8; 32] = rng.gen();
    let mut bf = BloomFilter::new(
        num_iters,
        capacity,
        &mut tree_mut.bloom_filter_stores[batch_idx],
    )
    .unwrap();
    bf.insert(&value).unwrap();
}
2 => {
    // Increment sequence number.
    let mut tree_mut =
        BatchedMerkleTreeAccount::state_from_bytes(&mut account_data, &pubkey)
            .unwrap();
    tree_mut.sequence_number += 1;
}
_ => unreachable!(),
}

// Clone data so we can deserialize both paths independently.
```
Formatting lint warnings on these lines.
The static analysis (lint check) reports formatting diffs on lines 185, 193, 211, and 221. These are likely trailing whitespace or indentation mismatches that rustfmt would fix. Run cargo fmt on this file to resolve.
🧰 Tools
🪛 GitHub Check: lint
[warning] 221-221: Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
[warning] 211-211: Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
[warning] 193-193: Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
[warning] 185-185: Diff in /home/runner/work/light-protocol/light-protocol/program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs
🤖 Prompt for AI Agents
In `@program-libs/batched-merkle-tree/tests/merkle_tree_ref.rs` around lines 185 -
221, Run rustfmt (cargo fmt) on the test file to fix formatting issues (trailing
whitespace/indentation) reported around the match arms manipulating
BatchedMerkleTreeAccount and BloomFilter; specifically reformat the match block
that contains the BatchedMerkleTreeAccount::state_from_bytes calls, the
BloomFilter::new/insert sequence, and the sequence_number increment so rustfmt
normalizes whitespace and indentation and resolves the lint warnings.
```rust
assert_eq!(
    queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
    true,
    "Valid index with matching hash should return true"
);
```
🧹 Nitpick | 🔵 Trivial
Prefer assert! over assert_eq!(…, true, …) for boolean results.
Minor style point — assert_eq!(expr, true, msg) is functionally correct but slightly less idiomatic Rust. assert!(expr, msg) is more concise and produces the same panic on failure.
Same applies to lines 79-83.
✏️ Optional cleanup

```diff
-    assert_eq!(
-        queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
-        true,
-        "Valid index with matching hash should return true"
-    );
-    assert_eq!(
-        queue_ref.prove_inclusion_by_index(1, &test_hash_2).unwrap(),
-        true,
-        "Second element with matching hash should return true"
-    );
+    assert!(
+        queue_ref.prove_inclusion_by_index(0, &test_hash_1).unwrap(),
+        "Valid index with matching hash should return true"
+    );
+    assert!(
+        queue_ref.prove_inclusion_by_index(1, &test_hash_2).unwrap(),
+        "Second element with matching hash should return true"
+    );
```