Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough: Refactors client interfaces and RPC account-loading.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Rpc as Rpc impl
    participant Indexer as Compressed Indexer
    participant Store as On-chain Store
    Client->>Rpc: get_account_interface(pubkey)
    Rpc->>Indexer: lookup compressed_accounts by pubkey
    Indexer-->>Rpc: compressed account found?/none
    alt compressed account found
        Rpc->>Store: reconstruct AccountInterface (cold token/mint/account)
        Store-->>Rpc: AccountInterface
    else not found
        Rpc->>Indexer: derive PDA seed & lookup
        Indexer-->>Rpc: PDA result?/none
        alt PDA token found
            Rpc->>Store: build CompressedTokenAccount -> AccountInterface::cold_token (use ata_owner_map)
            Store-->>Rpc: AccountInterface::cold_token
        else
            Rpc->>Indexer: get_compressed_token_accounts_by_owner(owner)
            Indexer-->>Rpc: maybe single result
            Rpc-->>Client: AccountInterface::cold_token or NotFound/Error
        end
    end
    Rpc-->>Client: AccountInterface (hot or cold, with wallet_owner if ATA)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 inconclusive)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/interface/account_interface.rs`:
- Around line 408-429: The cold branch in TryFrom<AccountInterface> for
TokenAccountInterface currently uses ct.token.owner instead of
parsing the PodAccount from ai.account.data like the hot path does, which can
drop owner_override and break ata_bump() that relies on parsed.owner; change the
cold path to parse the PodAccount (same parsing logic used in the None/hot
branch) from ai.account.data and use parsed.owner when constructing
TokenAccountInterface::cold (returning the parsed PodAccount) instead of using
ct.token.owner.
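A minimal sketch of that change (the parsing helper name and constructor signature are assumptions; the real code should reuse whatever parsing the hot/None branch already performs on ai.account.data):

```rust
// Sketch: the cold branch mirrors the hot branch's parsing so that
// parsed.owner (which preserves owner_override for ata_bump()) is used
// instead of ct.token.owner. `parse_pod_account` is a hypothetical stand-in
// for the hot path's existing parsing logic.
Some(ColdContext::Token(ct)) => {
    let parsed: PodAccount = parse_pod_account(&ai.account.data)?;
    TokenAccountInterface::cold(ai.pubkey, parsed, ct) // returns the parsed PodAccount
}
```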
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
sdk-libs/program-test/src/program_test/rpc.rs (1)
696-803: 🧹 Nitpick | 🔵 Trivial
Four-pass lookup with progressive fallback is well-structured but has three concerns.

1. Sequential async calls in pass four (lines 786-803): Each remaining address triggers an individual get_compressed_token_accounts_by_owner call in a serial loop. For large batches with many misses, this serializes latency. In program-test this is tolerable, but if this pattern gets copied to production RPC (photon_indexer), it would be a bottleneck.

2. Silent swallow of ambiguous results (line 795): Same issue as get_account_interface — when items.len() > 1, the address silently stays None. Consider logging or erroring for consistency with get_token_account_interface.

3. Duplicated token account construction logic: The pattern of building CompressedTokenAccount { token, account } from indexer results appears in lines 443-446, 525-528, 770-773 (and equivalently in get_token_account_interface). Extracting a helper would reduce the duplication:

Helper to reduce duplication

```rust
fn compressed_token_from_indexer_result(
    token_acc: &TokenAccountWithContext, // adjust type to match
) -> Result<CompressedTokenAccount, RpcError> {
    let compressed_account: CompressedAccount = token_acc
        .compressed_account
        .clone()
        .try_into()
        .map_err(|e| RpcError::CustomError(format!("conversion error: {:?}", e)))?;
    Ok(CompressedTokenAccount {
        token: token_acc.token_data.clone(),
        account: compressed_account,
    })
}
```
🤖 Fix all issues with AI agents
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 786-803: The fourth pass loop that calls
get_compressed_token_accounts_by_owner on owner_lookup_pubkeys silently leaves
results[original_idx] as None when items.len() > 1, which diverges from
get_token_account_interface/get_account_interface behavior; update the loop
(referencing owner_lookup_pubkeys, owner_lookup_indices,
get_compressed_token_accounts_by_owner, AccountInterface::cold_token, and
results) to detect the ambiguous case (items.len() > 1) and surface an error
instead of dropping the result — either mirror get_token_account_interface’s
error return for ambiguous lookups (return an RpcError with a clear message) or
collect/record partial errors for batch responses so callers are notified of
ambiguous token lookups.
- Around line 448-455: The code passes *address as wallet_owner into
AccountInterface::cold_token (setting PodAccount.owner = address) which is
correct for program-owned token accounts but is a best-effort default and
incorrect for associated token accounts (ATAs) because ata_bump() will fail when
the wallet owner does not equal the ATA pubkey; add a doc comment above the
token lookup paths (where AccountInterface::cold_token is used) explaining this
limitation and recommending callers use get_associated_token_account_interface
when they need correct ATA semantics, and mention ata_bump(), PodAccount.owner,
and AccountInterface::cold_token for clarity.
- Around line 429-471: get_account_interface currently treats multiple matching
token accounts as a non-error and returns None, while
get_token_account_interface returns RpcError::CustomError on ambiguity; update
the fourth-try branch in get_account_interface to mirror
get_token_account_interface by returning an explicit RpcError::CustomError when
result.value.items.len() > 1 (and keep the existing None/Ok behavior for
zero/one matches), or at minimum emit a warning via the logger before returning
None; locate the logic around get_compressed_token_accounts_by_owner and the
handling of items (the block that constructs AccountInterface::cold_token) and
change the ambiguous-case handling to return an RpcError::CustomError with a
clear message about ambiguous token accounts.
```rust
// Third try: lookup in token_compressed_accounts
// Try by onchain_pubkey discriminator, PDA seed, then by token_data.owner
let token_acc = indexer
    .find_token_account_by_onchain_pubkey(&address.to_bytes())
    .or_else(|| indexer.find_token_account_by_pda_seed(&address.to_bytes()));

if let Some(token_acc) = token_acc {
    let compressed_account: CompressedAccount = token_acc
        .compressed_account
        .clone()
        .try_into()
        .map_err(|e| RpcError::CustomError(format!("conversion error: {:?}", e)))?;

    let compressed_token = CompressedTokenAccount {
        token: token_acc.token_data.clone(),
        account: compressed_account,
    };

    return Ok(Response {
        context: Context { slot },
        value: Some(AccountInterface::cold_token(
            *address,
            compressed_token,
            *address,
        )),
    });
}

// Fourth try: lookup by token_data.owner (for program-owned tokens where owner == pubkey)
let result = indexer
    .get_compressed_token_accounts_by_owner(address, None, None)
    .await
    .map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;

let items = result.value.items;
if items.len() == 1 {
    let token_acc = items.into_iter().next().unwrap();
    return Ok(Response {
        context: Context { slot },
        value: Some(AccountInterface::cold_token(*address, token_acc, *address)),
    });
}
```
Inconsistent ambiguity handling between get_account_interface and get_token_account_interface.
The fourth try (lines 464-471) silently returns None when multiple token accounts match, whereas get_token_account_interface (line 548-554) returns an explicit RpcError::CustomError for the same condition. This inconsistency can mask bugs — a caller might assume the account doesn't exist when it's actually ambiguous.
At minimum, log a warning or match the error behavior:
Suggested fix: log or error on ambiguous results
```diff
 let items = result.value.items;
 if items.len() == 1 {
     let token_acc = items.into_iter().next().unwrap();
     return Ok(Response {
         context: Context { slot },
         value: Some(AccountInterface::cold_token(*address, token_acc, *address)),
     });
+} else if items.len() > 1 {
+    // Match get_token_account_interface behavior
+    return Err(RpcError::CustomError(format!(
+        "Ambiguous lookup: found {} compressed token accounts for address {}. \
+         Use get_compressed_token_accounts_by_owner for multiple accounts.",
+        items.len(),
+        address
+    )));
 }
```

🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/rpc.rs` around lines 429 - 471,
get_account_interface currently treats multiple matching token accounts as a
non-error and returns None, while get_token_account_interface returns
RpcError::CustomError on ambiguity; update the fourth-try branch in
get_account_interface to mirror get_token_account_interface by returning an
explicit RpcError::CustomError when result.value.items.len() > 1 (and keep the
existing None/Ok behavior for zero/one matches), or at minimum emit a warning
via the logger before returning None; locate the logic around
get_compressed_token_accounts_by_owner and the handling of items (the block that
constructs AccountInterface::cold_token) and change the ambiguous-case handling
to return an RpcError::CustomError with a clear message about ambiguous token
accounts.
```rust
// Fourth pass: search by token_data.owner (for program-owned tokens)
for (i, pubkey) in owner_lookup_pubkeys.iter().enumerate() {
    let original_idx = owner_lookup_indices[i];
    let result = indexer
        .get_compressed_token_accounts_by_owner(pubkey, None, None)
        .await
        .map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;

    let items = result.value.items;
    if items.len() == 1 {
        let token_acc = items.into_iter().next().unwrap();
        results[original_idx] = Some(AccountInterface::cold_token(
            *addresses[original_idx],
            token_acc,
            *addresses[original_idx],
        ));
    }
}
```
Fourth pass silently drops ambiguous token lookups — diverges from get_token_account_interface.
Same issue noted in get_account_interface's fourth try. When items.len() > 1, the result stays None rather than returning an error. For batch lookups, a silent drop can be especially confusing since the caller has no signal that the address was found but ambiguous.
At minimum, match get_token_account_interface's behavior or collect these as partial errors in the response.
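One possible shape for surfacing it in the batch path (a sketch; the error message wording is illustrative, and the names match the loop shown above):

```rust
// Sketch: treat >1 matches as a hard error instead of leaving the slot None.
let items = result.value.items;
if items.len() == 1 {
    let token_acc = items.into_iter().next().unwrap();
    results[original_idx] = Some(AccountInterface::cold_token(
        *addresses[original_idx],
        token_acc,
        *addresses[original_idx],
    ));
} else if items.len() > 1 {
    return Err(RpcError::CustomError(format!(
        "Ambiguous lookup: {} compressed token accounts for {}",
        items.len(),
        addresses[original_idx]
    )));
}
// zero matches: leave the slot as None
```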
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/rpc.rs` around lines 786 - 803, The
fourth pass loop that calls get_compressed_token_accounts_by_owner on
owner_lookup_pubkeys silently leaves results[original_idx] as None when
items.len() > 1, which diverges from
get_token_account_interface/get_account_interface behavior; update the loop
(referencing owner_lookup_pubkeys, owner_lookup_indices,
get_compressed_token_accounts_by_owner, AccountInterface::cold_token, and
results) to detect the ambiguous case (items.len() > 1) and surface an error
instead of dropping the result — either mirror get_token_account_interface’s
error return for ambiguous lookups (return an RpcError with a clear message) or
collect/record partial errors for batch responses so callers are notified of
ambiguous token lookups.
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
sdk-libs/client/src/interface/account_interface.rs (1)
196-206: 🧹 Nitpick | 🔵 Trivial
Consider guarding as_mint() against false positives when matching ColdContext::Account.

The Mint struct contains an account_type discriminator field (value 1) at byte offset 165. When as_mint() now matches ColdContext::Account, it attempts Borsh deserialization on any compressed account without validating this field first. If a non-mint account's bytes happen to deserialize successfully as Mint, callers receive garbage data wrapped in Some(...).

While Borsh's layout strictness makes this unlikely in practice, the CompressedAccountData already carries discriminator information that should be validated before deserialization—consistent with the codebase's emphasis on discriminator checks for account type safety.

sdk-libs/program-test/src/program_test/rpc.rs (1)
688-847: 🧹 Nitpick | 🔵 Trivial
Significant duplication between single and batch account-interface lookups.

The four-pass cascade in get_multiple_account_interfaces (hot → onchain_pubkey → PDA seed → token onchain/PDA → token owner) is a structural duplicate of get_account_interface. Any future bug fix or behavior change (like the ambiguity handling above) must be synchronized across both methods.

Consider extracting the single-address resolution logic into a shared helper (see the sketch below) and having the batch method call it per-address (or better, batch the indexer calls where possible). This would eliminate the synchronization burden and reduce the surface area for inconsistencies.
programs/compressed-token/program/docs/EXTENSIONS.md (2)
7-7: ⚠️ Potential issue | 🟡 Minor
Doc says "5 restricted extensions" but the code now has 6.

RESTRICTED_EXTENSION_TYPES in token_2022_extensions.rs is now [ExtensionType; 6] with MintCloseAuthority added. This sentence should read "6 restricted extensions".

📝 Proposed fix

```diff
-The compressed token program supports 16 Token-2022 extension types. **5 restricted extensions** require instruction-level validation checks. Pure mint extensions (metadata, group, etc.) are allowed without explicit instruction support.
+The compressed token program supports 16 Token-2022 extension types. **6 restricted extensions** require instruction-level validation checks. Pure mint extensions (metadata, group, etc.) are allowed without explicit instruction support.
```

As per coding guidelines: "Verify that all function signatures, struct definitions, and behavior described in the documentation accurately match the actual implementation."
34-51: ⚠️ Potential issue | 🟡 Minor
MintCloseAuthority is missing from the quick reference table and the MintExtensionFlags struct listing.

Three documentation gaps for the newly-added restricted extension:

- Quick reference table (lines 34–51): No column for MintCloseAuthority. Other restricted extensions each have a column showing per-instruction behavior.
- MintExtensionFlags struct (lines 368–377): Missing the has_mint_close_authority: bool field that now exists in code.
- Restricted Extensions section: Sections 1–5 cover Pausable, PermanentDelegate, TransferFeeConfig, TransferHook, and DefaultAccountState. There's no "### 6. MintCloseAuthority" section documenting its constraint, validation path, and unchecked instructions.

As per coding guidelines: "Verify that all function signatures, struct definitions, and behavior described in the documentation accurately match the actual implementation" and "Flag any references to deprecated functions, renamed structs, or changed interfaces."
Also applies to: 368-377
sdk-libs/macros/src/light_pdas/accounts/variant.rs (1)
186-227: ⚠️ Potential issue | 🟠 Major
Pinocchio packed seeds have mismatched Pubkey types for data-rooted pubkey seeds.

For pinocchio, generate_seeds_struct_with_backend emits [u8; 32] for all pubkey fields, but generate_packed_seeds_struct_with_backend still uses Pubkey for non-account, non-numeric seeds from packed_field_type. The pack impl then assigns a [u8; 32] into a Pubkey field, which won't compile for data-rooted pubkey seeds. Align the packed seed field type with the pinocchio representation.

🛠️ Proposed fix (pinocchio packed seed field type alignment)

```diff
-let ty = &sf.packed_field_type;
-quote! { pub #name: #ty }
+let ty = if backend.is_pinocchio() && !sf.is_account_seed && !sf.has_le_bytes {
+    quote! { [u8; 32] }
+} else {
+    sf.packed_field_type.clone()
+};
+quote! { pub #name: #ty }
```
🤖 Fix all issues with AI agents
In `@programs/compressed-token/anchor/src/lib.rs`:
- Around line 72-74: The guard in the function that checks token_pool_index
(using token_pool_index and NUM_MAX_POOL_ACCOUNTS) correctly rejects index==0
but returns ErrorCode::InvalidTokenPoolBump whose #[msg(...)] text currently
reads "Max number of token pools reached" and is misleading; update the error
message for InvalidTokenPoolBump to a neutral, accurate phrase such as "Token
pool index out of valid range" so both the index==0 and
index>=NUM_MAX_POOL_ACCOUNTS cases are correctly described (do not add a new
enum variant to avoid shifting discriminants), and ensure references in
add_token_pool and create_token_pool comments/tests reflect the updated message.
In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/compress_or_decompress_ctokens.rs`:
- Around line 98-102: The code currently calls
mint_checks.enforce_extension_state() when decompress_inputs.is_none(), which
enforces extension-state on decompress paths and contradicts EXTENSIONS.md;
update the logic so that mint_checks.enforce_extension_state() is NOT called for
decompress operations — i.e., skip/remove the enforce_extension_state() call
whenever the operation is a decompress (use the existing decompress_inputs flag
or the function's decompress-detection logic in
compress_or_decompress_ctokens.rs to detect decompress) and only enforce
extension state for non-decompress (mint/other) flows.
In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/decompress.rs`:
- Around line 118-138: The code silently drops delegated_amount when an existing
ctoken.delegate() differs from the input_delegate; update the branch that
computes delegate_is_set (the if-let on ctoken.delegate() and the pubkey_eq
check) to emit a diagnostic msg!() warning when a mismatch is detected (include
the existing delegate and input_delegate pubkeys and the delegated_amount)
before returning/continuing, so callers can see when delegate state was skipped;
alternatively, if silent behavior must remain, add a concise inline comment next
to delegate_is_set explaining why differing delegates are intentionally ignored.
In `@programs/compressed-token/program/src/ctoken/transfer/checked.rs`:
- Around line 53-57: The early return from validate_self_transfer(source,
destination, &accounts[ACCOUNT_AUTHORITY])? currently skips all extension
processing (pausable and compressible top-up); instead, before returning for a
self-transfer, explicitly run the same pausable check and compressible top-up
logic used for normal transfers (i.e., call the pausable-extension validator and
the compressible top-up routine you use elsewhere) and only return Ok(()) if
those checks succeed or decide to no-op; alternatively, if the intended
semantics are to allow silent no-ops, add a clear comment documenting that
self-transfers intentionally bypass pausable and top-up checks.
In `@programs/compressed-token/program/tests/extensions_metadata.rs`:
- Around line 68-72: The randomized test always sets RemoveMetadataKeyAction {
idempotent: 1 }, so the non-idempotent branch is never exercised; add
deterministic coverage for idempotent: 0 by creating explicit test cases that
(a) perform a RemoveMetadataKeyAction with idempotent: 0 against a known-present
key and assert success, and (b) perform a RemoveMetadataKeyAction with
idempotent: 0 against a known-absent key and assert the expected error; locate
the test helper/fixture that seeds extensions and metadata (where
random_key(rng) and actions.push(Action::RemoveMetadataKey(...)) are used) and
either (1) add two new tests using RemoveMetadataKeyAction with idempotent: 0
and fixed keys, or (2) extend the generator to sometimes emit idempotent: 0 with
a controlled non-random case so both present and absent key behaviors are
validated.
- Around line 200-253: The test never covers empty name/symbol/uri because
random_value() always returns at least one byte; update the randomized test to
occasionally generate empty metadata fields so
process_extensions_config_with_actions and compute_expected_config are exercised
with zero-length values. Concretely, inside
test_metadata_config_with_actions_randomized change how TokenMetadata.name,
symbol, and uri are populated (or adjust random_value()) to sometimes return an
empty Vec/empty value (e.g., with a small probability based on the StdRng) so
some iterations pass empty metadata into TokenMetadata and the rest remain
randomized.
In `@sdk-libs/macros/src/light_pdas/program/instructions.rs`:
- Around line 531-547: The is_packed_token implementation in the impl of
HasTokenVariant for LightAccountData currently matches LightAccountVariant and
uses a wildcard arm that is unreachable for mint-only builds; replace the match
with a deterministic false return (i.e., make LightAccountData::is_packed_token
always return false) so it matches the PDA-only path and does not imply token
variants exist; update the impl inside mod __trait_impls for
LightAccountData/LightAccountVariant accordingly.
- Around line 68-85: The function generate_light_program_items_with_backend has
too many parameters; create a CodegenConfig struct (e.g., pub(crate) struct
CodegenConfig { compressible_accounts: Vec<CompressibleAccountInfo>, pda_seeds:
Option<Vec<TokenSeedSpec>>, token_seeds: Option<Vec<TokenSeedSpec>>,
instruction_data: Vec<InstructionDataSpec>, crate_ctx:
crate::light_pdas::parsing::CrateContext /*or &CrateContext as appropriate*/,
has_mint_fields: bool, has_ata_fields: bool, pda_variant_code: TokenStream,
enum_name: Option<syn::Ident> }) and replace the long parameter list with a
single config: &CodegenConfig in generate_light_program_items_with_backend;
update callers generate_light_program_items and
generate_light_program_pinocchio_items to construct and pass the config
(preserving ownership/borrowing semantics), adjust any uses inside the body to
read fields from config, and remove or keep the clippy allow as desired to
reduce boolean-swapping risk and simplify call sites.
- Around line 549-561: The #[allow(unused_variables)] on the 9-tuple
destructuring is too broad and may hide real unused-variable warnings; change
the binding so the Anchor (non-pinocchio) branch uses a normal destructure
without the crate-level allow and the Pinocchio branch either returns a tuple of
underscore-prefixed names or is destructured into a separately annotated
binding—specifically adjust the let that creates (compress_accounts,
compress_dispatch_fn, compress_processor_fn, compress_instruction,
processor_module, init_config_accounts, update_config_accounts,
init_config_instruction, update_config_instruction) and the conditional using
backend.is_pinocchio() so only the variables that are legitimately unused in the
Pinocchio path are named with leading underscores (or the Pinocchio result is
captured in a separate let with #[allow(unused_variables)]) rather than applying
#[allow(unused_variables)] to the whole tuple.
- Around line 191-193: The generated method compression_info_mut_opt on
LightAccountVariant currently panics; instead provide a non-panicking sentinel
to return a &mut Option<#account_crate::CompressionInfo>. Add a static mutable
sentinel (e.g. static mut COMPRESSION_INFO_SENTINEL:
Option<#account_crate::CompressionInfo> = None) and return a mutable reference
to it (using an unsafe block) from compression_info_mut_opt, and document the
sentinel usage; alternatively add a field on LightAccountVariant to hold an
Option<#account_crate::CompressionInfo> and return a mutable reference to that
field. Ensure the symbol names compression_info_mut_opt, LightAccountVariant and
`#account_crate`::CompressionInfo are used so callers no longer panic.
- Around line 686-700: The Anchor path hardcodes
AnchorSerialize/AnchorDeserialize and uses `#account_crate`::RentConfig, which
breaks the backend abstraction; update the Anchor branch that emits the
InitConfigParams struct to use the backend-provided derive tokens
(`#serialize_derive` and `#deserialize_derive`) instead of the literal
AnchorSerialize/AnchorDeserialize and replace the direct rent type with a
backend-provided rent_config_type() (or equivalent trait accessor) so both
Pinocchio and Anchor paths use the same backend hooks and consistent re-exported
rent type.
- Around line 811-814: Replace the hardcoded 8-byte discriminators in
instructions.rs with the single source-of-truth constants from the SDK client
(e.g., use INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR,
UPDATE_COMPRESSION_CONFIG_DISCRIMINATOR,
COMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR,
DECOMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR from
sdk-libs/client::interface::instructions) by importing them and referencing
those names instead of the literal arrays; if importing is not possible, add a
compile-time assertion (e.g., const assert_eq!) in the same module to verify the
local arrays equal the SDK constants so divergence fails the build.
In `@sdk-libs/program-test/src/compressible.rs`:
- Around line 231-253: Add a defensive check so missing indexer is surfaced
instead of silently skipping ATA owner recording: before iterating or at the
point you would insert into indexer.ata_owner_map, assert or warn on rpc.indexer
being None (e.g., debug_assert!(rpc.indexer.is_some()) or log::warn! with
context when rpc.indexer.is_none()) and only then call rpc.indexer.as_mut() to
insert; reference rpc.indexer, indexer.ata_owner_map, and the loop that inspects
Token::zero_copy_at / get_compressible_extension so the check is colocated with
the ATA handling.
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 800-816: The formatting of the `wallet_owner` assignment (the
chain starting with `indexer.ata_owner_map.get(addr).copied().unwrap_or(*addr)`)
does not match rustfmt; run `cargo fmt` and reflow that chain to match the
project's rustfmt rules (either a single-line call or properly wrapped chained
calls) so the assignment to `wallet_owner` in the block that builds
`compressed_token` and sets `results[original_idx] =
Some(AccountInterface::cold_token(*addr, compressed_token, wallet_owner))` is
formatted consistently with the rest of the file.
- Around line 824-829: The loop calling
indexer.get_compressed_token_accounts_by_owner sequentially over
owner_lookup_pubkeys should be converted to parallel execution (e.g., collect
futures into a FuturesUnordered or use futures::future::join_all) so the async
calls run concurrently; create a future for each (pubkey, original_idx) that
performs the .await and map_err, await the collection of results, then iterate
the completed results to restore ordering using original_idx and populate
whatever consumer used `result` before—refer to owner_lookup_pubkeys,
owner_lookup_indices, indexer.get_compressed_token_accounts_by_owner and the
result handling in the loop to locate where to replace the sequential await with
a batched/parallel await.
```rust
if token_pool_index == 0 || token_pool_index >= NUM_MAX_POOL_ACCOUNTS {
    return err!(ErrorCode::InvalidTokenPoolBump);
}
```
Good guard — but the error variant/message is misleading for the index == 0 case.
Rejecting token_pool_index == 0 here is the right call since pool 0 is created via create_token_pool. However, InvalidTokenPoolBump carries the message "Max number of token pools reached." (line 291), which doesn't describe an index-0 rejection at all. A caller debugging a failed add_token_pool(0) would see a confusing error.
Consider either:
- Introducing a distinct error variant for the index-0 case (e.g., TokenPoolIndexZeroReserved), or
- At minimum, updating the #[msg(...)] on InvalidTokenPoolBump to be accurate for both rejection paths (e.g., "Token pool index out of valid range").
Since this is an on-chain program and adding a new variant shifts subsequent discriminant values, option 2 (just rewording the message) is the safer path if the IDL is already published.
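A minimal sketch of option 2 using standard Anchor #[error_code] syntax (the surrounding variants are placeholders; only the message text changes, so discriminant positions are preserved):

```rust
use anchor_lang::prelude::*;

#[error_code]
pub enum ErrorCode {
    // ... earlier variants unchanged, so their discriminants keep their positions ...
    #[msg("Token pool index out of valid range")]
    InvalidTokenPoolBump,
    // ... later variants unchanged ...
}
```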
🤖 Prompt for AI Agents
In `@programs/compressed-token/anchor/src/lib.rs` around lines 72 - 74, The guard
in the function that checks token_pool_index (using token_pool_index and
NUM_MAX_POOL_ACCOUNTS) correctly rejects index==0 but returns
ErrorCode::InvalidTokenPoolBump whose #[msg(...)] text currently reads "Max
number of token pools reached" and is misleading; update the error message for
InvalidTokenPoolBump to a neutral, accurate phrase such as "Token pool index out
of valid range" so both the index==0 and index>=NUM_MAX_POOL_ACCOUNTS cases are
correctly described (do not add a new enum variant to avoid shifting
discriminants), and ensure references in add_token_pool and create_token_pool
comments/tests reflect the updated message.
```rust
if decompress_inputs.is_none() {
    if let Some(ref checks) = mint_checks {
        checks.enforce_extension_state()?;
    }
}
```
🧩 Analysis chain

🏁 Script executed (output: 4893):

```bash
# First, let's examine the compress_or_decompress_ctokens function and understand the decompress_inputs logic
rg -n "fn compress_or_decompress_ctokens" --type=rust -A 30
```

🏁 Script executed (output: 15411):

```bash
# Look for where decompress_inputs is set and how it's passed
rg -n "decompress_inputs" --type=rust -B 5 -A 5
```

🏁 Script executed (output: 217):

```bash
# Find and examine EXTENSIONS.md documentation
fd -t f "EXTENSIONS.md" -o "extensions.md"
```

🏁 Script executed (output: 54):

```bash
# Check for CompressAndClose related logic
rg -n "CompressAndClose\|compress_and_close" --type=rust -C 3
```

🏁 Script executed (output: 1238):

```bash
# Find EXTENSIONS.md properly
find . -type f -name "*EXTENSION*" -o -name "*extension*"
```

🏁 Script executed (output: 3894):

```bash
# Check the caller code to understand when decompress_inputs is None vs Some
# Look at how decompress_inputs is constructed before calling compress_or_decompress_ctokens
rg -n "from_compression" --type=rust -B 3 -A 10
```

🏁 Script executed (output: 684):

```bash
# Search for documentation about decompress behavior in docs or CLAUDE.md files
fd -t f "CLAUDE.md" | head -20
```

🏁 Script executed (output: 4466):

```bash
# Read EXTENSIONS.md to check line 41 and understand decompress behavior claims
cat -n programs/compressed-token/program/docs/EXTENSIONS.md | head -60
```

🏁 Script executed (output: 12575):

```bash
# Search for build_mint_extension_cache to find the docstring mentioned in review
rg -n "build_mint_extension_cache" --type=rust -B 3 -A 10
```

🏁 Script executed (output: 7310):

```bash
# Trace where decompress_inputs is set to None or Some - find the caller of process_ctoken_compression
rg -n "process_ctoken_compression" --type=rust -B 5 -A 5
```

🏁 Script executed (output: 1069):

```bash
# Look at the full context around lines 98-102 to understand both branches
cat -n programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/compress_or_decompress_ctokens.rs | sed -n '97,120p'
```

🏁 Script executed (output: 5202):

```bash
# Check what enforce_extension_state does and how it blocks operations
rg -n "enforce_extension_state" --type=rust -B 2 -A 10
```

🏁 Script executed (output: 2028):

```bash
# Look at the full EXTENSIONS.md to understand the decompress vs C&C distinction better
cat -n programs/compressed-token/program/docs/EXTENSIONS.md | sed -n '41,60p'
```

🏁 Script executed (output: 54):

```bash
# Check if there's documentation about CompressAndClose (C&C) vs plain decompress behavior difference
rg -n "compress.*only\|CompressAndClose\|CompressOnly" --type=rust programs/compressed-token/program/docs/EXTENSIONS.md -i -C 2
```

🏁 Script executed (output: 9429):

```bash
# Check CLAUDE.md files for compression-related guidance
cat programs/compressed-token/program/CLAUDE.md | grep -i "decompress\|compress" -A 3 -B 1
```

🏁 Script executed (output: 54):

```bash
# Look for any recent documentation about the two decompress paths
rg -n "decompress_inputs\|CompressAndClose" --type=rust programs/compressed-token/program/docs/ -C 2
```

🏁 Script executed (output: 1226):

```bash
# Check the exact logic: when is decompress_inputs None vs Some in the parent caller?
cat -n programs/compressed-token/program/src/compressed_token/transfer2/compression/mod.rs | sed -n '88,110p'
```

🏁 Script executed (output: 4957):

```bash
# Check if there's any specific guidance about decompress behavior in compressed-token CLAUDE.md
cat programs/compressed-token/program/CLAUDE.md | grep -i "bypass\|restrict\|plain\|decompress" -A 2 -B 2
```

🏁 Script executed (output: 6619):

```bash
# Look at the full EXTENSIONS.md to see if there's more context about the two paths
cat -n programs/compressed-token/program/docs/EXTENSIONS.md | sed -n '1,100p'
```

🏁 Script executed (output: 54):

```bash
# Check DecompressCompressOnlyInputs to understand what marks a decompress as "with compress only inputs"
rg -n "struct DecompressCompressOnlyInputs\|try_extract" --type=rust programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/inputs.rs -A 15
```

🏁 Script executed (output: 3133):

```bash
# Find DecompressCompressOnlyInputs struct definition
rg -n "struct DecompressCompressOnlyInputs" --type=rust -A 20
```

🏁 Script executed (output: 2939):

```bash
# Search for try_extract to understand when it returns Some vs None
rg -n "fn try_extract" --type=rust programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/ -A 20
```

(All scripts ran against Lightprotocol/light-protocol.)
Decompress extension-state enforcement contradicts EXTENSIONS.md documentation.
Lines 98-102 enforce mint_checks.enforce_extension_state() when decompress_inputs.is_none(), but EXTENSIONS.md line 41 documents that decompress should allow all restricted extensions unconditionally (line 66: "allowed = Bypasses extension state checks"). The distinction is between decompress operations that consume an input compressed account (decompress_inputs.is_some(), which bypass checks) versus plain decompress without a consumed input (decompress_inputs.is_none(), which currently enforces checks). Either the documentation should clarify this split behavior, or the enforcement should be removed to match the stated decompress bypass guarantee.
🤖 Prompt for AI Agents
In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/compress_or_decompress_ctokens.rs`
around lines 98 - 102, The code currently calls
mint_checks.enforce_extension_state() when decompress_inputs.is_none(), which
enforces extension-state on decompress paths and contradicts EXTENSIONS.md;
update the logic so that mint_checks.enforce_extension_state() is NOT called for
decompress operations — i.e., skip/remove the enforce_extension_state() call
whenever the operation is a decompress (use the existing decompress_inputs flag
or the function's decompress-detection logic in
compress_or_decompress_ctokens.rs to detect decompress) and only enforce
extension state for non-decompress (mint/other) flows.
```
let Some(delegate_acc) = input_delegate else {
    return Ok(());
};

let delegate_is_set = if let Some(existing_delegate) = ctoken.delegate() {
    pubkey_eq(existing_delegate.array_ref(), delegate_acc.key())
} else {
    ctoken
        .base
        .set_delegate(Some(Pubkey::from(*delegate_acc.key())))?;
    if delegated_amount > 0 {
        let current = ctoken.base.delegated_amount.get();
        ctoken.base.delegated_amount.set(
            current
                .checked_add(delegated_amount)
                .ok_or(ProgramError::ArithmeticOverflow)?,
        );
    }
} else if delegated_amount > 0 {
    msg!("Decompress: delegated_amount > 0 but no delegate");
    return Err(TokenError::DecompressDelegatedAmountWithoutDelegate.into());
    true
};

if delegate_is_set && delegated_amount > 0 {
    let current = ctoken.base.delegated_amount.get();
    ctoken.base.delegated_amount.set(
        current
            .checked_add(delegated_amount)
            .ok_or(ProgramError::ArithmeticOverflow)?,
    );
}
```
Silent delegate mismatch during decompress — verify this is the intended design.
When the CToken already has a delegate that differs from the input delegate, delegate_is_set evaluates to false and the input's delegated_amount is silently dropped. The EXTENSIONS.md doc confirms this is by design ("If existing delegate differs: skip, silent ignore"), but the consequence is real:
If two compressed tokens with different delegates are decompressed into the same CToken, the second token's delegate and delegated_amount are silently lost. This could happen if a client batches multiple decompress operations targeting one CToken.
Consider whether this should at least emit a warning msg!() or return an error instead of silently discarding delegate state. If this is truly the desired behavior, a brief inline comment explaining why it's safe to drop the delegate info would help future readers.
💡 Suggestion: add a diagnostic message or comment
```diff
 let delegate_is_set = if let Some(existing_delegate) = ctoken.delegate() {
-    pubkey_eq(existing_delegate.array_ref(), delegate_acc.key())
+    let matches = pubkey_eq(existing_delegate.array_ref(), delegate_acc.key());
+    if !matches {
+        msg!("Decompress: input delegate differs from existing, skipping delegation update");
+    }
+    matches
 } else {
     ctoken
         .base
         .set_delegate(Some(Pubkey::from(*delegate_acc.key())))?;
     true
 };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let Some(delegate_acc) = input_delegate else {
    return Ok(());
};

let delegate_is_set = if let Some(existing_delegate) = ctoken.delegate() {
    let matches = pubkey_eq(existing_delegate.array_ref(), delegate_acc.key());
    if !matches {
        msg!("Decompress: input delegate differs from existing, skipping delegation update");
    }
    matches
} else {
    ctoken
        .base
        .set_delegate(Some(Pubkey::from(*delegate_acc.key())))?;
    true
};

if delegate_is_set && delegated_amount > 0 {
    let current = ctoken.base.delegated_amount.get();
    ctoken.base.delegated_amount.set(
        current
            .checked_add(delegated_amount)
            .ok_or(ProgramError::ArithmeticOverflow)?,
    );
}
```
🤖 Prompt for AI Agents
In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/decompress.rs`
around lines 118 - 138, The code silently drops delegated_amount when an
existing ctoken.delegate() differs from the input_delegate; update the branch
that computes delegate_is_set (the if-let on ctoken.delegate() and the pubkey_eq
check) to emit a diagnostic msg!() warning when a mismatch is detected (include
the existing delegate and input_delegate pubkeys and the delegated_amount)
before returning/continuing, so callers can see when delegate state was skipped;
alternatively, if silent behavior must remain, add a concise inline comment next
to delegate_is_set explaining why differing delegates are intentionally ignored.
```rust
// Self-transfer: validate authority but skip token movement to avoid
// double mutable borrow panic in pinocchio process_transfer.
if validate_self_transfer(source, destination, &accounts[ACCOUNT_AUTHORITY])? {
    return Ok(());
}
```
Self-transfer early exit bypasses extension checks (pausable, compressible top-up).
Returning Ok(()) here skips all downstream processing. Two things worth considering:
- Pausable accounts: If the token account has a pausable extension and is currently paused, a self-transfer will silently succeed instead of being rejected. Depending on your semantics for "paused," this may be acceptable (no funds move) or a policy violation (paused means no transfer instructions should succeed).
- Compressible top-up: Self-transfers won't trigger the rent top-up calculation/payment. If users rely on self-transfers to keep accounts alive, the top-up window could be missed.
If both are intentional (a self-transfer is truly a no-op and doesn't need guardrails), a brief comment documenting that decision would help future readers understand the choice.
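If the checks should run, a sketch of the shape (the validator and top-up helper names are hypothetical stand-ins for whatever this file uses on the normal transfer path):

```rust
// Sketch: run the same guards as a normal transfer before the no-op return.
// `check_not_paused` and `top_up_compressible_rent` are hypothetical names.
if validate_self_transfer(source, destination, &accounts[ACCOUNT_AUTHORITY])? {
    check_not_paused(source)?;           // reject if the pausable extension is paused
    top_up_compressible_rent(source)?;   // keep the rent top-up window intact
    return Ok(());
}
```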
🤖 Prompt for AI Agents
In `@programs/compressed-token/program/src/ctoken/transfer/checked.rs` around
lines 53 - 57, The early return from validate_self_transfer(source, destination,
&accounts[ACCOUNT_AUTHORITY])? currently skips all extension processing
(pausable and compressible top-up); instead, before returning for a
self-transfer, explicitly run the same pausable check and compressible top-up
logic used for normal transfers (i.e., call the pausable-extension validator and
the compressible top-up routine you use elsewhere) and only return Ok(()) if
those checks succeed or decide to no-op; alternatively, if the intended
semantics are to allow silent no-ops, add a clear comment documenting that
self-transfers intentionally bypass pausable and top-up checks.
```rust
2 => actions.push(Action::RemoveMetadataKey(RemoveMetadataKeyAction {
    extension_index: 0,
    key: random_key(rng),
    idempotent: 1,
})),
```
🧹 Nitpick | 🔵 Trivial
idempotent is always 1 — non-idempotent removal path is untested.
Every RemoveMetadataKey action sets idempotent: 1. If idempotent: 0 triggers an error when the key doesn't exist, that branch is never exercised here. Given the random nature of the key selection, a non-idempotent remove hitting a missing key would cause a hard-to-reproduce failure, so I understand the choice — but consider adding a dedicated (non-randomized) test case for idempotent: 0 with both present and absent keys to cover that path.
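A sketch of that deterministic case (the fixture and action-submission helpers are hypothetical placeholders for this test file's existing harness):

```rust
// Sketch: exercise idempotent: 0 against both a present and an absent key.
// `setup_metadata_with_key` and `apply_remove_action` are hypothetical helpers.
#[tokio::test]
async fn test_remove_metadata_key_non_idempotent() {
    let mut ctx = setup_metadata_with_key(b"present").await;

    // (a) present key: non-idempotent removal succeeds
    assert!(apply_remove_action(&mut ctx, b"present", /* idempotent */ 0)
        .await
        .is_ok());

    // (b) absent key: non-idempotent removal must return the expected error
    assert!(apply_remove_action(&mut ctx, b"absent", 0).await.is_err());
}
```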
🤖 Prompt for AI Agents
In `@programs/compressed-token/program/tests/extensions_metadata.rs` around lines
68 - 72, The randomized test always sets RemoveMetadataKeyAction { idempotent: 1
}, so the non-idempotent branch is never exercised; add deterministic coverage
for idempotent: 0 by creating explicit test cases that (a) perform a
RemoveMetadataKeyAction with idempotent: 0 against a known-present key and
assert success, and (b) perform a RemoveMetadataKeyAction with idempotent: 0
against a known-absent key and assert the expected error; locate the test
helper/fixture that seeds extensions and metadata (where random_key(rng) and
actions.push(Action::RemoveMetadataKey(...)) are used) and either (1) add two
new tests using RemoveMetadataKeyAction with idempotent: 0 and fixed keys, or
(2) extend the generator to sometimes emit idempotent: 0 with a controlled
non-random case so both present and absent key behaviors are validated.
```rust
pub const INITIALIZE_COMPRESSION_CONFIG: [u8; 8] = [133, 228, 12, 169, 56, 76, 222, 61];
pub const UPDATE_COMPRESSION_CONFIG: [u8; 8] = [135, 215, 243, 81, 163, 146, 33, 70];
pub const COMPRESS_ACCOUNTS_IDEMPOTENT: [u8; 8] = [70, 236, 171, 120, 164, 93, 113, 181];
pub const DECOMPRESS_ACCOUNTS_IDEMPOTENT: [u8; 8] = [114, 67, 61, 123, 234, 31, 1, 112];
```
🧩 Analysis chain

🏁 Script executed (output: 9558):

```bash
#!/bin/bash
# Search for the discriminator arrays in the codebase
echo "=== Searching for discriminator values ==="
rg -n "133, 228, 12, 169, 56, 76, 222, 61|135, 215, 243, 81, 163, 146, 33, 70|70, 236, 171, 120, 164, 93, 113, 181|114, 67, 61, 123, 234, 31, 1, 112" --type rust
echo ""
echo "=== Searching for INITIALIZE_COMPRESSION_CONFIG constant ==="
rg -n "INITIALIZE_COMPRESSION_CONFIG|UPDATE_COMPRESSION_CONFIG|COMPRESS_ACCOUNTS_IDEMPOTENT|DECOMPRESS_ACCOUNTS_IDEMPOTENT" --type rust
echo ""
echo "=== Looking for light-client discriminator definitions ==="
fd -e rs -e md | xargs grep -l "light-client\|light_client" | head -10
echo ""
echo "=== Checking the file to understand context ==="
head -850 sdk-libs/macros/src/light_pdas/program/instructions.rs | tail -80
```

🏁 Script executed (output: 10332):

```bash
#!/bin/bash
# Look for CLAUDE.md or documentation about discriminators in light-protocol
echo "=== Checking for CLAUDE.md files with discriminator guidance ==="
find . -name "CLAUDE.md" -o -name "DISCRIMINATOR.md" | xargs grep -l "discriminator" 2>/dev/null | head -5
echo ""
echo "=== Examining light-protocol documentation ==="
if [ -f "program-libs/account-checks/docs/DISCRIMINATOR.md" ]; then
  cat program-libs/account-checks/docs/DISCRIMINATOR.md
fi
echo ""
echo "=== Checking if SDK uses LightDiscriminator derive for similar structures ==="
rg -n "LightDiscriminator" --type rust -A 2 -B 2 | head -40
```

(Both scripts ran against Lightprotocol/light-protocol.)
Hardcoded instruction discriminators should be imported from the SDK client, not duplicated.
These four 8-byte discriminators are defined as public constants in sdk-libs/client/src/interface/instructions.rs (e.g., INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR), but here they are hardcoded locally. While all instances currently match, duplicating these constants across the codebase creates a maintenance risk: if the SDK client constants ever change, the generated Pinocchio code will silently diverge unless manually updated.
Instead, import and reference the constants from the client SDK, or add a compile-time assertion verifying the hardcoded values match their source of truth.
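A sketch of the compile-time assertion fallback (the constant names come from the comment above; the SDK import path is an assumption and would need to match the actual client crate re-export):

```rust
// Sketch: fail the build if the local array ever diverges from the SDK's
// source-of-truth constant. Const-context array compare via a manual loop,
// since PartialEq on arrays is not const. Repeat per discriminator.
const _: () = {
    let local = INITIALIZE_COMPRESSION_CONFIG;
    // Path is an assumption; adjust to the actual client crate.
    let sdk = light_client::interface::instructions::INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR;
    let mut i = 0;
    while i < 8 {
        assert!(local[i] == sdk[i], "discriminator diverged from SDK constant");
        i += 1;
    }
};
```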
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/instructions.rs` around lines 811 -
814, Replace the hardcoded 8-byte discriminators in instructions.rs with the
single source-of-truth constants from the SDK client (e.g., use
INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR,
UPDATE_COMPRESSION_CONFIG_DISCRIMINATOR,
COMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR,
DECOMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR from
sdk-libs/client::interface::instructions) by importing them and referencing
those names instead of the literal arrays; if importing is not possible, add a
compile-time assertion (e.g., const assert_eq!) in the same module to verify the
local arrays equal the SDK constants so divergence fails the build.
```rust
// Track ATA wallet owners before compression closes the on-chain accounts.
// For ATAs, the compressed token owner is the ATA pubkey (not the wallet owner),
// so we need to record the wallet owner for cold ATA lookups during decompression.
{
    use light_zero_copy::traits::ZeroCopyAt;
    for pubkey in compress_accounts_compression_only
        .iter()
        .chain(compress_accounts_normal.iter())
    {
        if let Some(account) = rpc.context.get_account(pubkey) {
            if let Ok((ctoken, _)) = Token::zero_copy_at(&account.data) {
                if let Some(ext) = ctoken.get_compressible_extension() {
                    if ext.is_ata != 0 {
                        let wallet_owner = Pubkey::from(ctoken.owner.to_bytes());
                        if let Some(indexer) = rpc.indexer.as_mut() {
                            indexer.ata_owner_map.insert(*pubkey, wallet_owner);
                        }
                    }
                }
            }
        }
    }
}
```
🧹 Nitpick | 🔵 Trivial
Silent no-op when rpc.indexer is None could hide bugs.
The if let Some(indexer) = rpc.indexer.as_mut() guard on line 245 means that if the indexer isn't configured, ATA wallet-owner tracking is silently skipped. Downstream cold-ATA lookups would then fail with no indication of why. In the LightProgramTest context the indexer should always be present, so a debug_assert! or log::warn! would catch misconfiguration early rather than manifesting as a mysterious decompression failure later.
Suggested defensive check
```diff
-if let Some(indexer) = rpc.indexer.as_mut() {
-    indexer.ata_owner_map.insert(*pubkey, wallet_owner);
-}
+if let Some(indexer) = rpc.indexer.as_mut() {
+    indexer.ata_owner_map.insert(*pubkey, wallet_owner);
+} else {
+    log::warn!(
+        "claim_and_compress: indexer not set, ATA wallet owner for {} will not be tracked",
+        pubkey
+    );
+}
```

🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/compressible.rs` around lines 231 - 253, Add a
defensive check so missing indexer is surfaced instead of silently skipping ATA
owner recording: before iterating or at the point you would insert into
indexer.ata_owner_map, assert or warn on rpc.indexer being None (e.g.,
debug_assert!(rpc.indexer.is_some()) or log::warn! with context when
rpc.indexer.is_none()) and only then call rpc.indexer.as_mut() to insert;
reference rpc.indexer, indexer.ata_owner_map, and the loop that inspects
Token::zero_copy_at / get_compressible_extension so the check is colocated with
the ATA handling.
```rust
for (i, pubkey) in owner_lookup_pubkeys.iter().enumerate() {
    let original_idx = owner_lookup_indices[i];
    let result = indexer
        .get_compressed_token_accounts_by_owner(pubkey, None, None)
        .await
        .map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;
```
🧹 Nitpick | 🔵 Trivial
Sequential .await in a loop — consider batching if this path is hit with many addresses.
Each iteration of this loop makes an individual async call to get_compressed_token_accounts_by_owner. For test scenarios with many unresolved addresses, this becomes O(n) sequential round-trips. Since TestIndexer is in-memory the latency is negligible today, but if this pattern ever gets reused for a real RPC backend, it'll be a bottleneck.
A FuturesUnordered / join_all approach would parallelize these, but I'd understand deferring that given the test-only context. Just worth keeping in mind.
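A sketch of the join_all variant (assumes the indexer method takes &self so the futures can be created up front; error handling mirrors the sequential version):

```rust
// Sketch: issue all owner lookups concurrently, then restore ordering
// via owner_lookup_indices when consuming the results.
use futures::future::join_all;

let futs = owner_lookup_pubkeys
    .iter()
    .map(|pk| indexer.get_compressed_token_accounts_by_owner(pk, None, None));

for (i, res) in join_all(futs).await.into_iter().enumerate() {
    let original_idx = owner_lookup_indices[i];
    let result = res.map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;
    // ...existing items-length handling, writing into results[original_idx]...
}
```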
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/rpc.rs` around lines 824 - 829, The
loop calling indexer.get_compressed_token_accounts_by_owner sequentially over
owner_lookup_pubkeys should be converted to parallel execution (e.g., collect
futures into a FuturesUnordered or use futures::future::join_all) so the async
calls run concurrently; create a future for each (pubkey, original_idx) that
performs the .await and map_err, await the collection of results, then iterate
the completed results to restore ordering using original_idx and populate
whatever consumer used `result` before—refer to owner_lookup_pubkeys,
owner_lookup_indices, indexer.get_compressed_token_accounts_by_owner and the
result handling in the loop to locate where to replace the sequential await with
a batched/parallel await.
Force-pushed from 8223cc1 to 5f5901c (Compare)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
programs/compressed-token/program/src/ctoken/transfer/default.rs (1)
46-57: ⚠️ Potential issue | 🟠 Major
Self-transfer check correctly placed before the hot path to prevent double mutable borrow panic.

The ordering here is important and correct: the self-transfer check must precede the 165-byte hot path because process_transfer would panic on source == destination due to a double mutable borrow. The pubkey_eq cost is negligible, so it doesn't hurt the hot path.

That said, the same concern from checked.rs applies: this early exit bypasses pausable and compressible top-up checks. If the policy decision is that self-transfers are true no-ops that don't need those guardrails, adding a brief doc comment (e.g., // Self-transfers are intentional no-ops; pausable/top-up checks are not needed because no state changes occur.) would help future readers understand the intent was deliberate, not accidental.

Suggested documentation

```diff
 // Self-transfer: validate authority but skip token movement to avoid
 // double mutable borrow panic in pinocchio process_transfer.
+// Note: This intentionally skips pausable and compressible top-up checks
+// because no token state changes occur on self-transfers.
 if validate_self_transfer(source, destination, &accounts[ACCOUNT_AUTHORITY])? {
     return Ok(());
 }
```

program-libs/token-interface/src/token_2022_extensions.rs (1)
150-160: ⚠️ Potential issue | 🟡 Minor
Doc comment on has_restricted_extensions is now stale — doesn't list MintCloseAuthority.

The comment on lines 151-152 enumerates the restricted extensions but omits the newly added MintCloseAuthority. Keeping this in sync avoids confusion for the next person who reads this.

📝 Suggested fix

```diff
 /// Returns true if mint has any restricted extensions.
-/// Restricted extensions (Pausable, PermanentDelegate, TransferFee, TransferHook,
-/// DefaultAccountState) require compression_only mode when compressing tokens.
+/// Restricted extensions (Pausable, PermanentDelegate, TransferFee, TransferHook,
+/// DefaultAccountState, MintCloseAuthority) require compression_only mode when compressing tokens.
 pub const fn has_restricted_extensions(&self) -> bool {
```

programs/compressed-token/program/src/extensions/mod.rs (1)
140-148: 🧹 Nitpick | 🔵 Trivial
Doc comment "without heap allocations" is inaccurate — line 234 collects into a Vec.

The ArrayVec avoids repeated heap reallocations during building, but configs.into_iter().collect() on line 234 performs a final heap allocation. The comment could say "minimizing heap allocations" instead.

📝 Suggested doc fix

```diff
-/// Build metadata config directly without heap allocations using ArrayVec
+/// Build metadata config using stack-based ArrayVec to minimize heap allocations
 /// Processes all possible keys and determines final state (SPL Token-2022 compatible)
```

programs/compressed-token/program/src/compressed_token/transfer2/check_extensions.rs (1)
121-146:⚠️ Potential issue | 🟠 MajorDecompress-mode compressions may be incorrectly blocked by the new restricted-extensions guard.
The new
else ifat lines 140–145 blocks any compression entry wherehas_restricted_extensions && deny_restricted_extensions && !is_compress_and_close(). Per the doc comment (lines 61–64) and EXTENSIONS.md (line 41), Decompress should bypass restricted-extension enforcement. But a Decompress-mode entry incompressionswithdeny_restricted_extensions=true(because other parts of the batch produce compressed outputs) would hit this error path.If a mixed-mode batch (some Compress, some Decompress for different mints) is valid, this condition needs to also exclude Decompress:
Proposed fix
- } else if checks.has_restricted_extensions - && deny_restricted_extensions - && !compression.mode.is_compress_and_close() - { + } else if checks.has_restricted_extensions + && deny_restricted_extensions + && !compression.mode.is_compress_and_close() + && !compression.mode.is_decompress() + {If the protocol guarantees that Decompress never co-occurs with compressed outputs, this is safe as-is—but please confirm.
```bash
#!/bin/bash
# Check if ZCompressionMode has is_decompress() method and how Decompress appears in compressions
rg -n "is_decompress\|fn is_compress_and_close\|enum ZCompressionMode\|Decompress" --type=rust -C 3 | head -80
```
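To make the mode interaction concrete, here is a minimal sketch of the rejection predicate under the proposed fix. `CompressionMode` and the flag names are simplified stand-ins for `ZCompressionMode` and the real cache fields.

```rust
// Simplified stand-in for ZCompressionMode.
#[derive(PartialEq)]
enum CompressionMode {
    Compress,
    CompressAndClose,
    Decompress,
}

/// The proposed guard: Decompress entries bypass the restricted-extension
/// check, matching the documented semantics in EXTENSIONS.md.
fn should_reject(
    has_restricted_extensions: bool,
    deny_restricted_extensions: bool,
    mode: CompressionMode,
) -> bool {
    has_restricted_extensions
        && deny_restricted_extensions
        && mode != CompressionMode::CompressAndClose
        && mode != CompressionMode::Decompress
}

fn main() {
    assert!(should_reject(true, true, CompressionMode::Compress));
    assert!(!should_reject(true, true, CompressionMode::Decompress));
}
```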
programs/compressed-token/program/docs/EXTENSIONS.md (1)

367-377: ⚠️ Potential issue | 🟡 Minor

`MintExtensionFlags` struct in docs is missing the new `has_mint_close_authority` field.

The code in `check_mint_extensions.rs:238` now returns `has_mint_close_authority` in `MintExtensionFlags`, but the struct shown here doesn't include it. As per coding guidelines, verify that all struct definitions in documentation accurately match the actual implementation.

Proposed fix
```diff
 MintExtensionFlags {
     has_pausable: bool,
     has_permanent_delegate: bool,
     has_default_account_state: bool, // Extension exists (restricted)
     default_state_frozen: bool,      // Current state is Frozen (for CToken creation)
     has_transfer_fee: bool,
     has_transfer_hook: bool,
+    has_mint_close_authority: bool,
 }
```
🤖 Fix all issues with AI agents
In `@program-libs/token-interface/src/token_2022_extensions.rs`:
- Around line 6-16: Update the doc comment for RESTRICTED_EXTENSION_TYPES to
match the actual enum entries: include "mint close authority"
(ExtensionType::MintCloseAuthority) alongside pausable, permanent delegate,
transfer fees, transfer hooks, and default frozen state, and reword the sentence
to reflect that these extension types require compression_only mode and have
special behaviors incompatible with standard compressed token transfers; ensure
the comment references RESTRICTED_EXTENSION_TYPES and
ExtensionType::MintCloseAuthority so it stays consistent with the array content.
In `@programs/compressed-token/program/src/ctoken/transfer/shared.rs`:
- Around line 50-53: Replace the misleading ProgramError::InvalidAccountData in
the self-transfer authorization check with the project-standard
ErrorCode::OwnerMismatch; specifically, in the block that tests is_owner and
is_delegate (the if !is_owner && !is_delegate branch in transfer/shared.rs),
return ErrorCode::OwnerMismatch (converted to a ProgramError as appropriate in
this codebase) instead of ProgramError::InvalidAccountData so it matches
owner_validation.rs semantics for owner/delegate failures.
In `@sdk-libs/macros/src/light_pdas/backend.rs`:
- Around line 134-136: The current borrow_error() implementation returns a bare
"?" which loses explicit error mapping; update the borrow_error() function in
backend.rs to emit code that maps borrow failures into
light_account::LightSdkTypesError::ConstraintViolation (matching the Pinocchio
backend behavior), i.e., produce a TokenStream that converts the underlying
error into LightSdkTypesError::ConstraintViolation so borrow failures propagate
with that explicit error variant.
In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 450-462: The pinocchio branch in compress.rs builds an assert
message that lacks the contact guidance present in other branches; update the
assert in the backend.is_pinocchio() block (the const COMPRESSED_SIZE
calculation using `#qualified_type`::INIT_SPACE) so its panic string matches the
other branches by appending " If you need support for larger accounts, send a
message to team@lightprotocol.com" to the existing error text about exceeding
the 800-byte limit for compressed accounts.
- Around line 413-437: The generated impl for `#enum_name`::process_compress
hardcodes pinocchio::account_info::AccountInfo instead of using the
backend-provided type; fix by obtaining let account_info_type =
backend.account_info_type(); and interpolate that symbol into the quoted code
for the accounts parameter and any other account-info occurrences, keeping the
rest of the method (borsh deserialization into
`#account_crate`::CompressAndCloseParams, call to
`#account_crate`::process_compress_pda_accounts_idempotent, and error mapping)
unchanged.
- Around line 379-401: The non-Pinocchio branch hardcodes AccountInfo type in
impl `#enum_name`::compress_dispatch instead of using the backend-provided type;
replace the hardcoded anchor_lang::prelude::AccountInfo<'info> with the
previously fetched backend.account_info_type() value (the result of
backend.account_info_type()) in the compress_dispatch signature and any uses,
ensuring the generated impl for `#enum_name` and references to `#account_crate`
still compile and updating imports/usages accordingly so the backend abstraction
is honored.
```diff
 /// Restricted extension types that require compression_only mode.
 /// These extensions have special behaviors (pausable, permanent delegate, fees, hooks,
 /// default frozen state) that are incompatible with standard compressed token transfers.
-pub const RESTRICTED_EXTENSION_TYPES: [ExtensionType; 5] = [
+pub const RESTRICTED_EXTENSION_TYPES: [ExtensionType; 6] = [
     ExtensionType::Pausable,
     ExtensionType::PermanentDelegate,
     ExtensionType::TransferFeeConfig,
     ExtensionType::TransferHook,
     ExtensionType::DefaultAccountState,
+    ExtensionType::MintCloseAuthority,
 ];
```
Doc comment on RESTRICTED_EXTENSION_TYPES also needs updating.
Lines 7-8 list "pausable, permanent delegate, fees, hooks, default frozen state" but now the array includes MintCloseAuthority. Same staleness issue as the method doc.
📝 Suggested fix
```diff
 /// Restricted extension types that require compression_only mode.
-/// These extensions have special behaviors (pausable, permanent delegate, fees, hooks,
-/// default frozen state) that are incompatible with standard compressed token transfers.
+/// These extensions have special behaviors (pausable, permanent delegate, fees, hooks,
+/// default frozen state, mint close authority) that are incompatible with standard compressed token transfers.
 pub const RESTRICTED_EXTENSION_TYPES: [ExtensionType; 6] = [
```

🤖 Prompt for AI Agents
In `@program-libs/token-interface/src/token_2022_extensions.rs` around lines 6 -
16, Update the doc comment for RESTRICTED_EXTENSION_TYPES to match the actual
enum entries: include "mint close authority" (ExtensionType::MintCloseAuthority)
alongside pausable, permanent delegate, transfer fees, transfer hooks, and
default frozen state, and reword the sentence to reflect that these extension
types require compression_only mode and have special behaviors incompatible with
standard compressed token transfers; ensure the comment references
RESTRICTED_EXTENSION_TYPES and ExtensionType::MintCloseAuthority so it stays
consistent with the array content.
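As a side note on how such a const table is typically consumed, here is a self-contained sketch of a membership check. The enum below is a local stand-in, not the crate's actual `ExtensionType`; only the pattern of keeping the array and the check in one place is the point.

```rust
#[derive(PartialEq)]
enum ExtensionType {
    Pausable,
    PermanentDelegate,
    TransferFeeConfig,
    TransferHook,
    DefaultAccountState,
    MintCloseAuthority,
}

const RESTRICTED_EXTENSION_TYPES: [ExtensionType; 6] = [
    ExtensionType::Pausable,
    ExtensionType::PermanentDelegate,
    ExtensionType::TransferFeeConfig,
    ExtensionType::TransferHook,
    ExtensionType::DefaultAccountState,
    ExtensionType::MintCloseAuthority,
];

/// Returns true if any of the mint's extensions is restricted; updating the
/// const array automatically updates this check.
fn has_restricted_extensions(mint_extensions: &[ExtensionType]) -> bool {
    mint_extensions
        .iter()
        .any(|ext| RESTRICTED_EXTENSION_TYPES.contains(ext))
}

fn main() {
    assert!(has_restricted_extensions(&[ExtensionType::MintCloseAuthority]));
    assert!(!has_restricted_extensions(&[]));
}
```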
```rust
if !is_owner && !is_delegate {
    msg!("Self-transfer authority must be owner or delegate");
    return Err(ProgramError::InvalidAccountData);
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, find the ErrorCode enum definition in the compressed-token program
fd 'error' --type f -e rs programs/compressed-token/program/src | head -20
```

Length of output: 141

🏁 Script executed:

```shell
# Look for ErrorCode enum variants
rg 'enum ErrorCode|InvalidAuthority|InvalidOwner|UnauthorizedSigner|InvalidAccountData' --type=rs -A 2 programs/compressed-token/program/src | head -40
```

Length of output: 98

🏁 Script executed:

```shell
# Check the owner_validation.rs utilities
find programs/compressed-token/program/src -name '*owner_validation*' -o -name '*validation*' | head -10
```

Length of output: 136

🏁 Script executed:

```shell
# Get the full context of the file being reviewed
cat -n programs/compressed-token/program/src/ctoken/transfer/shared.rs | sed -n '40,65p'
```

Length of output: 1330

🏁 Script executed:

```shell
# Check CLAUDE.md or docs for error handling guidance
find programs/compressed-token/program -name 'CLAUDE.md' -o -name '*.md' -path '*/docs/*' | xargs grep -l 'error\|Error\|ErrorCode' 2>/dev/null | head -5
```

Length of output: 358

🏁 Script executed:

```shell
# Examine the owner_validation.rs file
cat -n programs/compressed-token/program/src/shared/owner_validation.rs
```

Length of output: 5435

🏁 Script executed:

```shell
# Look for ErrorCode enum definition
fd . programs/compressed-token/program/src -name 'error.rs' -o -name 'lib.rs' | xargs grep -l 'ErrorCode'
```

Length of output: 304

🏁 Script executed:

```shell
# Check the full ErrorCode enum in lib.rs or error.rs
rg 'enum ErrorCode' -A 30 programs/compressed-token/program/src
```

Length of output: 54

🏁 Script executed:

```shell
# Check CLAUDE.md for error handling guidance
cat programs/compressed-token/program/CLAUDE.md | grep -A 10 -B 2 'error\|Error\|validation'
```

Length of output: 8759

🏁 Script executed:

```shell
# Get broader context: find the function name and purpose
cat -n programs/compressed-token/program/src/ctoken/transfer/shared.rs | sed -n '30,60p'
```

Length of output: 1301

🏁 Script executed:

```shell
# Check if there are other uses of this error pattern in the file
rg 'InvalidAccountData' programs/compressed-token/program/src/ctoken/transfer/shared.rs -B 3 -A 1
```

Length of output: 1248

🏁 Script executed:

```shell
# Verify ErrorCode enum to see all available variants
rg 'pub enum ErrorCode|OwnerMismatch|InvalidAuthority' programs/compressed-token/anchor/src/lib.rs -A 2
```

Length of output: 448
Use ErrorCode::OwnerMismatch instead of InvalidAccountData for authorization failures.
InvalidAccountData is semantically misleading here—the problem isn't malformed data but failed authorization. The codebase already establishes ErrorCode::OwnerMismatch as the standard error for owner/delegate validation failures (see owner_validation.rs lines 77 and 120). Use that instead for consistency.
Suggested fix

```diff
 if !is_owner && !is_delegate {
     msg!("Self-transfer authority must be owner or delegate");
-    return Err(ProgramError::InvalidAccountData);
+    return Err(ErrorCode::OwnerMismatch.into());
 }
```

🤖 Prompt for AI Agents
In `@programs/compressed-token/program/src/ctoken/transfer/shared.rs` around lines
50 - 53, Replace the misleading ProgramError::InvalidAccountData in the
self-transfer authorization check with the project-standard
ErrorCode::OwnerMismatch; specifically, in the block that tests is_owner and
is_delegate (the if !is_owner && !is_delegate branch in transfer/shared.rs),
return ErrorCode::OwnerMismatch (converted to a ProgramError as appropriate in
this codebase) instead of ProgramError::InvalidAccountData so it matches
owner_validation.rs semantics for owner/delegate failures.
```rust
fn borrow_error(&self) -> TokenStream {
    quote! { ? }
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd /repo && find . -type f -name "backend.rs" | grep -E "light_pdas|macros"
```

Length of output: 127

🏁 Script executed:

```shell
cd /repo && cat -n sdk-libs/macros/src/light_pdas/backend.rs | head -200
```

Length of output: 127

🏁 Script executed:

```shell
cd /repo && rg "borrow_error" --type rust -B 3 -A 5 sdk-libs/macros/
```

Length of output: 127

🏁 Script executed:

```shell
cd /repo && rg "AccountBorrowFailed\|BorrowAccountDataFailed" --type rust -B 2 -A 2
```

Length of output: 127

🏁 Script executed:

```shell
find . -type f -name "backend.rs" | grep -E "light_pdas|macros"
```

Length of output: 115

🏁 Script executed:

```shell
cat -n sdk-libs/macros/src/light_pdas/backend.rs | head -200
```

Length of output: 7840

🏁 Script executed:

```shell
rg "borrow_error" --type rust -B 3 -A 5 sdk-libs/macros/
```

Length of output: 4589

🏁 Script executed:

```shell
rg "AccountBorrowFailed\|BorrowAccountDataFailed" --type rust
```

Length of output: 54

🏁 Script executed:

```shell
rg "enum LightSdkTypesError\|impl.*LightSdkTypesError" --type rust -A 15 sdk-libs/
```

Length of output: 54

🏁 Script executed:

```shell
rg "Borsh\|BorrowFailed\|BorrowAccountData" --type rust sdk-libs/ | grep -E "LightSdkTypesError|error"
```

Length of output: 54

🏁 Script executed:

```shell
fd "error.rs" sdk-libs/ --type f
```

Length of output: 377

🏁 Script executed:

```shell
cat -n sdk-libs/sdk-types/src/error.rs
```

Length of output: 7053

🏁 Script executed:

```shell
rg "LightSdkTypesError" --type rust sdk-libs/sdk-types/ -B 2 -A 10
```

Length of output: 50385

🏁 Script executed:

```shell
cat -n sdk-libs/macros/src/light_pdas/program/compress.rs | grep -A 10 -B 5 "try_borrow_data"
```

Length of output: 2617

🏁 Script executed:

```shell
rg "try_borrow_data" --type rust sdk-libs/macros/ -B 2 -A 5
```

Length of output: 2663

🏁 Script executed:

```shell
rg "BorrowAccountDataFailed\|ConstraintViolation" --type rust sdk-libs/sdk-types/src/error.rs
```

Length of output: 54

🏁 Script executed:

```shell
cat -n program-libs/account-checks/src/error.rs | head -100
```

Length of output: 4656

🏁 Script executed:

```shell
rg "From.*BorrowError\|impl.*BorrowError" --type rust sdk-libs/
```

Length of output: 54

🏁 Script executed:

```shell
rg "try_borrow_data\(\)" --type rust program-libs/account-checks/ -B 2 -A 2
```

Length of output: 2325

🏁 Script executed:

```shell
rg "use.*AccountError\|pub use.*AccountError" --type rust sdk-libs/sdk-types/ | head -20
```

Length of output: 54

🏁 Script executed:

```shell
rg "ConstraintViolation" --type rust sdk-libs/sdk-types/src/interface/program/ -B 1 -A 1 | head -30
```

Length of output: 2386

🏁 Script executed:

```shell
rg "From<std::cell::BorrowError>\|From<std::cell::BorrowMutError>" --type rust sdk-libs/sdk-types/
```

Length of output: 54

🏁 Script executed:

```shell
rg "From.*LightSdkTypesError.*for" --type rust sdk-libs/sdk-types/src/error.rs -A 15
```

Length of output: 2012
Provide explicit error mapping for Anchor borrow errors to match Pinocchio backend.
The `quote! { ? }` emits valid Rust, but lacks the explicit error handling present in the Pinocchio backend. Borrow failures should map to `light_account::LightSdkTypesError::ConstraintViolation` to align with how the codebase handles borrow errors throughout validation and decompression logic.
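To see the difference concretely, here is a standalone sketch of the mapped variant using the `proc-macro2` and `quote` crates; the emitted tokens are what the macro would splice after `try_borrow_data()`. The function name is illustrative, not the trait method itself.

```rust
use proc_macro2::TokenStream;
use quote::quote;

// Sketch of the suggested Anchor-backend behavior: map the RefCell borrow
// failure to an explicit SDK error variant instead of emitting a bare `?`.
fn borrow_error_tokens() -> TokenStream {
    quote! {
        .map_err(|_| light_account::LightSdkTypesError::ConstraintViolation)?
    }
}

fn main() {
    // These tokens get spliced after `account_info.try_borrow_data()`.
    println!("{}", borrow_error_tokens());
}
```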
🐛 Proposed fix

```diff
-    fn borrow_error(&self) -> TokenStream {
-        quote! { ? }
-    }
+    fn borrow_error(&self) -> TokenStream {
+        quote! {
+            .map_err(|_| light_account::LightSdkTypesError::ConstraintViolation)?
+        }
+    }
```

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/backend.rs` around lines 134 - 136, The
current borrow_error() implementation returns a bare "?" which loses explicit
error mapping; update the borrow_error() function in backend.rs to emit code
that maps borrow failures into
light_account::LightSdkTypesError::ConstraintViolation (matching the Pinocchio
backend behavior), i.e., produce a TokenStream that converts the underlying
error into LightSdkTypesError::ConstraintViolation so borrow failures propagate
with that explicit error variant.
```rust
} else {
    Ok(quote! {
        impl #enum_name {
            pub fn compress_dispatch<'info>(
                account_info: &anchor_lang::prelude::AccountInfo<'info>,
                meta: &#account_crate::account_meta::CompressedAccountMetaNoLamportsNoAddress,
                index: usize,
                ctx: &mut #account_crate::CompressCtx<'_, 'info>,
            ) -> std::result::Result<(), #sdk_error> {
                use #account_crate::LightDiscriminator;
                use borsh::BorshDeserialize;
                let data = account_info.try_borrow_data()#borrow_error;
                let discriminator: [u8; 8] = data[..8]
                    .try_into()
                    .map_err(|_| #sdk_error::InvalidInstructionData)?;
                match discriminator {
                    #(#compress_arms)*
                    _ => Ok(()),
                }
            }
        }
    }
    })
})
}
```
🛠️ Refactor suggestion | 🟠 Major
Non-pinocchio branch ignores account_info_type fetched from the backend.
Line 320 fetches backend.account_info_type() and the pinocchio branch (line 361) uses it, but the non-pinocchio branch on line 383 hardcodes anchor_lang::prelude::AccountInfo<'info>. This undermines the backend abstraction — if a future backend provides a different AccountInfo type, this branch won't pick it up.
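A reduced sketch of the abstraction being asked for is shown below. The trait here is a hypothetical stand-in with a single method; the real backend trait in sdk-libs/macros has more surface.

```rust
use proc_macro2::TokenStream;
use quote::quote;

// Hypothetical reduction of the backend abstraction: each backend supplies
// the AccountInfo type its generated code should reference.
trait Backend {
    fn account_info_type(&self) -> TokenStream;
}

struct Anchor;
impl Backend for Anchor {
    fn account_info_type(&self) -> TokenStream {
        quote! { anchor_lang::prelude::AccountInfo<'info> }
    }
}

// Interpolating the backend-provided type keeps generated signatures
// portable across backends instead of hardcoding one AccountInfo type.
fn compress_dispatch_sig(backend: &dyn Backend) -> TokenStream {
    let account_info_type = backend.account_info_type();
    quote! {
        pub fn compress_dispatch<'info>(account_info: &#account_info_type) {}
    }
}

fn main() {
    println!("{}", compress_dispatch_sig(&Anchor));
}
```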
Suggested fix

```diff
-            pub fn compress_dispatch<'info>(
-                account_info: &anchor_lang::prelude::AccountInfo<'info>,
+            pub fn compress_dispatch<'info>(
+                account_info: &#account_info_type,
```

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 379 - 401,
The non-Pinocchio branch hardcodes AccountInfo type in impl
`#enum_name`::compress_dispatch instead of using the backend-provided type;
replace the hardcoded anchor_lang::prelude::AccountInfo<'info> with the
previously fetched backend.account_info_type() value (the result of
backend.account_info_type()) in the compress_dispatch signature and any uses,
ensuring the generated impl for `#enum_name` and references to `#account_crate`
still compile and updating imports/usages accordingly so the backend abstraction
is honored.
```rust
if backend.is_pinocchio() {
    Ok(quote! {
        impl #enum_name {
            pub fn process_compress(
                accounts: &[pinocchio::account_info::AccountInfo],
                instruction_data: &[u8],
            ) -> std::result::Result<(), #program_error> {
                use borsh::BorshDeserialize;
                let params = #account_crate::CompressAndCloseParams::try_from_slice(instruction_data)
                    .map_err(|_| #program_error::InvalidInstructionData)?;
                #account_crate::process_compress_pda_accounts_idempotent(
                    accounts,
                    &params,
                    Self::compress_dispatch,
                    crate::LIGHT_CPI_SIGNER,
                    &crate::LIGHT_CPI_SIGNER.program_id,
                )
                .map_err(|e| #program_error::Custom(u32::from(e)))
            }
        }
    } else {
        quote! {
            d if d == #name::LIGHT_DISCRIMINATOR => {
                let mut reader = &data[8..];
                let mut account_data = #name::deserialize(&mut reader)
                    .map_err(|_| light_account_pinocchio::LightSdkTypesError::InvalidInstructionData)?;
                drop(data);
                light_account_pinocchio::prepare_account_for_compression(
                    account_info, &mut account_data, meta, index, ctx,
                )
            }
        }
    }
}).collect();
```

```rust
    Ok(quote! {
        impl #enum_name {
            pub fn compress_dispatch(
                account_info: &pinocchio::account_info::AccountInfo,
                meta: &light_account_pinocchio::account_meta::CompressedAccountMetaNoLamportsNoAddress,
                index: usize,
                ctx: &mut light_account_pinocchio::CompressCtx<'_>,
            ) -> std::result::Result<(), light_account_pinocchio::LightSdkTypesError> {
                use light_account_pinocchio::LightDiscriminator;
                use borsh::BorshDeserialize;
                let data = account_info.try_borrow_data()
                    .map_err(|_| light_account_pinocchio::LightSdkTypesError::Borsh)?;
                let discriminator: [u8; 8] = data[..8]
                    .try_into()
                    .map_err(|_| light_account_pinocchio::LightSdkTypesError::InvalidInstructionData)?;
                match discriminator {
                    #(#compress_arms)*
                    _ => Ok(()),
                }
            }
        }
    })
})
} else {
    // Anchor version doesn't have this method on enum - it uses the separate processor
    Ok(quote! {})
}
```
🛠️ Refactor suggestion | 🟠 Major
Pinocchio branch also hardcodes pinocchio::account_info::AccountInfo on line 417.
Same pattern as the dispatch method — the backend provides account_info_type() but it's not used here. Prefer let account_info_type = backend.account_info_type(); and interpolate it for consistency.
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 413 - 437,
The generated impl for `#enum_name`::process_compress hardcodes
pinocchio::account_info::AccountInfo instead of using the backend-provided type;
fix by obtaining let account_info_type = backend.account_info_type(); and
interpolate that symbol into the quoted code for the accounts parameter and any
other account-info occurrences, keeping the rest of the method (borsh
deserialization into `#account_crate`::CompressAndCloseParams, call to
`#account_crate`::process_compress_pda_accounts_idempotent, and error mapping)
unchanged.
```rust
if backend.is_pinocchio() {
    // For pinocchio, all types use INIT_SPACE constant (no CompressedInitSpace trait)
    quote! {
        const _: () = {
            const COMPRESSED_SIZE: usize = 8 + #qualified_type::INIT_SPACE;
            assert!(
                COMPRESSED_SIZE <= 800,
                concat!(
                    "Compressed account '", stringify!(#qualified_type), "' exceeds 800-byte compressible account size limit"
                )
            );
        };
    }
```
Pinocchio size-check error message omits the contact guidance present in the other branches.
Non-pinocchio branches include "If you need support for larger accounts, send a message to team@lightprotocol.com" while the pinocchio branch just says it exceeds the limit. Developers hitting this on pinocchio will get a less helpful message. Worth keeping parity.
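For illustration, a minimal compile-time assert with the contact string appended; the size constant is a stand-in for `8 + #qualified_type::INIT_SPACE`, and the message body here is plain text rather than the `concat!`/`stringify!` composition the macro emits.

```rust
// Stand-in for the generated const block; 792 substitutes for INIT_SPACE.
const _: () = {
    const COMPRESSED_SIZE: usize = 8 + 792;
    assert!(
        COMPRESSED_SIZE <= 800,
        "Compressed account exceeds 800-byte compressible account size limit. \
         If you need support for larger accounts, send a message to team@lightprotocol.com"
    );
};

fn main() {}
```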
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 450 - 462,
The pinocchio branch in compress.rs builds an assert message that lacks the
contact guidance present in other branches; update the assert in the
backend.is_pinocchio() block (the const COMPRESSED_SIZE calculation using
`#qualified_type`::INIT_SPACE) so its panic string matches the other branches by
appending " If you need support for larger accounts, send a message to
team@lightprotocol.com" to the existing error text about exceeding the 800-byte
limit for compressed accounts.
5f5901c to bc6a490 (Compare)
```rust
/// Token account
Token(CompressedTokenAccount),
/// Compressed mint
Mint(CompressedAccount),
```
It seems like the Mint variant is treated the same as Account; is it necessary?
```rust
    any_cold(&self.get_specs_for_instruction(ix))
}
// ...
    cold_accounts: &[AccountInterface],
) -> Result<Vec<AccountSpec<Self::Variant>>, Box<dyn std::error::Error>>;
```
There was a problem hiding this comment.
```diff
-) -> Result<Vec<AccountSpec<Self::Variant>>, Box<dyn std::error::Error>>;
+) -> Result<Vec<AccountSpec<Self::Variant>>, Self::Error>;
```
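A sketch of the direction the suggestion points at: an associated error type on the trait, so callers get a concrete type instead of a boxed trait object. All names below are illustrative; the real method returns `Vec<AccountSpec<Self::Variant>>` over `AccountInterface` inputs.

```rust
use std::fmt;

#[derive(Debug)]
enum ClientError {
    AccountNotFound,
}

impl fmt::Display for ClientError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{:?}", self)
    }
}

impl std::error::Error for ClientError {}

// Illustrative trait with an associated error instead of Box<dyn Error>.
trait InstructionSpecs {
    type Variant;
    type Error: std::error::Error;

    fn get_specs_for_instruction(
        &self,
        cold_account_keys: &[String], // stand-in for &[AccountInterface]
    ) -> Result<Vec<Self::Variant>, Self::Error>;
}

struct Demo;
impl InstructionSpecs for Demo {
    type Variant = u8;
    type Error = ClientError;

    fn get_specs_for_instruction(
        &self,
        cold_account_keys: &[String],
    ) -> Result<Vec<u8>, ClientError> {
        if cold_account_keys.is_empty() {
            return Err(ClientError::AccountNotFound);
        }
        Ok(vec![0])
    }
}

fn main() {
    assert!(Demo.get_specs_for_instruction(&[]).is_err());
}
```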
|
8825bde to 3e869ad (Compare)
- refactor remove fetch-accounts
- renaming, simplify trait
- fomat excl photon-api submodule
- fix: multi-pass cold account lookup in test indexer RPC
  Align get_account_interface and get_multiple_account_interfaces with Photon's lookup strategy: search compressed_accounts by onchain_pubkey, then by PDA seed derivation, then token_compressed_accounts, then by token_data.owner. Also fix as_mint() to accept ColdContext::Account since Photon returns mints as generic compressed accounts.
  Co-authored-by: Cursor <cursoragent@cursor.com>
- lint
- fix lint
- fix: reject rent sponsor self-referencing the token account (#2257)
  - fix: reject rent sponsor self-referencing the token account
    Audit issue #9 (INFO): The rent payer could be the same account as the target token account being created. Add a check that rejects this self-reference to prevent accounting issues.
  - test: add failing test rent sponsor self reference
- fix: process metadata add/remove actions in sequential order (#2256)
  - fix: process metadata add/remove actions in sequential order
    Audit issue #16 (LOW): should_add_key checked for any add and any remove independently, ignoring action ordering. An add-remove-add sequence would incorrectly remove the key. Process actions sequentially so the final state reflects the actual order.
  - chore: format
  - test: add randomized test for metadata action processing
    Validates that process_extensions_config_with_actions produces correct AdditionalMetadataConfig for random sequences of UpdateMetadataField and RemoveMetadataKey actions, covering the add-remove-add bug from audit issue #16.
  - test: add integration test for audit issue #13 (no double rent charge)
    Verifies that two compress operations targeting the same compressible CToken account in a single Transfer2 instruction do not double-charge the rent top-up budget.
  - chore: format extensions_metadata test
- fix: validate authority on self-transfer early return (#2252)
  - fix: handle self-transfer in ctoken transfer and transfer_checked
    Validate that the authority is a signer and is the owner or delegate before allowing self-transfer early return. Previously the self-transfer path returned Ok(()) without any authority validation.
  - fix: simplify map_or to is_some_and per clippy
  - fix: use pubkey_eq for self-transfer check
  - refactor: extract self-transfer validation into shared function
    Extract duplicate self-transfer check from default.rs and checked.rs into validate_self_transfer() in shared.rs with cold path for authority validation.
  - chore: format
  - fix: deduplicate random metadata keys in test_random_mint_action
    Random key generation could produce duplicate keys, causing DuplicateMetadataKey error (18040) with certain seeds.
- fix: enforce mint extension checks in cToken-to-cToken decompress (#2246)
  - fix: enforce mint extension checks in cToken-to-cToken decompress hot path
    Add enforce_extension_state() to MintExtensionChecks and call it in the Decompress branch when decompress_inputs is None (hot-path, not CompressedOnly restore). This prevents cToken-to-cToken transfers from bypassing pause, transfer fee, and transfer hook restrictions.
  - fix test
- chore: reject compress for mints with restricted extensions in build_mint_extension_cache (#2240)
  - chore: reject compress for mints with restricted extensions in mint check
  - Update programs/compressed-token/program/src/compressed_token/transfer2/check_extensions.rs
    Co-authored-by: 0xa5df-c <172008956+0xa5df-c@users.noreply.github.com>
  - fix: format else-if condition for lint
  Co-authored-by: 0xa5df-c <172008956+0xa5df-c@users.noreply.github.com>
- fix: token-pool index 0 check (#2239)
- fix(programs): add MintCloseAuthority as restricted extension (M-03) (#2263)
  - fix: add MintCloseAuthority as restricted extension (M-03)
    A mint with MintCloseAuthority can be closed and re-opened with different extensions. Treating it as restricted ensures compressed tokens from such mints require CompressedOnly mode.
  - test: add MintCloseAuthority compression_only requirement tests
    Add test coverage for MintCloseAuthority requiring compression_only mode, complementing the fix in f2da063.
- refactor: light program pinocchio macro (#2247)
  - refactor: light program pinocchio macro
  - fix: address CodeRabbit review comments on macro codegen
    - Fix account order bug in process_update_config (config=0, authority=1)
    - Use backend-provided serialize/deserialize derives in LightAccountData
    - Remove redundant is_pinocchio() branch for has_le_bytes unpack fields
    - DRY doc attribute generation across 4 struct generation methods
    - Unify unpack_data path using account_crate re-export for both backends
- chore(libs): bump versions (#2272)
- fix(programs): allow account-level delegate to compress CToken (M-02) (#2262)
  - fix: allow account-level delegate to compress tokens from CToken (M-02)
    check_ctoken_owner() only checked owner and permanent delegate. An account-level delegate (approved via CTokenApprove) could not compress tokens. Added delegate check after permanent delegate.
  - test: compress by delegate
- fix: accumulate delegated amount at decompression (#2242)
  - fix: accumulate delegated amount at decompression
  - fix lint
  - refactor: simplify apply_delegate to single accumulation path
  - fix: ignore delegated_amount without delegate
  - restore decompress amount check
- fix programtest, wallet owner tracking for ata
- fmt and lint
- upd amm test
- simplify client usage, remove unnecessary endpoints
- clean
- cleanup
- lint
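The "multi-pass cold account lookup" commit describes a fallback chain of indexer queries. A schematic sketch of that shape is below; the pass closures and types are hypothetical stand-ins for the test-indexer internals, not the actual RPC API.

```rust
// Stand-in for the SDK's AccountInterface.
#[derive(Clone, Debug)]
struct AccountInterface;

/// Try each lookup pass in order and stop at the first hit: compressed
/// accounts by on-chain pubkey, then PDA seed derivation, then compressed
/// token accounts, then token_data.owner.
fn lookup_cold_account(
    passes: &[&dyn Fn(&str) -> Option<AccountInterface>],
    pubkey: &str,
) -> Option<AccountInterface> {
    passes.iter().find_map(|pass| pass(pubkey))
}

fn main() {
    let by_onchain_pubkey = |_: &str| -> Option<AccountInterface> { None };
    let by_pda_seed = |_: &str| -> Option<AccountInterface> { Some(AccountInterface) };
    let passes: Vec<&dyn Fn(&str) -> Option<AccountInterface>> =
        vec![&by_onchain_pubkey, &by_pda_seed];
    assert!(lookup_cold_account(&passes, "pubkey").is_some());
}
```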
3e869ad to 805371c (Compare)
… conversions with CompressedAccountWithMerkleContext
Summary by CodeRabbit

- Refactor
- New Features
- Bug Fixes / Validation
- Tests
- Chores