chore: refactor interface #2270

Open
SwenSchaeferjohann wants to merge 9 commits into main from swen/refactor-interface

Conversation


@SwenSchaeferjohann SwenSchaeferjohann commented Feb 10, 2026

Summary by CodeRabbit

  • Refactor

    • Streamlined account APIs to an explicit cold-account load lifecycle and unified backend-aware code generation.
  • New Features

    • Easier token-account conversion and expanded cold-account discovery with ATA wallet-owner propagation.
    • MintCloseAuthority extension support surfaced.
  • Bug Fixes / Validation

    • Tighter mint/extension checks, revised decompress/delegate and self-transfer handling, and rent-payer validation.
  • Tests

    • New randomized end-to-end metadata action test.
  • Chores

    • Package version bumps.


coderabbitai bot commented Feb 10, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Refactors client interfaces and RPC account-loading: removes AccountToFetch and Rpc::fetch_accounts, adds TryFrom<AccountInterface> for TokenAccountInterface and broader mint parsing, changes LightProgramInterface to explicit cold-account loading, enriches indexer/RPC multi-pass token/PDA lookups with ATA wallet_owner propagation, introduces backend-aware macro codegen, and updates compressed-token logic and extensions (delegate handling, restricted-extension checks, self-transfer validation, metadata action ordering, tests).

Changes

  • Client interface & RPC (sdk-libs/client/src/interface/account_interface.rs, sdk-libs/client/src/interface/mod.rs, sdk-libs/client/src/rpc/rpc_trait.rs): Added impl TryFrom<AccountInterface> for TokenAccountInterface and expanded mint parsing; removed the public AccountToFetch re-export and the Rpc::fetch_accounts bulk-fetch API.
  • LightProgramInterface API (sdk-libs/client/src/interface/light_program_interface.rs): Reworked the trait surface: program_id() is now an associated fn; removed many per-instruction spec/caching helpers and AccountToFetch; added instruction_accounts(&self, ix) and load_specs(&self, cold_accounts) for explicit cold-account loading (see the sketch after this list).
  • Program-test / indexer & RPC lookup (sdk-libs/program-test/src/program_test/rpc.rs, sdk-libs/program-test/src/compressible.rs, sdk-libs/program-test/src/indexer/test_indexer.rs): Expanded the multi-pass lookup to include PDA-derived lookups, token_compressed_accounts (by onchain_pubkey/PDA/owner), and an owner-based fallback; propagate and store wallet_owner for ATAs via the new ata_owner_map.
  • Compressed-token program logic & extensions (programs/compressed-token/program/src/..., programs/compressed-token/anchor/src/lib.rs, programs/compressed-token/program/docs/EXTENSIONS.md, programs/compressed-token/program/tests/extensions_metadata.rs): Enforced restricted-extension rules for compress vs. compress-and-close, added MintCloseAuthority support, changed delegate handling/accumulation in decompress, added self-transfer validation to avoid double borrows, made metadata action ordering a state machine, added a randomized metadata-action test, and guarded token_pool_index == 0.
  • Token-2022 extensions (program-libs/token-interface/src/token_2022_extensions.rs): Added MintCloseAuthority to the restricted extension list and a has_mint_close_authority flag in MintExtensionFlags; updated helpers.
  • Macros / codegen backend (sdk-libs/macros/src/light_pdas/backend.rs, sdk-libs/macros/src/light_pdas/..., sdk-libs/macros/src/light_pdas/mod.rs): Introduced a CodegenBackend trait with AnchorBackend and PinocchioBackend; converted many generation APIs to backend-aware *_with_backend variants, routed existing build APIs through backend-driven paths, and exported the new backend module.
  • Crate version bumps (sdk-libs/client/Cargo.toml, sdk-libs/event/Cargo.toml, sdk-libs/program-test/Cargo.toml, sdk-libs/token-client/Cargo.toml, program-libs/compressed-account/Cargo.toml): Version bumps across several crates (0.20.x → 0.21.0; compressed-account 0.10.0 → 0.10.1).
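A minimal sketch of the reworked LightProgramInterface surface described above; the parameter and return types (Instruction, LoadSpec, the error type) are assumptions for illustration, not the crate's actual signatures.

pub trait LightProgramInterface {
    /// Now an associated function rather than a &self method.
    fn program_id() -> Pubkey;

    /// Returns the account keys an instruction touches (signature assumed).
    fn instruction_accounts(&self, ix: &Instruction) -> Vec<Pubkey>;

    /// Explicit cold-account loading: callers pass the cold accounts they
    /// discovered and get back the specs needed to load/decompress them.
    fn load_specs(&self, cold_accounts: &[AccountInterface]) -> Result<Vec<LoadSpec>, Error>;
}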

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant Rpc as Rpc impl
  participant Indexer as Compressed Indexer
  participant Store as On-chain Store

  Client->>Rpc: get_account_interface(pubkey)
  Rpc->>Indexer: lookup compressed_accounts by pubkey
  Indexer-->>Rpc: compressed account found?/none
  alt compressed account found
    Rpc->>Store: reconstruct AccountInterface (cold token/mint/account)
    Store-->>Rpc: AccountInterface
  else not found
    Rpc->>Indexer: derive PDA seed & lookup
    Indexer-->>Rpc: PDA result?/none
    alt PDA token found
      Rpc->>Store: build CompressedTokenAccount -> AccountInterface::cold_token (use ata_owner_map)
      Store-->>Rpc: AccountInterface::cold_token
    else
      Rpc->>Indexer: get_compressed_token_accounts_by_owner(owner)
      Indexer-->>Rpc: maybe single result
      Rpc-->>Client: AccountInterface::cold_token or NotFound/Error
    end
  end
  Rpc-->>Client: AccountInterface (hot or cold, with wallet_owner if ATA)

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested labels

ai-review

Suggested reviewers

  • ananas-block
  • sergeytimoshin

Poem

✨ Cold accounts trace owners' thread,
Macros split paths where backends tread.
Delegates settle, self-transfers rest,
Metadata actions pass the test.
Version bumps hum — a careful refactor.

🚥 Pre-merge checks | ✅ 2 passed | ❓ 1 inconclusive

❓ Inconclusive checks (1)
  • Title check: The PR title 'chore: refactor interface' is vague and generic, using non-descriptive terms that obscure the substantial architectural changes made across multiple files. Resolution: use a more specific title that captures the main refactoring effort, such as 'refactor: extract LightProgramInterface trait and introduce CodegenBackend abstraction', so reviewers scanning history can see the scope.

✅ Passed checks (2)
  • Description Check: Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: Docstring coverage is 100.00%, which meets the required threshold of 70.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/interface/account_interface.rs`:
- Around line 408-429: The cold branch in TryFrom<AccountInterface> for
TokenAccountInterface currently uses ct.token.owner instead of parsing the
PodAccount from ai.account.data like the hot path does, which can drop
owner_override and break ata_bump() that relies on parsed.owner; change the
cold path to parse the PodAccount (same parsing logic used in the None/hot
branch) from ai.account.data and use parsed.owner when constructing
TokenAccountInterface::cold (returning the parsed PodAccount) instead of using
ct.token.owner.
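A minimal sketch of that fix, reusing names from the prompt; parse_pod_account is a hypothetical stand-in for whatever parsing the hot (None) branch already performs, and the cold constructor's exact signature is assumed.

Some(ref ct) => {
    // Parse from the reconstructed account bytes like the hot path does, so
    // owner_override survives and ata_bump() sees the parsed owner.
    let parsed = parse_pod_account(&ai.account.data)?; // hypothetical helper
    Ok(TokenAccountInterface::cold(ai, parsed.owner, parsed))
}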

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
sdk-libs/program-test/src/program_test/rpc.rs (1)

696-803: 🧹 Nitpick | 🔵 Trivial

Four-pass lookup with progressive fallback is well-structured but has three concerns.

1. Sequential async calls in pass four (lines 786-803):
Each remaining address triggers an individual get_compressed_token_accounts_by_owner call in a serial loop. For large batches with many misses, this serializes latency. In program-test this is tolerable, but if this pattern gets copied to production RPC (photon_indexer), it would be a bottleneck.

2. Silent swallow of ambiguous results (line 795):
Same issue as get_account_interface — when items.len() > 1, the address silently stays None. Consider logging or erroring for consistency with get_token_account_interface.

3. Duplicated token account construction logic:
The pattern of building CompressedTokenAccount { token, account } from indexer results appears in lines 443-446, 525-528, 770-773 (and equivalently in get_token_account_interface). Extracting a helper would reduce the duplication:

Helper to reduce duplication
fn compressed_token_from_indexer_result(
    token_acc: &TokenAccountWithContext,  // adjust type to match
) -> Result<CompressedTokenAccount, RpcError> {
    let compressed_account: CompressedAccount = token_acc
        .compressed_account
        .clone()
        .try_into()
        .map_err(|e| RpcError::CustomError(format!("conversion error: {:?}", e)))?;
    Ok(CompressedTokenAccount {
        token: token_acc.token_data.clone(),
        account: compressed_account,
    })
}
🤖 Fix all issues with AI agents
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 786-803: The fourth pass loop that calls
get_compressed_token_accounts_by_owner on owner_lookup_pubkeys silently leaves
results[original_idx] as None when items.len() > 1, which diverges from
get_token_account_interface/get_account_interface behavior; update the loop
(referencing owner_lookup_pubkeys, owner_lookup_indices,
get_compressed_token_accounts_by_owner, AccountInterface::cold_token, and
results) to detect the ambiguous case (items.len() > 1) and surface an error
instead of dropping the result — either mirror get_token_account_interface’s
error return for ambiguous lookups (return an RpcError with a clear message) or
collect/record partial errors for batch responses so callers are notified of
ambiguous token lookups.
- Around line 448-455: The code passes *address as wallet_owner into
AccountInterface::cold_token (setting PodAccount.owner = address) which is
correct for program-owned token accounts but is a best-effort default and
incorrect for associated token accounts (ATAs) because ata_bump() will fail when
the wallet owner does not equal the ATA pubkey; add a doc comment above the
token lookup paths (where AccountInterface::cold_token is used) explaining this
limitation and recommending callers use get_associated_token_account_interface
when they need correct ATA semantics, and mention ata_bump(), PodAccount.owner,
and AccountInterface::cold_token for clarity.
- Around line 429-471: get_account_interface currently treats multiple matching
token accounts as a non-error and returns None, while
get_token_account_interface returns RpcError::CustomError on ambiguity; update
the fourth-try branch in get_account_interface to mirror
get_token_account_interface by returning an explicit RpcError::CustomError when
result.value.items.len() > 1 (and keep the existing None/Ok behavior for
zero/one matches), or at minimum emit a warning via the logger before returning
None; locate the logic around get_compressed_token_accounts_by_owner and the
handling of items (the block that constructs AccountInterface::cold_token) and
change the ambiguous-case handling to return an RpcError::CustomError with a
clear message about ambiguous token accounts.

Comment on lines 429 to 471

// Third try: lookup in token_compressed_accounts
// Try by onchain_pubkey discriminator, PDA seed, then by token_data.owner
let token_acc = indexer
    .find_token_account_by_onchain_pubkey(&address.to_bytes())
    .or_else(|| indexer.find_token_account_by_pda_seed(&address.to_bytes()));

if let Some(token_acc) = token_acc {
    let compressed_account: CompressedAccount = token_acc
        .compressed_account
        .clone()
        .try_into()
        .map_err(|e| RpcError::CustomError(format!("conversion error: {:?}", e)))?;

    let compressed_token = CompressedTokenAccount {
        token: token_acc.token_data.clone(),
        account: compressed_account,
    };

    return Ok(Response {
        context: Context { slot },
        value: Some(AccountInterface::cold_token(
            *address,
            compressed_token,
            *address,
        )),
    });
}

// Fourth try: lookup by token_data.owner (for program-owned tokens where owner == pubkey)
let result = indexer
    .get_compressed_token_accounts_by_owner(address, None, None)
    .await
    .map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;

let items = result.value.items;
if items.len() == 1 {
    let token_acc = items.into_iter().next().unwrap();
    return Ok(Response {
        context: Context { slot },
        value: Some(AccountInterface::cold_token(*address, token_acc, *address)),
    });
}

⚠️ Potential issue | 🟡 Minor

Inconsistent ambiguity handling between get_account_interface and get_token_account_interface.

The fourth try (lines 464-471) silently returns None when multiple token accounts match, whereas get_token_account_interface (line 548-554) returns an explicit RpcError::CustomError for the same condition. This inconsistency can mask bugs — a caller might assume the account doesn't exist when it's actually ambiguous.

At minimum, log a warning or match the error behavior:

Suggested fix: log or error on ambiguous results
             let items = result.value.items;
             if items.len() == 1 {
                 let token_acc = items.into_iter().next().unwrap();
                 return Ok(Response {
                     context: Context { slot },
                     value: Some(AccountInterface::cold_token(*address, token_acc, *address)),
                 });
+            } else if items.len() > 1 {
+                // Match get_token_account_interface behavior
+                return Err(RpcError::CustomError(format!(
+                    "Ambiguous lookup: found {} compressed token accounts for address {}. \
+                     Use get_compressed_token_accounts_by_owner for multiple accounts.",
+                    items.len(),
+                    address
+                )));
             }
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/rpc.rs` around lines 429 - 471,
get_account_interface currently treats multiple matching token accounts as a
non-error and returns None, while get_token_account_interface returns
RpcError::CustomError on ambiguity; update the fourth-try branch in
get_account_interface to mirror get_token_account_interface by returning an
explicit RpcError::CustomError when result.value.items.len() > 1 (and keep the
existing None/Ok behavior for zero/one matches), or at minimum emit a warning
via the logger before returning None; locate the logic around
get_compressed_token_accounts_by_owner and the handling of items (the block that
constructs AccountInterface::cold_token) and change the ambiguous-case handling
to return an RpcError::CustomError with a clear message about ambiguous token
accounts.

Comment on lines 786 to 803
// Fourth pass: search by token_data.owner (for program-owned tokens)
for (i, pubkey) in owner_lookup_pubkeys.iter().enumerate() {
    let original_idx = owner_lookup_indices[i];
    let result = indexer
        .get_compressed_token_accounts_by_owner(pubkey, None, None)
        .await
        .map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;

    let items = result.value.items;
    if items.len() == 1 {
        let token_acc = items.into_iter().next().unwrap();
        results[original_idx] = Some(AccountInterface::cold_token(
            *addresses[original_idx],
            token_acc,
            *addresses[original_idx],
        ));
    }
}

⚠️ Potential issue | 🟡 Minor

Fourth pass silently drops ambiguous token lookups — diverges from get_token_account_interface.

Same issue noted in get_account_interface's fourth try. When items.len() > 1, the result stays None rather than returning an error. For batch lookups, a silent drop can be especially confusing since the caller has no signal that the address was found but ambiguous.

At minimum, match get_token_account_interface's behavior or collect these as partial errors in the response.
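A sketch of that mirror for the batch loop, extending the snippet above; it reuses the names items, addresses, and original_idx, and the message wording is illustrative.

} else if items.len() > 1 {
    // Mirror get_token_account_interface: surface ambiguity as an error
    // instead of silently leaving results[original_idx] as None.
    return Err(RpcError::CustomError(format!(
        "Ambiguous lookup: found {} compressed token accounts for address {}",
        items.len(),
        addresses[original_idx]
    )));
}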

🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/rpc.rs` around lines 786 - 803, The
fourth pass loop that calls get_compressed_token_accounts_by_owner on
owner_lookup_pubkeys silently leaves results[original_idx] as None when
items.len() > 1, which diverges from
get_token_account_interface/get_account_interface behavior; update the loop
(referencing owner_lookup_pubkeys, owner_lookup_indices,
get_compressed_token_accounts_by_owner, AccountInterface::cold_token, and
results) to detect the ambiguous case (items.len() > 1) and surface an error
instead of dropping the result — either mirror get_token_account_interface’s
error return for ambiguous lookups (return an RpcError with a clear message) or
collect/record partial errors for batch responses so callers are notified of
ambiguous token lookups.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 15

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
sdk-libs/client/src/interface/account_interface.rs (1)

196-206: 🧹 Nitpick | 🔵 Trivial

Consider guarding as_mint() against false positives when matching ColdContext::Account.

The Mint struct contains an account_type discriminator field (value 1) at byte offset 165. When as_mint() now matches ColdContext::Account, it attempts Borsh deserialization on any compressed account without validating this field first. If a non-mint account's bytes happen to deserialize successfully as Mint, callers receive garbage data wrapped in Some(...).

While Borsh's layout strictness makes this unlikely in practice, the CompressedAccountData already carries discriminator information that should be validated before deserialization—consistent with the codebase's emphasis on discriminator checks for account type safety.
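A sketch of the guard idea, assuming the layout described above (account_type == 1 at byte offset 165); the constant and function names are hypothetical.

const MINT_ACCOUNT_TYPE_OFFSET: usize = 165; // per the review; verify against the actual layout
const ACCOUNT_TYPE_MINT: u8 = 1;

fn looks_like_mint(data: &[u8]) -> bool {
    // Cheap discriminator pre-check before attempting Borsh deserialization as Mint.
    data.get(MINT_ACCOUNT_TYPE_OFFSET) == Some(&ACCOUNT_TYPE_MINT)
}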

sdk-libs/program-test/src/program_test/rpc.rs (1)

688-847: 🧹 Nitpick | 🔵 Trivial

Significant duplication between single and batch account-interface lookups.

The four-pass cascade in get_multiple_account_interfaces (hot → onchain_pubkey → PDA seed → token onchain/PDA → token owner) is a structural duplicate of get_account_interface. Any future bug fix or behavior change (like the ambiguity handling above) must be synchronized across both methods.

Consider extracting the single-address resolution logic into a shared helper and having the batch method call it per-address (or better, batch the indexer calls where possible). This would eliminate the synchronization burden and reduce the surface area for inconsistencies.

programs/compressed-token/program/docs/EXTENSIONS.md (2)

7-7: ⚠️ Potential issue | 🟡 Minor

Doc says "5 restricted extensions" but the code now has 6.

RESTRICTED_EXTENSION_TYPES in token_2022_extensions.rs is now [ExtensionType; 6] with MintCloseAuthority added. This sentence should read "6 restricted extensions".

📝 Proposed fix
-The compressed token program supports 16 Token-2022 extension types. **5 restricted extensions** require instruction-level validation checks. Pure mint extensions (metadata, group, etc.) are allowed without explicit instruction support.
+The compressed token program supports 16 Token-2022 extension types. **6 restricted extensions** require instruction-level validation checks. Pure mint extensions (metadata, group, etc.) are allowed without explicit instruction support.

As per coding guidelines: "Verify that all function signatures, struct definitions, and behavior described in the documentation accurately match the actual implementation."


34-51: ⚠️ Potential issue | 🟡 Minor

MintCloseAuthority is missing from the quick reference table and the MintExtensionFlags struct listing.

Three documentation gaps for the newly-added restricted extension:

  1. Quick reference table (lines 34–51): No column for MintCloseAuthority. Other restricted extensions each have a column showing per-instruction behavior.
  2. MintExtensionFlags struct (lines 368–377): Missing the has_mint_close_authority: bool field that now exists in code.
  3. Restricted Extensions section: Sections 1–5 cover Pausable, PermanentDelegate, TransferFeeConfig, TransferHook, and DefaultAccountState. There's no "### 6. MintCloseAuthority" section documenting its constraint, validation path, and unchecked instructions.

As per coding guidelines: "Verify that all function signatures, struct definitions, and behavior described in the documentation accurately match the actual implementation" and "Flag any references to deprecated functions, renamed structs, or changed interfaces."

Also applies to: 368-377

sdk-libs/macros/src/light_pdas/accounts/variant.rs (1)

186-227: ⚠️ Potential issue | 🟠 Major

Pinocchio packed seeds have mismatched Pubkey types for data-rooted pubkey seeds.

For pinocchio, generate_seeds_struct_with_backend emits [u8; 32] for all pubkey fields, but generate_packed_seeds_struct_with_backend still uses Pubkey for non-account, non-numeric seeds from packed_field_type. The pack impl then assigns a [u8; 32] into a Pubkey field, which won't compile for data-rooted pubkey seeds. Align the packed seed field type with the pinocchio representation.

🛠️ Proposed fix (pinocchio packed seed field type alignment)
-                let ty = &sf.packed_field_type;
-                quote! { pub #name: #ty }
+                let ty = if backend.is_pinocchio() && !sf.is_account_seed && !sf.has_le_bytes {
+                    quote! { [u8; 32] }
+                } else {
+                    sf.packed_field_type.clone()
+                };
+                quote! { pub #name: #ty }
🤖 Fix all issues with AI agents
In `@programs/compressed-token/anchor/src/lib.rs`:
- Around line 72-74: The guard in the function that checks token_pool_index
(using token_pool_index and NUM_MAX_POOL_ACCOUNTS) correctly rejects index==0
but returns ErrorCode::InvalidTokenPoolBump whose #[msg(...)] text currently
reads "Max number of token pools reached" and is misleading; update the error
message for InvalidTokenPoolBump to a neutral, accurate phrase such as "Token
pool index out of valid range" so both the index==0 and
index>=NUM_MAX_POOL_ACCOUNTS cases are correctly described (do not add a new
enum variant to avoid shifting discriminants), and ensure references in
add_token_pool and create_token_pool comments/tests reflect the updated message.

In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/compress_or_decompress_ctokens.rs`:
- Around line 98-102: The code currently calls
mint_checks.enforce_extension_state() when decompress_inputs.is_none(), which
enforces extension-state on decompress paths and contradicts EXTENSIONS.md;
update the logic so that mint_checks.enforce_extension_state() is NOT called for
decompress operations — i.e., skip/remove the enforce_extension_state() call
whenever the operation is a decompress (use the existing decompress_inputs flag
or the function's decompress-detection logic in
compress_or_decompress_ctokens.rs to detect decompress) and only enforce
extension state for non-decompress (mint/other) flows.

In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/decompress.rs`:
- Around line 118-138: The code silently drops delegated_amount when an existing
ctoken.delegate() differs from the input_delegate; update the branch that
computes delegate_is_set (the if-let on ctoken.delegate() and the pubkey_eq
check) to emit a diagnostic msg!() warning when a mismatch is detected (include
the existing delegate and input_delegate pubkeys and the delegated_amount)
before returning/continuing, so callers can see when delegate state was skipped;
alternatively, if silent behavior must remain, add a concise inline comment next
to delegate_is_set explaining why differing delegates are intentionally ignored.

In `@programs/compressed-token/program/src/ctoken/transfer/checked.rs`:
- Around line 53-57: The early return from validate_self_transfer(source,
destination, &accounts[ACCOUNT_AUTHORITY])? currently skips all extension
processing (pausable and compressible top-up); instead, before returning for a
self-transfer, explicitly run the same pausable check and compressible top-up
logic used for normal transfers (i.e., call the pausable-extension validator and
the compressible top-up routine you use elsewhere) and only return Ok(()) if
those checks succeed or decide to no-op; alternatively, if the intended
semantics are to allow silent no-ops, add a clear comment documenting that
self-transfers intentionally bypass pausable and top-up checks.

In `@programs/compressed-token/program/tests/extensions_metadata.rs`:
- Around line 68-72: The randomized test always sets RemoveMetadataKeyAction {
idempotent: 1 }, so the non-idempotent branch is never exercised; add
deterministic coverage for idempotent: 0 by creating explicit test cases that
(a) perform a RemoveMetadataKeyAction with idempotent: 0 against a known-present
key and assert success, and (b) perform a RemoveMetadataKeyAction with
idempotent: 0 against a known-absent key and assert the expected error; locate
the test helper/fixture that seeds extensions and metadata (where
random_key(rng) and actions.push(Action::RemoveMetadataKey(...)) are used) and
either (1) add two new tests using RemoveMetadataKeyAction with idempotent: 0
and fixed keys, or (2) extend the generator to sometimes emit idempotent: 0 with
a controlled non-random case so both present and absent key behaviors are
validated.
- Around line 200-253: The test never covers empty name/symbol/uri because
random_value() always returns at least one byte; update the randomized test to
occasionally generate empty metadata fields so
process_extensions_config_with_actions and compute_expected_config are exercised
with zero-length values. Concretely, inside
test_metadata_config_with_actions_randomized change how TokenMetadata.name,
symbol, and uri are populated (or adjust random_value()) to sometimes return an
empty Vec/empty value (e.g., with a small probability based on the StdRng) so
some iterations pass empty metadata into TokenMetadata and the rest remain
randomized.

In `@sdk-libs/macros/src/light_pdas/program/instructions.rs`:
- Around line 531-547: The is_packed_token implementation in the impl of
HasTokenVariant for LightAccountData currently matches LightAccountVariant and
uses a wildcard arm that is unreachable for mint-only builds; replace the match
with a deterministic false return (i.e., make LightAccountData::is_packed_token
always return false) so it matches the PDA-only path and does not imply token
variants exist; update the impl inside mod __trait_impls for
LightAccountData/LightAccountVariant accordingly.
- Around line 68-85: The function generate_light_program_items_with_backend has
too many parameters; create a CodegenConfig struct (e.g., pub(crate) struct
CodegenConfig { compressible_accounts: Vec<CompressibleAccountInfo>, pda_seeds:
Option<Vec<TokenSeedSpec>>, token_seeds: Option<Vec<TokenSeedSpec>>,
instruction_data: Vec<InstructionDataSpec>, crate_ctx:
crate::light_pdas::parsing::CrateContext /*or &CrateContext as appropriate*/,
has_mint_fields: bool, has_ata_fields: bool, pda_variant_code: TokenStream,
enum_name: Option<syn::Ident> }) and replace the long parameter list with a
single config: &CodegenConfig in generate_light_program_items_with_backend;
update callers generate_light_program_items and
generate_light_program_pinocchio_items to construct and pass the config
(preserving ownership/borrowing semantics), adjust any uses inside the body to
read fields from config, and remove or keep the clippy allow as desired to
reduce boolean-swapping risk and simplify call sites.
- Around line 549-561: The #[allow(unused_variables)] on the 9-tuple
destructuring is too broad and may hide real unused-variable warnings; change
the binding so the Anchor (non-pinocchio) branch uses a normal destructure
without the crate-level allow and the Pinocchio branch either returns a tuple of
underscore-prefixed names or is destructured into a separately annotated
binding—specifically adjust the let that creates (compress_accounts,
compress_dispatch_fn, compress_processor_fn, compress_instruction,
processor_module, init_config_accounts, update_config_accounts,
init_config_instruction, update_config_instruction) and the conditional using
backend.is_pinocchio() so only the variables that are legitimately unused in the
Pinocchio path are named with leading underscores (or the Pinocchio result is
captured in a separate let with #[allow(unused_variables)]) rather than applying
#[allow(unused_variables)] to the whole tuple.
- Around line 191-193: The generated method compression_info_mut_opt on
LightAccountVariant currently panics; instead provide a non-panicking sentinel
to return a &mut Option<#account_crate::CompressionInfo>. Add a static mutable
sentinel (e.g. static mut COMPRESSION_INFO_SENTINEL:
Option<#account_crate::CompressionInfo> = None) and return a mutable reference
to it (using an unsafe block) from compression_info_mut_opt, and document the
sentinel usage; alternatively add a field on LightAccountVariant to hold an
Option<#account_crate::CompressionInfo> and return a mutable reference to that
field. Ensure the symbol names compression_info_mut_opt, LightAccountVariant and
`#account_crate`::CompressionInfo are used so callers no longer panic.
- Around line 686-700: The Anchor path hardcodes
AnchorSerialize/AnchorDeserialize and uses `#account_crate`::RentConfig, which
breaks the backend abstraction; update the Anchor branch that emits the
InitConfigParams struct to use the backend-provided derive tokens
(`#serialize_derive` and `#deserialize_derive`) instead of the literal
AnchorSerialize/AnchorDeserialize and replace the direct rent type with a
backend-provided rent_config_type() (or equivalent trait accessor) so both
Pinocchio and Anchor paths use the same backend hooks and consistent re-exported
rent type.
- Around line 811-814: Replace the hardcoded 8-byte discriminators in
instructions.rs with the single source-of-truth constants from the SDK client
(e.g., use INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR,
UPDATE_COMPRESSION_CONFIG_DISCRIMINATOR,
COMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR,
DECOMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR from
sdk-libs/client::interface::instructions) by importing them and referencing
those names instead of the literal arrays; if importing is not possible, add a
compile-time assertion (e.g., const assert_eq!) in the same module to verify the
local arrays equal the SDK constants so divergence fails the build.

In `@sdk-libs/program-test/src/compressible.rs`:
- Around line 231-253: Add a defensive check so missing indexer is surfaced
instead of silently skipping ATA owner recording: before iterating or at the
point you would insert into indexer.ata_owner_map, assert or warn on rpc.indexer
being None (e.g., debug_assert!(rpc.indexer.is_some()) or log::warn! with
context when rpc.indexer.is_none()) and only then call rpc.indexer.as_mut() to
insert; reference rpc.indexer, indexer.ata_owner_map, and the loop that inspects
Token::zero_copy_at / get_compressible_extension so the check is colocated with
the ATA handling.

In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 800-816: The formatting of the `wallet_owner` assignment (the
chain starting with `indexer.ata_owner_map.get(addr).copied().unwrap_or(*addr)`)
does not match rustfmt; run `cargo fmt` and reflow that chain to match the
project's rustfmt rules (either a single-line call or properly wrapped chained
calls) so the assignment to `wallet_owner` in the block that builds
`compressed_token` and sets `results[original_idx] =
Some(AccountInterface::cold_token(*addr, compressed_token, wallet_owner))` is
formatted consistently with the rest of the file.
- Around line 824-829: The loop calling
indexer.get_compressed_token_accounts_by_owner sequentially over
owner_lookup_pubkeys should be converted to parallel execution (e.g., collect
futures into a FuturesUnordered or use futures::future::join_all) so the async
calls run concurrently; create a future for each (pubkey, original_idx) that
performs the .await and map_err, await the collection of results, then iterate
the completed results to restore ordering using original_idx and populate
whatever consumer used `result` before—refer to owner_lookup_pubkeys,
owner_lookup_indices, indexer.get_compressed_token_accounts_by_owner and the
result handling in the loop to locate where to replace the sequential await with
a batched/parallel await.

Comment on lines 72 to 74
if token_pool_index == 0 || token_pool_index >= NUM_MAX_POOL_ACCOUNTS {
    return err!(ErrorCode::InvalidTokenPoolBump);
}

⚠️ Potential issue | 🟡 Minor

Good guard — but the error variant/message is misleading for the index == 0 case.

Rejecting token_pool_index == 0 here is the right call since pool 0 is created via create_token_pool. However, InvalidTokenPoolBump carries the message "Max number of token pools reached." (line 291), which doesn't describe an index-0 rejection at all. A caller debugging a failed add_token_pool(0) would see a confusing error.

Consider either:

  1. Introducing a distinct error variant for the index-0 case (e.g., TokenPoolIndexZeroReserved), or
  2. At minimum, updating the #[msg(...)] on InvalidTokenPoolBump to be accurate for both rejection paths (e.g., "Token pool index out of valid range").

Since this is an on-chain program and adding a new variant shifts subsequent discriminant values, option 2 (just rewording the message) is the safer path if the IDL is already published.
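A minimal sketch of option 2, keeping the variant name and its position so published discriminants do not shift; surrounding variants are elided.

#[error_code]
pub enum ErrorCode {
    // ... earlier variants unchanged ...
    #[msg("Token pool index out of valid range")]
    InvalidTokenPoolBump,
    // ... later variants unchanged ...
}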

🤖 Prompt for AI Agents
In `@programs/compressed-token/anchor/src/lib.rs` around lines 72 - 74, The guard
in the function that checks token_pool_index (using token_pool_index and
NUM_MAX_POOL_ACCOUNTS) correctly rejects index==0 but returns
ErrorCode::InvalidTokenPoolBump whose #[msg(...)] text currently reads "Max
number of token pools reached" and is misleading; update the error message for
InvalidTokenPoolBump to a neutral, accurate phrase such as "Token pool index out
of valid range" so both the index==0 and index>=NUM_MAX_POOL_ACCOUNTS cases are
correctly described (do not add a new enum variant to avoid shifting
discriminants), and ensure references in add_token_pool and create_token_pool
comments/tests reflect the updated message.

Comment on lines 98 to 102
if decompress_inputs.is_none() {
    if let Some(ref checks) = mint_checks {
        checks.enforce_extension_state()?;
    }
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the compress_or_decompress_ctokens function and understand the decompress_inputs logic
rg -n "fn compress_or_decompress_ctokens" --type=rust -A 30

Repository: Lightprotocol/light-protocol

Length of output: 4893


🏁 Script executed:

# Look for where decompress_inputs is set and how it's passed
rg -n "decompress_inputs" --type=rust -B 5 -A 5

Repository: Lightprotocol/light-protocol

Length of output: 15411


🏁 Script executed:

# Find and examine EXTENSIONS.md documentation
fd -t f "EXTENSIONS.md" -o "extensions.md"

Repository: Lightprotocol/light-protocol

Length of output: 217


🏁 Script executed:

# Check for CompressAndClose related logic
rg -n "CompressAndClose\|compress_and_close" --type=rust -C 3

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Find EXTENSIONS.md properly
find . -type f -name "*EXTENSION*" -o -name "*extension*"

Repository: Lightprotocol/light-protocol

Length of output: 1238


🏁 Script executed:

# Check the caller code to understand when decompress_inputs is None vs Some
# Look at how decompress_inputs is constructed before calling compress_or_decompress_ctokens
rg -n "from_compression" --type=rust -B 3 -A 10

Repository: Lightprotocol/light-protocol

Length of output: 3894


🏁 Script executed:

# Search for documentation about decompress behavior in docs or CLAUDE.md files
fd -t f "CLAUDE.md" | head -20

Repository: Lightprotocol/light-protocol

Length of output: 684


🏁 Script executed:

# Read EXTENSIONS.md to check line 41 and understand decompress behavior claims
cat -n programs/compressed-token/program/docs/EXTENSIONS.md | head -60

Repository: Lightprotocol/light-protocol

Length of output: 4466


🏁 Script executed:

# Search for build_mint_extension_cache to find the docstring mentioned in review
rg -n "build_mint_extension_cache" --type=rust -B 3 -A 10

Repository: Lightprotocol/light-protocol

Length of output: 12575


🏁 Script executed:

# Trace where decompress_inputs is set to None or Some - find the caller of process_ctoken_compression
rg -n "process_ctoken_compression" --type=rust -B 5 -A 5

Repository: Lightprotocol/light-protocol

Length of output: 7310


🏁 Script executed:

# Look at the full context around lines 98-102 to understand both branches
cat -n programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/compress_or_decompress_ctokens.rs | sed -n '97,120p'

Repository: Lightprotocol/light-protocol

Length of output: 1069


🏁 Script executed:

# Check what enforce_extension_state does and how it blocks operations
rg -n "enforce_extension_state" --type=rust -B 2 -A 10

Repository: Lightprotocol/light-protocol

Length of output: 5202


🏁 Script executed:

# Look at the full EXTENSIONS.md to understand the decompress vs C&C distinction better
cat -n programs/compressed-token/program/docs/EXTENSIONS.md | sed -n '41,60p'

Repository: Lightprotocol/light-protocol

Length of output: 2028


🏁 Script executed:

# Check if there's documentation about CompressAndClose (C&C) vs plain decompress behavior difference
rg -n "compress.*only\|CompressAndClose\|CompressOnly" --type=rust programs/compressed-token/program/docs/EXTENSIONS.md -i -C 2

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check CLAUDE.md files for compression-related guidance
cat programs/compressed-token/program/CLAUDE.md | grep -i "decompress\|compress" -A 3 -B 1

Repository: Lightprotocol/light-protocol

Length of output: 9429


🏁 Script executed:

# Look for any recent documentation about the two decompress paths
rg -n "decompress_inputs\|CompressAndClose" --type=rust programs/compressed-token/program/docs/ -C 2

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check the exact logic: when is decompress_inputs None vs Some in the parent caller?
cat -n programs/compressed-token/program/src/compressed_token/transfer2/compression/mod.rs | sed -n '88,110p'

Repository: Lightprotocol/light-protocol

Length of output: 1226


🏁 Script executed:

# Check if there's any specific guidance about decompress behavior in compressed-token CLAUDE.md
cat programs/compressed-token/program/CLAUDE.md | grep -i "bypass\|restrict\|plain\|decompress" -A 2 -B 2

Repository: Lightprotocol/light-protocol

Length of output: 4957


🏁 Script executed:

# Look at the full EXTENSIONS.md to see if there's more context about the two paths
cat -n programs/compressed-token/program/docs/EXTENSIONS.md | sed -n '1,100p'

Repository: Lightprotocol/light-protocol

Length of output: 6619


🏁 Script executed:

# Check DecompressCompressOnlyInputs to understand what marks a decompress as "with compress only inputs"
rg -n "struct DecompressCompressOnlyInputs\|try_extract" --type=rust programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/inputs.rs -A 15

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Find DecompressCompressOnlyInputs struct definition
rg -n "struct DecompressCompressOnlyInputs" --type=rust -A 20

Repository: Lightprotocol/light-protocol

Length of output: 3133


🏁 Script executed:

# Search for try_extract to understand when it returns Some vs None
rg -n "fn try_extract" --type=rust programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/ -A 20

Repository: Lightprotocol/light-protocol

Length of output: 2939


Decompress extension-state enforcement contradicts EXTENSIONS.md documentation.

Lines 98-102 enforce mint_checks.enforce_extension_state() when decompress_inputs.is_none(), but EXTENSIONS.md line 41 documents that decompress should allow all restricted extensions unconditionally (line 66: "allowed = Bypasses extension state checks"). The distinction is between decompress operations that consume an input compressed account (decompress_inputs.is_some(), which bypass checks) versus plain decompress without a consumed input (decompress_inputs.is_none(), which currently enforces checks). Either the documentation should clarify this split behavior, or the enforcement should be removed to match the stated decompress bypass guarantee.
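A sketch of the bypass option, assuming a hypothetical is_decompress flag; the real detection should reuse the function's existing decompress logic rather than this stand-in.

// Enforce restricted-extension state only on compression flows; decompress
// bypasses per EXTENSIONS.md ("allowed = Bypasses extension state checks").
if !is_decompress {
    if let Some(ref checks) = mint_checks {
        checks.enforce_extension_state()?;
    }
}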

🤖 Prompt for AI Agents
In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/compress_or_decompress_ctokens.rs`
around lines 98 - 102, The code currently calls
mint_checks.enforce_extension_state() when decompress_inputs.is_none(), which
enforces extension-state on decompress paths and contradicts EXTENSIONS.md;
update the logic so that mint_checks.enforce_extension_state() is NOT called for
decompress operations — i.e., skip/remove the enforce_extension_state() call
whenever the operation is a decompress (use the existing decompress_inputs flag
or the function's decompress-detection logic in
compress_or_decompress_ctokens.rs to detect decompress) and only enforce
extension state for non-decompress (mint/other) flows.

Comment on lines 118 to 138
let Some(delegate_acc) = input_delegate else {
    return Ok(());
};

let delegate_is_set = if let Some(existing_delegate) = ctoken.delegate() {
    pubkey_eq(existing_delegate.array_ref(), delegate_acc.key())
} else {
    ctoken
        .base
        .set_delegate(Some(Pubkey::from(*delegate_acc.key())))?;
    true
};

if delegate_is_set && delegated_amount > 0 {
    let current = ctoken.base.delegated_amount.get();
    ctoken.base.delegated_amount.set(
        current
            .checked_add(delegated_amount)
            .ok_or(ProgramError::ArithmeticOverflow)?,
    );
}

⚠️ Potential issue | 🟡 Minor

Silent delegate mismatch during decompress — verify this is the intended design.

When the CToken already has a delegate that differs from the input delegate, delegate_is_set evaluates to false and the input's delegated_amount is silently dropped. The EXTENSIONS.md doc confirms this is by design ("If existing delegate differs: skip, silent ignore"), but the consequence is real:

If two compressed tokens with different delegates are decompressed into the same CToken, the second token's delegate and delegated_amount are silently lost. This could happen if a client batches multiple decompress operations targeting one CToken.

Consider whether this should at least emit a warning msg!() or return an error instead of silently discarding delegate state. If this is truly the desired behavior, a brief inline comment explaining why it's safe to drop the delegate info would help future readers.

💡 Suggestion: add a diagnostic message or comment
 let delegate_is_set = if let Some(existing_delegate) = ctoken.delegate() {
-    pubkey_eq(existing_delegate.array_ref(), delegate_acc.key())
+    let matches = pubkey_eq(existing_delegate.array_ref(), delegate_acc.key());
+    if !matches {
+        msg!("Decompress: input delegate differs from existing, skipping delegation update");
+    }
+    matches
 } else {
     ctoken
         .base
         .set_delegate(Some(Pubkey::from(*delegate_acc.key())))?;
     true
 };
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
let Some(delegate_acc) = input_delegate else {
    return Ok(());
};

let delegate_is_set = if let Some(existing_delegate) = ctoken.delegate() {
    let matches = pubkey_eq(existing_delegate.array_ref(), delegate_acc.key());
    if !matches {
        msg!("Decompress: input delegate differs from existing, skipping delegation update");
    }
    matches
} else {
    ctoken
        .base
        .set_delegate(Some(Pubkey::from(*delegate_acc.key())))?;
    true
};

if delegate_is_set && delegated_amount > 0 {
    let current = ctoken.base.delegated_amount.get();
    ctoken.base.delegated_amount.set(
        current
            .checked_add(delegated_amount)
            .ok_or(ProgramError::ArithmeticOverflow)?,
    );
}
🤖 Prompt for AI Agents
In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/decompress.rs`
around lines 118 - 138, The code silently drops delegated_amount when an
existing ctoken.delegate() differs from the input_delegate; update the branch
that computes delegate_is_set (the if-let on ctoken.delegate() and the pubkey_eq
check) to emit a diagnostic msg!() warning when a mismatch is detected (include
the existing delegate and input_delegate pubkeys and the delegated_amount)
before returning/continuing, so callers can see when delegate state was skipped;
alternatively, if silent behavior must remain, add a concise inline comment next
to delegate_is_set explaining why differing delegates are intentionally ignored.

Comment on lines 53 to 57
// Self-transfer: validate authority but skip token movement to avoid
// double mutable borrow panic in pinocchio process_transfer.
if validate_self_transfer(source, destination, &accounts[ACCOUNT_AUTHORITY])? {
    return Ok(());
}

⚠️ Potential issue | 🟠 Major

Self-transfer early exit bypasses extension checks (pausable, compressible top-up).

Returning Ok(()) here skips all downstream processing. Two things worth considering:

  1. Pausable accounts: If the token account has a pausable extension and is currently paused, a self-transfer will silently succeed instead of being rejected. Depending on your semantics for "paused," this may be acceptable (no funds move) or a policy violation (paused means no transfer instructions should succeed).

  2. Compressible top-up: Self-transfers won't trigger the rent top-up calculation/payment. If users rely on self-transfers to keep accounts alive, the top-up window could be missed.

If both are intentional (a self-transfer is truly a no-op and doesn't need guardrails), a brief comment documenting that decision would help future readers understand the choice.
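A sketch of the guarded variant, where check_pausable_not_paused and process_compressible_top_up are hypothetical stand-ins for whatever validators the normal transfer path uses.

if validate_self_transfer(source, destination, &accounts[ACCOUNT_AUTHORITY])? {
    // Keep extension semantics consistent even though no funds move.
    check_pausable_not_paused(source)?;             // hypothetical pausable check
    process_compressible_top_up(source, accounts)?; // hypothetical top-up routine
    return Ok(());
}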

🤖 Prompt for AI Agents
In `@programs/compressed-token/program/src/ctoken/transfer/checked.rs` around
lines 53 - 57, The early return from validate_self_transfer(source, destination,
&accounts[ACCOUNT_AUTHORITY])? currently skips all extension processing
(pausable and compressible top-up); instead, before returning for a
self-transfer, explicitly run the same pausable check and compressible top-up
logic used for normal transfers (i.e., call the pausable-extension validator and
the compressible top-up routine you use elsewhere) and only return Ok(()) if
those checks succeed or decide to no-op; alternatively, if the intended
semantics are to allow silent no-ops, add a clear comment documenting that
self-transfers intentionally bypass pausable and top-up checks.

Comment on lines 68 to 72
2 => actions.push(Action::RemoveMetadataKey(RemoveMetadataKeyAction {
    extension_index: 0,
    key: random_key(rng),
    idempotent: 1,
})),

🧹 Nitpick | 🔵 Trivial

idempotent is always 1 — non-idempotent removal path is untested.

Every RemoveMetadataKey action sets idempotent: 1. If idempotent: 0 triggers an error when the key doesn't exist, that branch is never exercised here. Given the random nature of the key selection, a non-idempotent remove hitting a missing key would cause a hard-to-reproduce failure, so I understand the choice — but consider adding a dedicated (non-randomized) test case for idempotent: 0 with both present and absent keys to cover that path.
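A minimal sketch of the deterministic coverage, with illustrative fixed keys; how keys are seeded and errors asserted depends on this test file's harness, and the key construction is assumed to match random_key's output type.

// Known-present key with idempotent: 0 should succeed.
let remove_present = Action::RemoveMetadataKey(RemoveMetadataKeyAction {
    extension_index: 0,
    key: b"seeded-key".to_vec(), // illustrative: added to metadata beforehand
    idempotent: 0,
});

// Known-absent key with idempotent: 0 should surface the program's error.
let remove_absent = Action::RemoveMetadataKey(RemoveMetadataKeyAction {
    extension_index: 0,
    key: b"never-added".to_vec(), // illustrative: never inserted
    idempotent: 0,
});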

🤖 Prompt for AI Agents
In `@programs/compressed-token/program/tests/extensions_metadata.rs` around lines
68 - 72, The randomized test always sets RemoveMetadataKeyAction { idempotent: 1
}, so the non-idempotent branch is never exercised; add deterministic coverage
for idempotent: 0 by creating explicit test cases that (a) perform a
RemoveMetadataKeyAction with idempotent: 0 against a known-present key and
assert success, and (b) perform a RemoveMetadataKeyAction with idempotent: 0
against a known-absent key and assert the expected error; locate the test
helper/fixture that seeds extensions and metadata (where random_key(rng) and
actions.push(Action::RemoveMetadataKey(...)) are used) and either (1) add two
new tests using RemoveMetadataKeyAction with idempotent: 0 and fixed keys, or
(2) extend the generator to sometimes emit idempotent: 0 with a controlled
non-random case so both present and absent key behaviors are validated.

Comment on lines 811 to 814
pub const INITIALIZE_COMPRESSION_CONFIG: [u8; 8] = [133, 228, 12, 169, 56, 76, 222, 61];
pub const UPDATE_COMPRESSION_CONFIG: [u8; 8] = [135, 215, 243, 81, 163, 146, 33, 70];
pub const COMPRESS_ACCOUNTS_IDEMPOTENT: [u8; 8] = [70, 236, 171, 120, 164, 93, 113, 181];
pub const DECOMPRESS_ACCOUNTS_IDEMPOTENT: [u8; 8] = [114, 67, 61, 123, 234, 31, 1, 112];

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Search for the discriminator arrays in the codebase
echo "=== Searching for discriminator values ==="
rg -n "133, 228, 12, 169, 56, 76, 222, 61|135, 215, 243, 81, 163, 146, 33, 70|70, 236, 171, 120, 164, 93, 113, 181|114, 67, 61, 123, 234, 31, 1, 112" --type rust

echo ""
echo "=== Searching for INITIALIZE_COMPRESSION_CONFIG constant ==="
rg -n "INITIALIZE_COMPRESSION_CONFIG|UPDATE_COMPRESSION_CONFIG|COMPRESS_ACCOUNTS_IDEMPOTENT|DECOMPRESS_ACCOUNTS_IDEMPOTENT" --type rust

echo ""
echo "=== Looking for light-client discriminator definitions ==="
fd -e rs -e md | xargs grep -l "light-client\|light_client" | head -10

echo ""
echo "=== Checking the file to understand context ==="
head -850 sdk-libs/macros/src/light_pdas/program/instructions.rs | tail -80

Repository: Lightprotocol/light-protocol

Length of output: 9558


🏁 Script executed:

#!/bin/bash

# Look for CLAUDE.md or documentation about discriminators in light-protocol
echo "=== Checking for CLAUDE.md files with discriminator guidance ==="
find . -name "CLAUDE.md" -o -name "DISCRIMINATOR.md" | xargs grep -l "discriminator" 2>/dev/null | head -5

echo ""
echo "=== Examining light-protocol documentation ==="
if [ -f "program-libs/account-checks/docs/DISCRIMINATOR.md" ]; then
  cat program-libs/account-checks/docs/DISCRIMINATOR.md
fi

echo ""
echo "=== Checking if SDK uses LightDiscriminator derive for similar structures ==="
rg -n "LightDiscriminator" --type rust -A 2 -B 2 | head -40

Repository: Lightprotocol/light-protocol

Length of output: 10332


Hardcoded instruction discriminators should be imported from the SDK client, not duplicated.

These four 8-byte discriminators are defined as public constants in sdk-libs/client/src/interface/instructions.rs (e.g., INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR), but here they are hardcoded locally. While all instances currently match, duplicating these constants across the codebase creates a maintenance risk: if the SDK client constants ever change, the generated Pinocchio code will silently diverge unless manually updated.

Instead, import and reference the constants from the client SDK, or add a compile-time assertion verifying the hardcoded values match their source of truth.
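A sketch of the compile-time fallback; the SDK import path and constant name follow the review's suggestion and are assumptions.

// Byte-wise const comparison (PartialEq on arrays is not const).
const fn bytes8_eq(a: [u8; 8], b: [u8; 8]) -> bool {
    let mut i = 0;
    while i < 8 {
        if a[i] != b[i] {
            return false;
        }
        i += 1;
    }
    true
}

// Fails the build if the local copy ever diverges from the SDK constant.
const _: () = assert!(bytes8_eq(
    INITIALIZE_COMPRESSION_CONFIG,
    light_client::interface::instructions::INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR,
));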

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/instructions.rs` around lines 811 -
814, Replace the hardcoded 8-byte discriminators in instructions.rs with the
single source-of-truth constants from the SDK client (e.g., use
INITIALIZE_COMPRESSION_CONFIG_DISCRIMINATOR,
UPDATE_COMPRESSION_CONFIG_DISCRIMINATOR,
COMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR,
DECOMPRESS_ACCOUNTS_IDEMPOTENT_DISCRIMINATOR from
sdk-libs/client::interface::instructions) by importing them and referencing
those names instead of the literal arrays; if importing is not possible, add a
compile-time assertion (e.g., const assert_eq!) in the same module to verify the
local arrays equal the SDK constants so divergence fails the build.

Comment on lines +231 to +253
// Track ATA wallet owners before compression closes the on-chain accounts.
// For ATAs, the compressed token owner is the ATA pubkey (not the wallet owner),
// so we need to record the wallet owner for cold ATA lookups during decompression.
{
    use light_zero_copy::traits::ZeroCopyAt;
    for pubkey in compress_accounts_compression_only
        .iter()
        .chain(compress_accounts_normal.iter())
    {
        if let Some(account) = rpc.context.get_account(pubkey) {
            if let Ok((ctoken, _)) = Token::zero_copy_at(&account.data) {
                if let Some(ext) = ctoken.get_compressible_extension() {
                    if ext.is_ata != 0 {
                        let wallet_owner = Pubkey::from(ctoken.owner.to_bytes());
                        if let Some(indexer) = rpc.indexer.as_mut() {
                            indexer.ata_owner_map.insert(*pubkey, wallet_owner);
                        }
                    }
                }
            }
        }
    }
}

🧹 Nitpick | 🔵 Trivial

Silent no-op when rpc.indexer is None could hide bugs.

The if let Some(indexer) = rpc.indexer.as_mut() guard on line 245 means that if the indexer isn't configured, ATA wallet-owner tracking is silently skipped. Downstream cold-ATA lookups would then fail with no indication of why. In the LightProgramTest context the indexer should always be present, so a debug_assert! or log::warn! would catch misconfiguration early rather than manifesting as a mysterious decompression failure later.

Suggested defensive check
-                        if let Some(indexer) = rpc.indexer.as_mut() {
-                            indexer.ata_owner_map.insert(*pubkey, wallet_owner);
-                        }
+                        if let Some(indexer) = rpc.indexer.as_mut() {
+                            indexer.ata_owner_map.insert(*pubkey, wallet_owner);
+                        } else {
+                            log::warn!(
+                                "claim_and_compress: indexer not set, ATA wallet owner for {} will not be tracked",
+                                pubkey
+                            );
+                        }
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/compressible.rs` around lines 231 - 253, Add a
defensive check so missing indexer is surfaced instead of silently skipping ATA
owner recording: before iterating or at the point you would insert into
indexer.ata_owner_map, assert or warn on rpc.indexer being None (e.g.,
debug_assert!(rpc.indexer.is_some()) or log::warn! with context when
rpc.indexer.is_none()) and only then call rpc.indexer.as_mut() to insert;
reference rpc.indexer, indexer.ata_owner_map, and the loop that inspects
Token::zero_copy_at / get_compressible_extension so the check is colocated with
the ATA handling.

Comment on lines +824 to +829
for (i, pubkey) in owner_lookup_pubkeys.iter().enumerate() {
    let original_idx = owner_lookup_indices[i];
    let result = indexer
        .get_compressed_token_accounts_by_owner(pubkey, None, None)
        .await
        .map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;

🧹 Nitpick | 🔵 Trivial

Sequential .await in a loop — consider batching if this path is hit with many addresses.

Each iteration of this loop makes an individual async call to get_compressed_token_accounts_by_owner. For test scenarios with many unresolved addresses, this becomes O(n) sequential round-trips. Since TestIndexer is in-memory the latency is negligible today, but if this pattern ever gets reused for a real RPC backend, it'll be a bottleneck.

A FuturesUnordered / join_all approach would parallelize these, but I'd understand deferring that given the test-only context. Just worth keeping in mind.
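A sketch of the batched variant, assuming get_compressed_token_accounts_by_owner takes &self so all futures can borrow the indexer concurrently:

use futures::future::join_all;

// Build one future per lookup, then await them all concurrently.
let lookups = owner_lookup_pubkeys
    .iter()
    .map(|pubkey| indexer.get_compressed_token_accounts_by_owner(pubkey, None, None));
let results = join_all(lookups).await;

// join_all preserves input order, so index i still maps to owner_lookup_indices[i].
for (i, result) in results.into_iter().enumerate() {
    let original_idx = owner_lookup_indices[i];
    let accounts = result
        .map_err(|e| RpcError::CustomError(format!("indexer error: {}", e)))?;
    // ... consume `accounts` for original_idx exactly as the sequential loop does ...
}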

🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/rpc.rs` around lines 824 - 829, The
loop calling indexer.get_compressed_token_accounts_by_owner sequentially over
owner_lookup_pubkeys should be converted to parallel execution (e.g., collect
futures into a FuturesUnordered or use futures::future::join_all) so the async
calls run concurrently; create a future for each (pubkey, original_idx) that
performs the .await and map_err, await the collection of results, then iterate
the completed results to restore ordering using original_idx and populate
whatever consumer used `result` before—refer to owner_lookup_pubkeys,
owner_lookup_indices, indexer.get_compressed_token_accounts_by_owner and the
result handling in the loop to locate where to replace the sequential await with
a batched/parallel await.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
programs/compressed-token/program/src/ctoken/transfer/default.rs (1)

46-57: ⚠️ Potential issue | 🟠 Major

Self-transfer check correctly placed before the hot path to prevent double mutable borrow panic.

The ordering here is important and correct: the self-transfer check must precede the 165-byte hot path because process_transfer would panic on source == destination due to a double mutable borrow. The pubkey_eq cost is negligible so it doesn't hurt the hot path.

That said, the same concern from checked.rs applies: this early exit bypasses pausable and compressible top-up checks. If the policy decision is that self-transfers are true no-ops that don't need those guardrails, adding a brief doc comment (e.g., // Self-transfers are intentional no-ops; pausable/top-up checks are not needed because no state changes occur.) would help future readers understand the intent was deliberate, not accidental.

Suggested documentation
     // Self-transfer: validate authority but skip token movement to avoid
     // double mutable borrow panic in pinocchio process_transfer.
+    // Note: This intentionally skips pausable and compressible top-up checks
+    // because no token state changes occur on self-transfers.
     if validate_self_transfer(source, destination, &accounts[ACCOUNT_AUTHORITY])? {
         return Ok(());
     }
program-libs/token-interface/src/token_2022_extensions.rs (1)

150-160: ⚠️ Potential issue | 🟡 Minor

Doc comment on has_restricted_extensions is now stale — doesn't list MintCloseAuthority.

The comment on lines 151-152 enumerates the restricted extensions but omits the newly added MintCloseAuthority. Keeping this in sync avoids confusion for the next person who reads this.

📝 Suggested fix
     /// Returns true if mint has any restricted extensions.
-    /// Restricted extensions (Pausable, PermanentDelegate, TransferFee, TransferHook,
-    /// DefaultAccountState) require compression_only mode when compressing tokens.
+    /// Restricted extensions (Pausable, PermanentDelegate, TransferFee, TransferHook,
+    /// DefaultAccountState, MintCloseAuthority) require compression_only mode when compressing tokens.
     pub const fn has_restricted_extensions(&self) -> bool {
programs/compressed-token/program/src/extensions/mod.rs (1)

140-148: 🧹 Nitpick | 🔵 Trivial

Doc comment "without heap allocations" is inaccurate — line 234 collects into a Vec.

The ArrayVec avoids repeated heap reallocations during building, but configs.into_iter().collect() on line 234 performs a final heap allocation. The comment could say "minimizing heap allocations" instead.

📝 Suggested doc fix
-/// Build metadata config directly without heap allocations using ArrayVec
+/// Build metadata config using stack-based ArrayVec to minimize heap allocations
 /// Processes all possible keys and determines final state (SPL Token-2022 compatible)
programs/compressed-token/program/src/compressed_token/transfer2/check_extensions.rs (1)

121-146: ⚠️ Potential issue | 🟠 Major

Decompress-mode compressions may be incorrectly blocked by the new restricted-extensions guard.

The new else if at lines 140–145 blocks any compression entry where has_restricted_extensions && deny_restricted_extensions && !is_compress_and_close(). Per the doc comment (lines 61–64) and EXTENSIONS.md (line 41), Decompress should bypass restricted-extension enforcement. But a Decompress-mode entry in compressions with deny_restricted_extensions=true (because other parts of the batch produce compressed outputs) would hit this error path.

If a mixed-mode batch (some Compress, some Decompress for different mints) is valid, this condition needs to also exclude Decompress:

Proposed fix
-            } else if checks.has_restricted_extensions
-                && deny_restricted_extensions
-                && !compression.mode.is_compress_and_close()
-            {
+            } else if checks.has_restricted_extensions
+                && deny_restricted_extensions
+                && !compression.mode.is_compress_and_close()
+                && !compression.mode.is_decompress()
+            {

If the protocol guarantees that Decompress never co-occurs with compressed outputs, this is safe as-is—but please confirm.

#!/bin/bash
# Check if ZCompressionMode has is_decompress() method and how Decompress appears in compressions
rg -n "is_decompress|fn is_compress_and_close|enum ZCompressionMode|Decompress" --type=rust -C 3 | head -80
programs/compressed-token/program/docs/EXTENSIONS.md (1)

367-377: ⚠️ Potential issue | 🟡 Minor

MintExtensionFlags struct in docs is missing the new has_mint_close_authority field.

The code in check_mint_extensions.rs:238 now returns has_mint_close_authority in MintExtensionFlags, but the struct shown here doesn't include it. Per coding guidelines, struct definitions in documentation must accurately match the actual implementation.

Proposed fix
 MintExtensionFlags {
     has_pausable: bool,
     has_permanent_delegate: bool,
     has_default_account_state: bool,  // Extension exists (restricted)
     default_state_frozen: bool,       // Current state is Frozen (for CToken creation)
     has_transfer_fee: bool,
     has_transfer_hook: bool,
+    has_mint_close_authority: bool,
 }
🤖 Fix all issues with AI agents
In `@program-libs/token-interface/src/token_2022_extensions.rs`:
- Around line 6-16: Update the doc comment for RESTRICTED_EXTENSION_TYPES to
match the actual enum entries: include "mint close authority"
(ExtensionType::MintCloseAuthority) alongside pausable, permanent delegate,
transfer fees, transfer hooks, and default frozen state, and reword the sentence
to reflect that these extension types require compression_only mode and have
special behaviors incompatible with standard compressed token transfers; ensure
the comment references RESTRICTED_EXTENSION_TYPES and
ExtensionType::MintCloseAuthority so it stays consistent with the array content.

In `@programs/compressed-token/program/src/ctoken/transfer/shared.rs`:
- Around line 50-53: Replace the misleading ProgramError::InvalidAccountData in
the self-transfer authorization check with the project-standard
ErrorCode::OwnerMismatch; specifically, in the block that tests is_owner and
is_delegate (the if !is_owner && !is_delegate branch in transfer/shared.rs),
return ErrorCode::OwnerMismatch (converted to a ProgramError as appropriate in
this codebase) instead of ProgramError::InvalidAccountData so it matches
owner_validation.rs semantics for owner/delegate failures.

In `@sdk-libs/macros/src/light_pdas/backend.rs`:
- Around line 134-136: The current borrow_error() implementation returns a bare
"?" which loses explicit error mapping; update the borrow_error() function in
backend.rs to emit code that maps borrow failures into
light_account::LightSdkTypesError::ConstraintViolation (matching the Pinocchio
backend behavior), i.e., produce a TokenStream that converts the underlying
error into LightSdkTypesError::ConstraintViolation so borrow failures propagate
with that explicit error variant.

In `@sdk-libs/macros/src/light_pdas/program/compress.rs`:
- Around line 450-462: The pinocchio branch in compress.rs builds an assert
message that lacks the contact guidance present in other branches; update the
assert in the backend.is_pinocchio() block (the const COMPRESSED_SIZE
calculation using `#qualified_type`::INIT_SPACE) so its panic string matches the
other branches by appending " If you need support for larger accounts, send a
message to team@lightprotocol.com" to the existing error text about exceeding
the 800-byte limit for compressed accounts.
- Around line 413-437: The generated impl for `#enum_name`::process_compress
hardcodes pinocchio::account_info::AccountInfo instead of using the
backend-provided type; fix by obtaining let account_info_type =
backend.account_info_type(); and interpolate that symbol into the quoted code
for the accounts parameter and any other account-info occurrences, keeping the
rest of the method (borsh deserialization into
`#account_crate`::CompressAndCloseParams, call to
`#account_crate`::process_compress_pda_accounts_idempotent, and error mapping)
unchanged.
- Around line 379-401: The non-Pinocchio branch hardcodes AccountInfo type in
impl `#enum_name`::compress_dispatch instead of using the backend-provided type;
replace the hardcoded anchor_lang::prelude::AccountInfo<'info> with the
previously fetched backend.account_info_type() value (the result of
backend.account_info_type()) in the compress_dispatch signature and any uses,
ensuring the generated impl for `#enum_name` and references to `#account_crate`
still compile and updating imports/usages accordingly so the backend abstraction
is honored.

Comment on lines 6 to 16
/// Restricted extension types that require compression_only mode.
/// These extensions have special behaviors (pausable, permanent delegate, fees, hooks,
/// default frozen state) that are incompatible with standard compressed token transfers.
pub const RESTRICTED_EXTENSION_TYPES: [ExtensionType; 5] = [
pub const RESTRICTED_EXTENSION_TYPES: [ExtensionType; 6] = [
ExtensionType::Pausable,
ExtensionType::PermanentDelegate,
ExtensionType::TransferFeeConfig,
ExtensionType::TransferHook,
ExtensionType::DefaultAccountState,
ExtensionType::MintCloseAuthority,
];

⚠️ Potential issue | 🟡 Minor

Doc comment on RESTRICTED_EXTENSION_TYPES also needs updating.

Lines 7-8 list "pausable, permanent delegate, fees, hooks, default frozen state" but now the array includes MintCloseAuthority. Same staleness issue as the method doc.

📝 Suggested fix
 /// Restricted extension types that require compression_only mode.
-/// These extensions have special behaviors (pausable, permanent delegate, fees, hooks,
-/// default frozen state) that are incompatible with standard compressed token transfers.
+/// These extensions have special behaviors (pausable, permanent delegate, fees, hooks,
+/// default frozen state, mint close authority) that are incompatible with standard compressed token transfers.
 pub const RESTRICTED_EXTENSION_TYPES: [ExtensionType; 6] = [
🤖 Prompt for AI Agents
In `@program-libs/token-interface/src/token_2022_extensions.rs` around lines 6 -
16, Update the doc comment for RESTRICTED_EXTENSION_TYPES to match the actual
enum entries: include "mint close authority" (ExtensionType::MintCloseAuthority)
alongside pausable, permanent delegate, transfer fees, transfer hooks, and
default frozen state, and reword the sentence to reflect that these extension
types require compression_only mode and have special behaviors incompatible with
standard compressed token transfers; ensure the comment references
RESTRICTED_EXTENSION_TYPES and ExtensionType::MintCloseAuthority so it stays
consistent with the array content.

Comment on lines 50 to 53
if !is_owner && !is_delegate {
msg!("Self-transfer authority must be owner or delegate");
return Err(ProgramError::InvalidAccountData);
}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, find the ErrorCode enum definition in the compressed-token program
fd 'error' --type f -e rs programs/compressed-token/program/src | head -20

Repository: Lightprotocol/light-protocol

Length of output: 141


🏁 Script executed:

# Look for ErrorCode enum variants
rg 'enum ErrorCode|InvalidAuthority|InvalidOwner|UnauthorizedSigner|InvalidAccountData' --type=rust -A 2 programs/compressed-token/program/src | head -40

Repository: Lightprotocol/light-protocol

Length of output: 98


🏁 Script executed:

# Check the owner_validation.rs utilities
find programs/compressed-token/program/src -name '*owner_validation*' -o -name '*validation*' | head -10

Repository: Lightprotocol/light-protocol

Length of output: 136


🏁 Script executed:

# Get the full context of the file being reviewed
cat -n programs/compressed-token/program/src/ctoken/transfer/shared.rs | sed -n '40,65p'

Repository: Lightprotocol/light-protocol

Length of output: 1330


🏁 Script executed:

# Check CLAUDE.md or docs for error handling guidance
find programs/compressed-token/program -name 'CLAUDE.md' -o -name '*.md' -path '*/docs/*' | xargs grep -l 'error\|Error\|ErrorCode' 2>/dev/null | head -5

Repository: Lightprotocol/light-protocol

Length of output: 358


🏁 Script executed:

# Examine the owner_validation.rs file
cat -n programs/compressed-token/program/src/shared/owner_validation.rs

Repository: Lightprotocol/light-protocol

Length of output: 5435


🏁 Script executed:

# Look for ErrorCode enum definition
find programs/compressed-token/program/src \( -name 'error.rs' -o -name 'lib.rs' \) | xargs grep -l 'ErrorCode'

Repository: Lightprotocol/light-protocol

Length of output: 304


🏁 Script executed:

# Check the full ErrorCode enum in lib.rs or error.rs
rg 'enum ErrorCode' -A 30 programs/compressed-token/program/src

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check CLAUDE.md for error handling guidance
cat programs/compressed-token/program/CLAUDE.md | grep -A 10 -B 2 'error\|Error\|validation'

Repository: Lightprotocol/light-protocol

Length of output: 8759


🏁 Script executed:

# Get broader context: find the function name and purpose
cat -n programs/compressed-token/program/src/ctoken/transfer/shared.rs | sed -n '30,60p'

Repository: Lightprotocol/light-protocol

Length of output: 1301


🏁 Script executed:

# Check if there are other uses of this error pattern in the file
rg 'InvalidAccountData' programs/compressed-token/program/src/ctoken/transfer/shared.rs -B 3 -A 1

Repository: Lightprotocol/light-protocol

Length of output: 1248


🏁 Script executed:

# Verify ErrorCode enum to see all available variants
rg 'pub enum ErrorCode|OwnerMismatch|InvalidAuthority' programs/compressed-token/anchor/src/lib.rs -A 2

Repository: Lightprotocol/light-protocol

Length of output: 448


Use ErrorCode::OwnerMismatch instead of InvalidAccountData for authorization failures.

InvalidAccountData is semantically misleading here—the problem isn't malformed data but failed authorization. The codebase already establishes ErrorCode::OwnerMismatch as the standard error for owner/delegate validation failures (see owner_validation.rs lines 77 and 120). Use that instead for consistency.

Suggested fix
     if !is_owner && !is_delegate {
         msg!("Self-transfer authority must be owner or delegate");
-        return Err(ProgramError::InvalidAccountData);
+        return Err(ErrorCode::OwnerMismatch.into());
     }
🤖 Prompt for AI Agents
In `@programs/compressed-token/program/src/ctoken/transfer/shared.rs` around lines
50 - 53, Replace the misleading ProgramError::InvalidAccountData in the
self-transfer authorization check with the project-standard
ErrorCode::OwnerMismatch; specifically, in the block that tests is_owner and
is_delegate (the if !is_owner && !is_delegate branch in transfer/shared.rs),
return ErrorCode::OwnerMismatch (converted to a ProgramError as appropriate in
this codebase) instead of ProgramError::InvalidAccountData so it matches
owner_validation.rs semantics for owner/delegate failures.

Comment on lines 134 to 136
fn borrow_error(&self) -> TokenStream {
quote! { ? }
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cd /repo && find . -type f -name "backend.rs" | grep -E "light_pdas|macros"

Repository: Lightprotocol/light-protocol

Length of output: 127


🏁 Script executed:

cd /repo && cat -n sdk-libs/macros/src/light_pdas/backend.rs | head -200

Repository: Lightprotocol/light-protocol

Length of output: 127


🏁 Script executed:

cd /repo && rg "borrow_error" --type rust -B 3 -A 5 sdk-libs/macros/

Repository: Lightprotocol/light-protocol

Length of output: 127


🏁 Script executed:

cd /repo && rg "AccountBorrowFailed\|BorrowAccountDataFailed" --type rust -B 2 -A 2

Repository: Lightprotocol/light-protocol

Length of output: 127


🏁 Script executed:

find . -type f -name "backend.rs" | grep -E "light_pdas|macros"

Repository: Lightprotocol/light-protocol

Length of output: 115


🏁 Script executed:

cat -n sdk-libs/macros/src/light_pdas/backend.rs | head -200

Repository: Lightprotocol/light-protocol

Length of output: 7840


🏁 Script executed:

rg "borrow_error" --type rust -B 3 -A 5 sdk-libs/macros/

Repository: Lightprotocol/light-protocol

Length of output: 4589


🏁 Script executed:

rg "AccountBorrowFailed\|BorrowAccountDataFailed" --type rust

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

rg "enum LightSdkTypesError\|impl.*LightSdkTypesError" --type rust -A 15 sdk-libs/

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

rg "Borsh\|BorrowFailed\|BorrowAccountData" --type rust sdk-libs/ | grep -E "LightSdkTypesError|error"

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

fd "error.rs" sdk-libs/ --type f

Repository: Lightprotocol/light-protocol

Length of output: 377


🏁 Script executed:

cat -n sdk-libs/sdk-types/src/error.rs

Repository: Lightprotocol/light-protocol

Length of output: 7053


🏁 Script executed:

rg "LightSdkTypesError" --type rust sdk-libs/sdk-types/ -B 2 -A 10

Repository: Lightprotocol/light-protocol

Length of output: 50385


🏁 Script executed:

cat -n sdk-libs/macros/src/light_pdas/program/compress.rs | grep -A 10 -B 5 "try_borrow_data"

Repository: Lightprotocol/light-protocol

Length of output: 2617


🏁 Script executed:

rg "try_borrow_data" --type rust sdk-libs/macros/ -B 2 -A 5

Repository: Lightprotocol/light-protocol

Length of output: 2663


🏁 Script executed:

rg "BorrowAccountDataFailed\|ConstraintViolation" --type rust sdk-libs/sdk-types/src/error.rs

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

cat -n program-libs/account-checks/src/error.rs | head -100

Repository: Lightprotocol/light-protocol

Length of output: 4656


🏁 Script executed:

rg "From.*BorrowError\|impl.*BorrowError" --type rust sdk-libs/

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

rg "try_borrow_data\(\)" --type rust program-libs/account-checks/ -B 2 -A 2

Repository: Lightprotocol/light-protocol

Length of output: 2325


🏁 Script executed:

rg "use.*AccountError\|pub use.*AccountError" --type rust sdk-libs/sdk-types/ | head -20

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

rg "ConstraintViolation" --type rust sdk-libs/sdk-types/src/interface/program/ -B 1 -A 1 | head -30

Repository: Lightprotocol/light-protocol

Length of output: 2386


🏁 Script executed:

rg "From<std::cell::BorrowError>\|From<std::cell::BorrowMutError>" --type rust sdk-libs/sdk-types/

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

rg "From.*LightSdkTypesError.*for" --type rust sdk-libs/sdk-types/src/error.rs -A 15

Repository: Lightprotocol/light-protocol

Length of output: 2012


Provide explicit error mapping for Anchor borrow errors to match Pinocchio backend.

The quote! { ? } emits valid Rust, but lacks the explicit error handling present in the Pinocchio backend. Borrow failures should map to light_account::LightSdkTypesError::ConstraintViolation to align with how the codebase handles borrow errors throughout validation and decompression logic.

🐛 Proposed fix
-    fn borrow_error(&self) -> TokenStream {
-        quote! { ? }
-    }
+    fn borrow_error(&self) -> TokenStream {
+        quote! {
+            .map_err(|_| light_account::LightSdkTypesError::ConstraintViolation)?
+        }
+    }
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/backend.rs` around lines 134 - 136, The
current borrow_error() implementation returns a bare "?" which loses explicit
error mapping; update the borrow_error() function in backend.rs to emit code
that maps borrow failures into
light_account::LightSdkTypesError::ConstraintViolation (matching the Pinocchio
backend behavior), i.e., produce a TokenStream that converts the underlying
error into LightSdkTypesError::ConstraintViolation so borrow failures propagate
with that explicit error variant.

Comment on lines 379 to 401
} else {
    Ok(quote! {
        impl #enum_name {
            pub fn compress_dispatch<'info>(
                account_info: &anchor_lang::prelude::AccountInfo<'info>,
                meta: &#account_crate::account_meta::CompressedAccountMetaNoLamportsNoAddress,
                index: usize,
                ctx: &mut #account_crate::CompressCtx<'_, 'info>,
            ) -> std::result::Result<(), #sdk_error> {
                use #account_crate::LightDiscriminator;
                use borsh::BorshDeserialize;
                let data = account_info.try_borrow_data()#borrow_error;
                let discriminator: [u8; 8] = data[..8]
                    .try_into()
                    .map_err(|_| #sdk_error::InvalidInstructionData)?;
                match discriminator {
                    #(#compress_arms)*
                    _ => Ok(()),
                }
            }
        }
    }
})
})
}

🛠️ Refactor suggestion | 🟠 Major

Non-pinocchio branch ignores account_info_type fetched from the backend.

Line 320 fetches backend.account_info_type() and the pinocchio branch (line 361) uses it, but the non-pinocchio branch on line 383 hardcodes anchor_lang::prelude::AccountInfo<'info>. This undermines the backend abstraction — if a future backend provides a different AccountInfo type, this branch won't pick it up.

Suggested fix
-                    pub fn compress_dispatch<'info>(
-                        account_info: &anchor_lang::prelude::AccountInfo<'info>,
+                    pub fn compress_dispatch<'info>(
+                        account_info: &#account_info_type,
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 379 - 401,
The non-Pinocchio branch hardcodes AccountInfo type in impl
`#enum_name`::compress_dispatch instead of using the backend-provided type;
replace the hardcoded anchor_lang::prelude::AccountInfo<'info> with the
previously fetched backend.account_info_type() value (the result of
backend.account_info_type()) in the compress_dispatch signature and any uses,
ensuring the generated impl for `#enum_name` and references to `#account_crate`
still compile and updating imports/usages accordingly so the backend abstraction
is honored.

Comment on lines 413 to 437
if backend.is_pinocchio() {
    Ok(quote! {
        impl #enum_name {
            pub fn process_compress(
                accounts: &[pinocchio::account_info::AccountInfo],
                instruction_data: &[u8],
            ) -> std::result::Result<(), #program_error> {
                use borsh::BorshDeserialize;
                let params = #account_crate::CompressAndCloseParams::try_from_slice(instruction_data)
                    .map_err(|_| #program_error::InvalidInstructionData)?;
                #account_crate::process_compress_pda_accounts_idempotent(
                    accounts,
                    &params,
                    Self::compress_dispatch,
                    crate::LIGHT_CPI_SIGNER,
                    &crate::LIGHT_CPI_SIGNER.program_id,
                )
                .map_err(|e| #program_error::Custom(u32::from(e)))
            }
        }
} else {
    quote! {
        d if d == #name::LIGHT_DISCRIMINATOR => {
            let mut reader = &data[8..];
            let mut account_data = #name::deserialize(&mut reader)
                .map_err(|_| light_account_pinocchio::LightSdkTypesError::InvalidInstructionData)?;
            drop(data);
            light_account_pinocchio::prepare_account_for_compression(
                account_info, &mut account_data, meta, index, ctx,
            )
        }
    }
}
}).collect();

Ok(quote! {
    impl #enum_name {
        pub fn compress_dispatch(
            account_info: &pinocchio::account_info::AccountInfo,
            meta: &light_account_pinocchio::account_meta::CompressedAccountMetaNoLamportsNoAddress,
            index: usize,
            ctx: &mut light_account_pinocchio::CompressCtx<'_>,
        ) -> std::result::Result<(), light_account_pinocchio::LightSdkTypesError> {
            use light_account_pinocchio::LightDiscriminator;
            use borsh::BorshDeserialize;
            let data = account_info.try_borrow_data()
                .map_err(|_| light_account_pinocchio::LightSdkTypesError::Borsh)?;
            let discriminator: [u8; 8] = data[..8]
                .try_into()
                .map_err(|_| light_account_pinocchio::LightSdkTypesError::InvalidInstructionData)?;
            match discriminator {
                #(#compress_arms)*
                _ => Ok(()),
            }
        }
    }
})
})
} else {
    // Anchor version doesn't have this method on enum - it uses the separate processor
    Ok(quote! {})
}

🛠️ Refactor suggestion | 🟠 Major

Pinocchio branch also hardcodes pinocchio::account_info::AccountInfo on line 417.

Same pattern as the dispatch method — the backend provides account_info_type() but it's not used here. Prefer let account_info_type = backend.account_info_type(); and interpolate it for consistency.
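A sketch of the interpolation, assuming account_info_type() returns the backend's AccountInfo path as a TokenStream:

// Fetch the backend-specific type once and interpolate it, mirroring compress_dispatch.
let account_info_type = backend.account_info_type();
Ok(quote! {
    impl #enum_name {
        pub fn process_compress(
            accounts: &[#account_info_type],
            instruction_data: &[u8],
        ) -> std::result::Result<(), #program_error> {
            // ... body unchanged ...
        }
    }
})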

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 413 - 437,
The generated impl for `#enum_name`::process_compress hardcodes
pinocchio::account_info::AccountInfo instead of using the backend-provided type;
fix by obtaining let account_info_type = backend.account_info_type(); and
interpolate that symbol into the quoted code for the accounts parameter and any
other account-info occurrences, keeping the rest of the method (borsh
deserialization into `#account_crate`::CompressAndCloseParams, call to
`#account_crate`::process_compress_pda_accounts_idempotent, and error mapping)
unchanged.

Comment on lines 450 to 462
if backend.is_pinocchio() {
    // For pinocchio, all types use INIT_SPACE constant (no CompressedInitSpace trait)
    quote! {
        const _: () = {
            const COMPRESSED_SIZE: usize = 8 + #qualified_type::INIT_SPACE;
            assert!(
                COMPRESSED_SIZE <= 800,
                concat!(
                    "Compressed account '", stringify!(#qualified_type), "' exceeds 800-byte compressible account size limit"
                )
            );
        };
    }

⚠️ Potential issue | 🟡 Minor

Pinocchio size-check error message omits the contact guidance present in the other branches.

Non-pinocchio branches include "If you need support for larger accounts, send a message to team@lightprotocol.com" while the pinocchio branch just says it exceeds the limit. Developers hitting this on pinocchio will get a less helpful message. Worth keeping parity.
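Restoring parity could look like this, a sketch mirroring the snippet above:

const _: () = {
    const COMPRESSED_SIZE: usize = 8 + #qualified_type::INIT_SPACE;
    assert!(
        COMPRESSED_SIZE <= 800,
        concat!(
            "Compressed account '", stringify!(#qualified_type),
            "' exceeds 800-byte compressible account size limit.",
            " If you need support for larger accounts, send a message to team@lightprotocol.com"
        )
    );
};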

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/compress.rs` around lines 450 - 462,
The pinocchio branch in compress.rs builds an assert message that lacks the
contact guidance present in other branches; update the assert in the
backend.is_pinocchio() block (the const COMPRESSED_SIZE calculation using
`#qualified_type`::INIT_SPACE) so its panic string matches the other branches by
appending " If you need support for larger accounts, send a message to
team@lightprotocol.com" to the existing error text about exceeding the 800-byte
limit for compressed accounts.

/// Token account
Token(CompressedTokenAccount),
/// Compressed mint
Mint(CompressedAccount),

It seems like the Mint variant is treated the same as Account; is it necessary?

any_cold(&self.get_specs_for_instruction(ix))
}
cold_accounts: &[AccountInterface],
) -> Result<Vec<AccountSpec<Self::Variant>>, Box<dyn std::error::Error>>;

Box<dyn std::error::Error>: we should return a concrete error type instead


Suggested change
-) -> Result<Vec<AccountSpec<Self::Variant>>, Box<dyn std::error::Error>>;
+) -> Result<Vec<AccountSpec<Self::Variant>>, Self::Error>;
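Sketched out, the trait could carry the error as an associated type (a trimmed sketch; the bounds are assumptions):

pub trait LightProgramInterface {
    type Variant;
    /// Concrete error type surfaced to callers instead of Box<dyn Error>.
    type Error: std::error::Error + Send + Sync + 'static;

    fn load_specs(
        &self,
        cold_accounts: &[AccountInterface],
    ) -> Result<Vec<AccountSpec<Self::Variant>>, Self::Error>;
}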

@ananas-block

create_load_instructions Review

1. Error Context Erasure (line 478)

let mint_ix_data = MintInstructionData::try_from(mint_data)
    .map_err(|_| LoadAccountsError::BuildInstruction("invalid mint data".into()))?;

The |_| pattern discards the original TokenError. When conversion fails (e.g., TokenError::UnsupportedExtension), the caller only receives "invalid mint data" with no indication of the actual cause.

Suggestion: .map_err(|e| LoadAccountsError::BuildInstruction(format!("invalid mint data: {}", e)))

2. Generic MissingAtaContext Error (lines 298, 308, 316)

The same MissingAtaContext error is returned for three distinct failure modes:

  1. as_compressed_token() returns None (line 298)
  2. pod_from_bytes() fails to parse (line 308)
  3. ATA derivation mismatch (line 316)

Users cannot distinguish between corrupted data, wrong account type, or incorrect ATA address.

Suggestion: Split into distinct error variants like MissingAtaCompressedData, InvalidAtaPodData, AtaDerivationMismatch.
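A hypothetical shape for the split variants (thiserror is used here only for illustration):

#[derive(Debug, thiserror::Error)]
pub enum LoadAccountsError {
    #[error("cold ATA account carries no compressed token data")]
    MissingAtaCompressedData,
    #[error("failed to parse ATA pod data")]
    InvalidAtaPodData,
    #[error("derived ATA address does not match the stored account")]
    AtaDerivationMismatch,
    #[error("failed to build instruction: {0}")]
    BuildInstruction(String),
}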

3. Duplicate Code (lines 252 & 270)

program_id is extracted twice with identical logic:

// Line 252
let program_id = specs.first().map(|s| s.program_id()).unwrap_or_default();
// ... code ...
// Line 270
let program_id = specs.first().map(|s| s.program_id()).unwrap_or_default();
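A minimal hoist that removes the duplication:

// Extract once near the top of the function and reuse at both former call sites.
let program_id = specs.first().map(|s| s.program_id()).unwrap_or_default();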

refactor

remove fetch-accounts

renaming, simplify trait

format

excl photon-api submodule

fix: multi-pass cold account lookup in test indexer RPC

Align get_account_interface and get_multiple_account_interfaces with
Photon's lookup strategy: search compressed_accounts by onchain_pubkey,
then by PDA seed derivation, then token_compressed_accounts, then by
token_data.owner. Also fix as_mint() to accept ColdContext::Account
since Photon returns mints as generic compressed accounts.

Co-authored-by: Cursor <cursoragent@cursor.com>

lint

fix lint

fix: reject rent sponsor self-referencing the token account (#2257)

* fix: reject rent sponsor self-referencing the token account

Audit issue #9 (INFO): The rent payer could be the same account as
the target token account being created. Add a check that rejects
this self-reference to prevent accounting issues.

* test: add failing test rent sponsor self reference

fix: process metadata add/remove actions in sequential order (#2256)

* fix: process metadata add/remove actions in sequential order

Audit issue #16 (LOW): should_add_key checked for any add and any
remove independently, ignoring action ordering. An add-remove-add
sequence would incorrectly remove the key. Process actions
sequentially so the final state reflects the actual order.

* chore: format

* test: add randomized test for metadata action processing

Validates that process_extensions_config_with_actions produces correct
AdditionalMetadataConfig for random sequences of UpdateMetadataField
and RemoveMetadataKey actions, covering the add-remove-add bug from
audit issue #16.

* test: add integration test for audit issue #13 (no double rent charge)

Verifies that two compress operations targeting the same compressible
CToken account in a single Transfer2 instruction do not double-charge
the rent top-up budget.

* chore: format extensions_metadata test
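The sequential-processing idea behind the #2256 fix, as a self-contained sketch (the action and function names here are hypothetical):

enum MetadataAction {
    UpdateField { key: String, value: String },
    RemoveKey { key: String },
}

// Fold actions in order so the last action touching a key wins; checking
// "any add" and "any remove" independently breaks on add-remove-add.
fn key_survives(actions: &[MetadataAction], key: &str) -> bool {
    let mut present = false;
    for action in actions {
        match action {
            MetadataAction::UpdateField { key: k, .. } if k.as_str() == key => present = true,
            MetadataAction::RemoveKey { key: k } if k.as_str() == key => present = false,
            _ => {}
        }
    }
    present
}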

fix: validate authority on self-transfer early return (#2252)

* fix: handle self-transfer in ctoken transfer and transfer_checked

Validate that the authority is a signer and is the owner or delegate
before allowing self-transfer early return. Previously the self-transfer
path returned Ok(()) without any authority validation.

* fix: simplify map_or to is_some_and per clippy

* fix: use pubkey_eq for self-transfer check

* refactor: extract self-transfer validation into shared function

Extract duplicate self-transfer check from default.rs and checked.rs
into validate_self_transfer() in shared.rs with cold path for authority
validation.

* chore: format

* fix: deduplicate random metadata keys in test_random_mint_action

Random key generation could produce duplicate keys, causing
DuplicateMetadataKey error (18040) with certain seeds.

fix: enforce mint extension checks in cToken-to-cToken decompress (#2246)

* fix: enforce mint extension checks in cToken-to-cToken decompress hot path

Add enforce_extension_state() to MintExtensionChecks and call it in the
Decompress branch when decompress_inputs is None (hot-path, not
CompressedOnly restore). This prevents cToken-to-cToken transfers from
bypassing pause, transfer fee, and transfer hook restrictions.

* fix test

chore: reject compress for mints with restricted extensions in build_mint_extension_cache (#2240)

* chore: reject compress for mints with restricted extensions in mint check

* Update programs/compressed-token/program/src/compressed_token/transfer2/check_extensions.rs

Co-authored-by: 0xa5df-c <172008956+0xa5df-c@users.noreply.github.com>

* fix: format else-if condition for lint

---------

Co-authored-by: 0xa5df-c <172008956+0xa5df-c@users.noreply.github.com>

fix: token-pool index 0 check (#2239)

fix(programs): add MintCloseAuthority as restricted extension (M-03) (#2263)

* fix: add MintCloseAuthority as restricted extension (M-03)

A mint with MintCloseAuthority can be closed and re-opened with
different extensions. Treating it as restricted ensures compressed
tokens from such mints require CompressedOnly mode.

* test: add MintCloseAuthority compression_only requirement tests

Add test coverage for MintCloseAuthority requiring compression_only mode,
complementing the fix in f2da063.

refactor: light program pinocchio macro (#2247)

* refactor: light program pinocchio macro

* fix: address CodeRabbit review comments on macro codegen

- Fix account order bug in process_update_config (config=0, authority=1)
- Use backend-provided serialize/deserialize derives in LightAccountData
- Remove redundant is_pinocchio() branch for has_le_bytes unpack fields
- DRY doc attribute generation across 4 struct generation methods
- Unify unpack_data path using account_crate re-export for both backends

chore(libs): bump versions (#2272)

fix(programs): allow account-level delegate to compress CToken (M-02) (#2262)

* fix: allow account-level delegate to compress tokens from CToken (M-02)

check_ctoken_owner() only checked owner and permanent delegate.
An account-level delegate (approved via CTokenApprove) could not
compress tokens. Added delegate check after permanent delegate.

* test: compress by delegate

fix: accumulate delegated amount at decompression (#2242)

* fix: accumulate delegated amount at decompression

* fix lint

* refactor: simplify apply_delegate to single accumulation path

* fix: ignore delegated_amount without delegate

* restore decompress amount check

fix programtest, wallet owner tracking for ata

fmt and lint

upd amm test

simplify client usage, remove unnecessary endpoints

clean

cleanup

lint