From 772bb937bb5a844a8a0e40a6e5aea2a508047df7 Mon Sep 17 00:00:00 2001 From: HashEngineering Date: Mon, 12 Jan 2026 12:06:06 -0800 Subject: [PATCH 1/9] docs: add blockchain-sync-bip37-dip16.md document --- designdocs/blockchain-sync-bip37-dip16.md | 1020 +++++++++++++++++++++ 1 file changed, 1020 insertions(+) create mode 100644 designdocs/blockchain-sync-bip37-dip16.md diff --git a/designdocs/blockchain-sync-bip37-dip16.md b/designdocs/blockchain-sync-bip37-dip16.md new file mode 100644 index 000000000..2a460bfaa --- /dev/null +++ b/designdocs/blockchain-sync-bip37-dip16.md @@ -0,0 +1,1020 @@ +# Blockchain Synchronization with BIP37 and DIP-16 in DashJ + +## Overview + +This document explains the blockchain synchronization process implemented in DashJ through the `PeerGroup` and `Peer` classes, following: +- **BIP37** (Bloom Filtering) for Simplified Payment Verification (SPV) clients +- **DIP-16** (Headers-First Synchronization) for efficient Dash-specific sync with masternode and quorum support + +DashJ implements a sophisticated multi-stage synchronization strategy that downloads block headers first, then masternode lists and LLMQ quorums, and finally retrieves filtered block bodies based on wallet bloom filters. + +## Table of Contents + +1. [Architecture Overview](#architecture-overview) +2. [DIP-16 Headers-First Synchronization](#dip-16-headers-first-synchronization) +3. [BIP37 Bloom Filter Implementation](#bip37-bloom-filter-implementation) +4. [Key Classes and Components](#key-classes-and-components) +5. [Synchronization Process Flow](#synchronization-process-flow) +6. [Fast Catchup Optimization](#fast-catchup-optimization) +7. [Filter Exhaustion Handling](#filter-exhaustion-handling) +8. [Thread Safety](#thread-safety) + +--- + +## Architecture Overview + +The blockchain synchronization architecture in DashJ consists of two primary classes: + +- **`PeerGroup`** (`core/src/main/java/org/bitcoinj/core/PeerGroup.java`) - Manages multiple peer connections, coordinates blockchain download, and maintains merged bloom filters +- **`Peer`** (`core/src/main/java/org/bitcoinj/core/Peer.java`) - Handles individual peer communication, executes block/header downloads, and processes filtered blocks + +### High-Level Responsibilities + +**PeerGroup:** +- Maintains a pool of peer connections +- Selects and manages the "download peer" for chain synchronization +- Merges bloom filters from all registered wallets using `FilterMerger` +- Distributes updated filters to all connected peers +- Coordinates chain download restart on peer disconnection + +**Peer:** +- Executes the actual blockchain download protocol +- Maintains download state (headers vs. bodies mode) +- Processes filtered blocks and matching transactions +- Detects filter exhaustion and triggers recalculation + +--- + +## DIP-16 Headers-First Synchronization + +[DIP-16](https://github.com/dashpay/dips/blob/master/dip-0016.md) defines the "Headers-First Synchronization" process for Dash wallets, enabling efficient sync by retrieving blockchain data in stages. This is particularly important for Dash because wallets need masternode quorum information before determining which transactions to request. + +### Why Headers-First for Dash? 
+ +Unlike Bitcoin, Dash has additional blockchain data beyond blocks and transactions: +- **Masternode Lists**: Deterministic masternode list (DIP-3) tracking active masternodes +- **LLMQ Quorums**: Long-Living Masternode Quorums (DIP-6) for InstantSend and ChainLocks +- **Governance Objects**: Proposals and votes managed by masternodes + +Headers-first synchronization allows wallets to: +1. Quickly establish the blockchain height and tip +2. Retrieve masternode and quorum data needed for transaction validation +3. Only then request filtered block bodies for relevant wallet transactions + +### Sync Stages + +DashJ implements six distinct synchronization stages (PeerGroup.java:192-204): + +```java +public enum SyncStage { + OFFLINE(0), // No sync in progress, no peers connected + HEADERS(1), // Downloading block headers only (80 bytes each) + MNLIST(2), // Downloading simplified masternode lists (mnlistdiff) + PREBLOCKS(3), // Pre-processing blocks (LLMQ validation, Platform queries) + BLOCKS(4), // Downloading full block bodies (filtered via BIP37) + COMPLETE(5); // Sync complete, monitoring for new blocks +} +``` + +### Stage-by-Stage Synchronization Flow + +#### Stage 1: HEADERS - Block Header Download + +**Objective**: Download all block headers from genesis to chain tip + +**Implementation** (Peer.java:1782-1804): +```java +public void startBlockChainHeaderDownload() { + vDownloadHeaders = true; + final int blocksLeft = getPeerBlockHeightDifference(); + if (blocksLeft >= 0) { + // Fire HeadersDownloadStartedEventListener + lock.lock(); + try { + blockChainHeaderDownloadLocked(Sha256Hash.ZERO_HASH); + } finally { + lock.unlock(); + } + } +} +``` + +**Process**: +1. Send `GetHeadersMessage` with block locator (last 100 headers + exponential backoff) +2. Receive up to 2000 headers per response (protocol version 70218) +3. Add headers to separate `headerChain` (not `blockChain`) +4. Validate headers against checkpoints +5. Continue until receiving fewer than `MAX_HEADERS` (sync complete) + +**Header Processing** (Peer.java:724-838): +```java +protected void processHeaders(HeadersMessage m) throws ProtocolException { + if (vDownloadHeaders && headerChain != null) { + for (Block header : m.getBlockHeaders()) { + if (!headerChain.add(header)) { + log.info("Received bad header - try again"); + blockChainHeaderDownloadLocked(Sha256Hash.ZERO_HASH); + return; + } + } + + if (m.getBlockHeaders().size() < HeadersMessage.MAX_HEADERS) { + system.triggerHeadersDownloadComplete(); // Move to next stage + } else { + blockChainHeaderDownloadLocked(Sha256Hash.ZERO_HASH); // Request more + } + } +} +``` + +**Benefits**: +- Fast: Headers are only ~80 bytes vs. blocks which can be MBs +- Establishes blockchain height quickly +- Enables checkpoint verification +- Provides chain tip for masternode list queries + +#### Stage 2: MNLIST - Masternode List Download + +**Objective**: Synchronize the deterministic masternode list and LLMQ quorums + +**Implementation** (Peer.java:851-876): +```java +public void startMasternodeListDownload() { + try { + StoredBlock masternodeListBlock = headerChain.getChainHead().getHeight() != 0 ? 
headerChain.getBlockStore().get(
                headerChain.getBestChainHeight() - SigningManager.SIGN_HEIGHT_OFFSET) :
            blockChain.getBlockStore().get(
                blockChain.getBestChainHeight() - SigningManager.SIGN_HEIGHT_OFFSET);

        if (system.masternodeListManager.getListAtChainTip().getHeight() <
                masternodeListBlock.getHeight()) {
            if (system.masternodeListManager.requestQuorumStateUpdate(
                    this, headerChain.getChainHead(), masternodeListBlock)) {
                queueMasternodeListDownloadedListeners(
                    MasternodeListDownloadedListener.Stage.Requesting, null);
            }
        } else {
            system.triggerMnListDownloadComplete();
        }
    } catch (BlockStoreException x) {
        system.triggerMnListDownloadComplete();
    }
}
```

**Masternode List Diff Structure** (SimplifiedMasternodeListDiff.java:15-61):

The `mnlistdiff` message contains incremental updates to the masternode list:

```java
public class SimplifiedMasternodeListDiff extends AbstractDiffMessage {
    private Sha256Hash prevBlockHash;      // Previous block hash
    private Sha256Hash blockHash;          // Current block hash
    PartialMerkleTree cbTxMerkleTree;      // Coinbase tx merkle proof
    Transaction coinBaseTx;                // Coinbase transaction

    // Masternode list updates
    protected HashSet<Sha256Hash> deletedMNs;                  // Removed masternodes
    protected ArrayList<SimplifiedMasternodeListEntry> mnList; // Added/updated MNs

    // LLMQ quorum updates (DIP-4)
    protected ArrayList<Pair<Integer, Sha256Hash>> deletedQuorums;
    protected ArrayList<FinalCommitment> newQuorums;
    protected HashMap<BLSSignature, HashSet<Integer>> quorumsCLSigs; // ChainLock sigs
}
```

**Process**:
1. Request `mnlistdiff` from current masternode list height to chain tip
2. Apply deletions and additions to local masternode list
3. Validate LLMQ quorum commitments (BLS signatures)
4. Update quorum rotation state for InstantSend/ChainLock validation
5. Trigger completion when masternode list reaches chain tip height

**Why This Matters**:
- InstantSend requires knowing active quorums to validate locks
- ChainLocks require quorum public keys for signature verification
- Governance requires knowing which masternodes can vote

#### Stage 3: PREBLOCKS - Pre-Block Processing (Optional)

**Objective**: Perform application-specific preprocessing before block download

This stage is optional and activated via the `SYNC_BLOCKS_AFTER_PREPROCESSING` flag.

**Use Cases**:
- Dash Platform identity queries
- Additional LLMQ validation
- Governance object synchronization
- Application-specific state preparation

**Implementation** (PeerGroup.java:2387-2410):
```java
mnListDownloadedCallback = new FutureCallback<Integer>() {
    @Override
    public void onSuccess(@Nullable Integer listsSynced) {
        if (flags.contains(SYNC_BLOCKS_AFTER_PREPROCESSING)) {
            setSyncStage(SyncStage.PREBLOCKS);
            queuePreBlockDownloadListeners(peer);
        } else {
            setSyncStage(SyncStage.BLOCKS);
            peer.startBlockChainDownload();
        }
    }
};
```

#### Stage 4: BLOCKS - Block Body Download (with BIP37 Filtering)

**Objective**: Download full block bodies filtered by wallet bloom filters

This stage combines DIP-16's headers-first approach with BIP37 bloom filtering.

**Transition from Headers to Blocks** (Peer.java:1806-1811):
```java
public void continueDownloadingBlocks() {
    if (vDownloadHeaders) {
        setDownloadHeaders(false);   // Disable header-only mode
        startBlockChainDownload();   // Start full block download
    }
}
```

**Process**:
1. Bloom filter already set during peer connection (see BIP37 section)
2. Switch from `GetHeadersMessage` to `GetBlocksMessage`
3. 
Request filtered blocks (`MSG_FILTERED_BLOCK`) instead of full blocks
4. Receive `MerkleBlock` (header + partial merkle tree) + matching transactions
5. Validate transactions and add to wallet
6. Continue until block chain catches up with header chain

**Conditional Logic** (PeerGroup.java:2481-2524):
```java
if (flags.contains(MasternodeSync.SYNC_FLAGS.SYNC_HEADERS_MN_LIST_FIRST)) {
    if (peer.getBestHeight() > headerChain.getChainHead().getHeight() &&
            syncStage.value <= SyncStage.HEADERS.value) {
        // STAGE 1: Download headers
        setSyncStage(SyncStage.HEADERS);
        peer.startBlockChainHeaderDownload();

    } else if (syncStage.value == SyncStage.MNLIST.value) {
        // STAGE 2: Download masternode lists
        peer.startMasternodeListDownload();

    } else if (flags.contains(SYNC_BLOCKS_AFTER_PREPROCESSING) &&
            syncStage.value < SyncStage.PREBLOCKS.value) {
        // STAGE 3: Pre-process blocks
        setSyncStage(SyncStage.PREBLOCKS);
        queuePreBlockDownloadListeners(peer);

    } else {
        // STAGE 4: Download full block bodies
        setSyncStage(SyncStage.BLOCKS);
        peer.startBlockChainDownload();
    }
}
```

#### Stage 5: COMPLETE - Ongoing Synchronization

**Objective**: Monitor for new blocks and maintain sync state

Once initial sync completes:
- Listen for new block announcements via `inv` messages
- Validate InstantSend locks using synchronized quorum data
- Verify ChainLocks using quorum signatures
- Process governance proposals and votes
- Maintain masternode list updates

### Event-Driven Stage Transitions

Stage transitions are managed via `ListenableFuture` callbacks (PeerGroup.java:2366-2430):

```java
// Headers download completion callback
headersDownloadedCallback = new FutureCallback<Boolean>() {
    @Override
    public void onSuccess(@Nullable Boolean aBoolean) {
        log.info("Stage header download completed successfully");
        if (aBoolean) {
            peer.setDownloadHeaders(false);
            setSyncStage(SyncStage.MNLIST);  // Transition to masternode list sync
            peer.startMasternodeListDownload();
        }
    }

    @Override
    public void onFailure(Throwable throwable) {
        log.info("Stage header download failed");
        peer.setDownloadHeaders(false);
        setSyncStage(SyncStage.BLOCKS);  // Fall back to direct block download
        peer.startBlockChainDownload();
    }
};

// Masternode list completion callback
mnListDownloadedCallback = new FutureCallback<Integer>() {
    @Override
    public void onSuccess(@Nullable Integer listsSynced) {
        if (flags.contains(SYNC_BLOCKS_AFTER_PREPROCESSING)) {
            setSyncStage(SyncStage.PREBLOCKS);
            queuePreBlockDownloadListeners(peer);
        } else {
            setSyncStage(SyncStage.BLOCKS);
            peer.startBlockChainDownload();  // Transition to block download
        }
    }
};
```

### Checkpoint-Based Security

DIP-16 leverages hardcoded checkpoints to prevent deep fork attacks during initial sync (AbstractBlockChain.java:511-513):

```java
// Check that we aren't connecting a block that fails a checkpoint check
if (!params.passesCheckpoint(storedPrev.getHeight() + 1, block.getHash()))
    throw new VerificationException("Block failed checkpoint lockin at " +
        (storedPrev.getHeight() + 1));
```

**Checkpoint Structure**:
- Block height
- Block hash
- Timestamp
- Target difficulty
- Aggregated chainwork

**Benefits**:
- Fast validation of headers without full PoW verification
- Protection against long-range attacks
- Reduced computational requirements for SPV clients

### Download Progress Tracking

DIP-16 sync progress is weighted across stages 
(DownloadProgressTracker.java:54-60): + +```java +private static final double SYNC_HEADERS = 0.30; // 30% of total sync +private static final double SYNC_MASTERNODE_LIST = 0.05; // 5% of total sync +private static final double SYNC_PREDOWNLOAD = 0.05; // 5% of total sync +public double blocksWeight; // 60% of total sync (default) + +double progress = headersWeight * percentHeaders + + mnListWeight * percentMnList + + preBlocksWeight * percentPreBlocks + + blocksWeight * percentBlocks; +``` + +This provides accurate progress reporting across all sync stages. + +### Enabling Headers-First Synchronization + +**Configuration Flags** (MasternodeSync.java:87-88): + +```java +public static final EnumSet SYNC_DEFAULT_SPV_HEADERS_FIRST = + EnumSet.of(SYNC_MASTERNODE_LIST, // Sync masternode lists + SYNC_QUORUM_LIST, // Sync LLMQ quorums + SYNC_CHAINLOCKS, // Validate ChainLocks + SYNC_INSTANTSENDLOCKS, // Validate InstantSend locks + SYNC_SPORKS, // Sync network sporks + SYNC_HEADERS_MN_LIST_FIRST); // Enable headers-first mode +``` + +**Activation** (WalletAppKit.java:142): + +```java +vSystem.masternodeSync.addSyncFlag( + MasternodeSync.SYNC_FLAGS.SYNC_HEADERS_MN_LIST_FIRST); +``` + +### DIP-16 Complete Synchronization Flow + +``` +┌─────────────────────────────────────────────────────────────────┐ +│ STAGE 0: OFFLINE │ +│ - No peers connected │ +│ - Waiting for network │ +└────────────────────────────────┬────────────────────────────────┘ + │ + Peer connects & version handshake complete + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ STAGE 1: HEADERS │ +│ - Send GetHeadersMessage with block locator │ +│ - Receive up to 2000 headers per HeadersMessage │ +│ - Add to headerChain (separate from blockChain) │ +│ - Validate against checkpoints │ +│ - Progress: ~80 bytes per block header │ +│ │ +│ Completion: m.getBlockHeaders().size() < MAX_HEADERS │ +└────────────────────────────────┬────────────────────────────────┘ + │ + system.triggerHeadersDownloadComplete() + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ STAGE 2: MNLIST │ +│ - Request mnlistdiff from masternodeListManager │ +│ - Download SimplifiedMasternodeListDiff messages │ +│ - Apply masternode additions/deletions │ +│ - Validate LLMQ quorum commitments (BLS signatures) │ +│ - Update quorum rotation state │ +│ │ +│ Completion: MN list height >= headerChain tip height │ +└────────────────────────────────┬────────────────────────────────┘ + │ + system.triggerMnListDownloadComplete() + │ + ▼ +┌─────────────────────────────────────────────────────────────────┐ +│ STAGE 3: PREBLOCKS (Optional) │ +│ - Platform identity queries │ +│ - Additional LLMQ validation │ +│ - Governance object sync │ +│ - Application-specific preprocessing │ +│ │ +│ Completion: Application-defined criteria │ +└────────────────────────────────┬────────────────────────────────┘ + │ + queuePreBlockDownloadListeners() + │ + ▼ +┌──────────────────────────────────────────────────────────────────┐ +│ STAGE 4: BLOCKS │ +│ - setDownloadHeaders(false) │ +│ - Send GetBlocksMessage with block locator │ +│ - Request MSG_FILTERED_BLOCK (BIP37 bloom filtering) │ +│ - Receive MerkleBlock + matching transactions │ +│ - Validate transactions, add to wallet │ +│ - Process InstantSend locks (validated via LLMQ data) │ +│ - Verify ChainLocks (validated via quorum signatures) │ +│ │ +│ Completion: blockChain height == headerChain height │ +└────────────────────────────────┬─────────────────────────────────┘ + │ + Chain sync 
complete event + │ + ▼ +┌──────────────────────────────────────────────────────────────────┐ +│ STAGE 5: COMPLETE │ +│ - Monitor for new block inv messages │ +│ - Validate InstantSend locks on new transactions │ +│ - Verify ChainLocks on new blocks │ +│ - Process governance proposals/votes │ +│ - Maintain masternode list with incremental updates │ +└──────────────────────────────────────────────────────────────────┘ +``` + +### Key DIP-16 Implementation Files + +| Component | File | Key Lines | +|-----------|------|-----------| +| **SyncStage Enum** | PeerGroup.java | 192-204 | +| **Headers Download** | Peer.java | 1782-1804 | +| **Header Processing** | Peer.java | 724-838 | +| **MN List Download** | Peer.java | 851-876 | +| **Stage Transitions** | PeerGroup.java | 2366-2430, 2481-2524 | +| **MN List Diff Structure** | SimplifiedMasternodeListDiff.java | 15-100 | +| **Quorum State Management** | QuorumState.java | 47-180 | +| **Download Progress** | DownloadProgressTracker.java | 42-282 | +| **Checkpoint Validation** | AbstractBlockChain.java | 511-513 | +| **Sync Flags Configuration** | MasternodeSync.java | 57-88 | + +--- + +## BIP37 Bloom Filter Implementation + +BIP37 enables lightweight SPV clients to request only transactions matching a bloom filter, reducing bandwidth and storage requirements. + +### Bloom Filter Lifecycle + +1. **Filter Creation** + - `PeerGroup` aggregates all filter providers (wallets) via `FilterMerger.calculate()` + - Each wallet contributes: + - Watched addresses and public keys + - Output scripts + - Transaction outpoints + - Element count is "stair-stepped" (rounded up by 100) to reduce filter regeneration frequency + +2. **Filter Distribution** + ``` + PeerGroup → FilterMerger.calculate() → BloomFilter + ↓ + Peer.setBloomFilter(filter, andQueryMemPool) + ↓ + Send: FilterLoadMessage to remote peer + Send: MemPoolMessage (if andQueryMemPool=true) + ``` + +3. **Filter Parameters** (from `BloomFilter` class) + - **Maximum size**: 36,000 bytes + - **Maximum hash functions**: 50 + - **Configurable false positive rate**: Default set via `DEFAULT_BLOOM_FILTER_FP_RATE` + - **Tweak**: Random value maintained across filter updates for privacy + - **Update flags**: + - `UPDATE_NONE`: Don't auto-update filter + - `UPDATE_ALL`: Update filter for all matching outputs + - `UPDATE_P2PUBKEY_ONLY`: Update only for P2PK/P2PKH outputs + +### Filter Recalculation Triggers + +Filter recalculation occurs when: + +1. **New keys added to wallet** (`walletKeyEventListener`) +2. **Scripts change** (`walletScriptEventListener`) +3. **P2PK outputs received** (`walletCoinsReceivedEventListener`) +4. **False positive rate exceeds threshold** (`peerListener.onBlocksDownloaded()`) + - Threshold: `bloomFilterFPRate * MAX_FP_RATE_INCREASE` +5. 
**Manual request** via `PeerGroup.setBloomFilterFalsePositiveRate()` + +### Recalculation Modes + +The `recalculateFastCatchupAndFilter()` method supports three modes: + +| Mode | Description | +|------|-------------| +| `SEND_IF_CHANGED` | Send new filter only if contents changed | +| `DONT_SEND` | Recalculate but don't broadcast to peers | +| `FORCE_SEND_FOR_REFRESH` | Always send, even if unchanged (for high FP rate mitigation) | + +--- + +## Key Classes and Components + +### Core Classes + +| Class | Location | Purpose | +|-------|----------|---------| +| `Peer` | `core/src/main/java/org/bitcoinj/core/Peer.java` | Individual peer connection handling | +| `PeerGroup` | `core/src/main/java/org/bitcoinj/core/PeerGroup.java` | Peer pool management and coordination | +| `BloomFilter` | `core/src/main/java/org/bitcoinj/core/BloomFilter.java` | BIP37 bloom filter implementation | +| `FilterMerger` | `core/src/main/java/org/bitcoinj/net/FilterMerger.java` | Merges filters from multiple providers | +| `FilteredBlock` | `core/src/main/java/org/bitcoinj/core/FilteredBlock.java` | Block with partial merkle tree | +| `PeerFilterProvider` | `core/src/main/java/org/bitcoinj/core/PeerFilterProvider.java` | Interface for filter generation | + +### Key State Variables + +**Peer Class:** +```java +@GuardedBy("lock") private boolean downloadBlockBodies = true; +@GuardedBy("lock") private boolean useFilteredBlocks = false; +private volatile BloomFilter vBloomFilter; +private volatile boolean vDownloadData = true; +private volatile boolean vDownloadHeaders = false; +@Nullable private FilteredBlock currentFilteredBlock; +@GuardedBy("lock") @Nullable private List awaitingFreshFilter; +@GuardedBy("lock") private Sha256Hash lastGetBlocksBegin, lastGetBlocksEnd; +@GuardedBy("lock") private long fastCatchupTimeSecs; +``` + +**PeerGroup Class:** +```java +@GuardedBy("lock") private Peer downloadPeer; +private final FilterMerger bloomFilterMerger; +@GuardedBy("lock") @Nullable private PeerDataEventListener downloadListener; +``` + +--- + +## Synchronization Process Flow + +### 1. Initialization Phase + +``` +Application + ↓ +PeerGroup.start() + ↓ +Connect to peers + ↓ +PeerGroup.handleNewPeer(peer) + ├─→ Calculate merged bloom filter + ├─→ Send filter via peer.setBloomFilter() + └─→ Select download peer (if first peer or > maxConnections/2) +``` + +### 2. Download Peer Selection + +When a download peer is selected (`PeerGroup.startBlockChainDownloadFromPeer()`): + +``` +PeerGroup.startBlockChainDownloadFromPeer(peer) + ├─→ Set downloadPeer = peer + ├─→ Start ChainDownloadSpeedCalculator + └─→ Initiate sync based on sync stage: + ├─→ BLOCKS: peer.startBlockChainDownload() + └─→ PREBLOCKS: Queue pre-block download listeners +``` + +### 3. 
Block Chain Download Process + +**Core method: `Peer.blockChainDownloadLocked(Sha256Hash toHash)`** + +This method implements the iterative blockchain download: + +``` +blockChainDownloadLocked(toHash) + ↓ +Build BlockLocator (last 100 block headers from chain head) + ↓ +Check downloadBlockBodies flag + ├─→ true: Send GetBlocksMessage (requests block bodies) + └─→ false: Send GetHeadersMessage (requests headers only) + ↓ +Peer responds with InvMessage (up to 500 blocks) + ↓ +Peer.processInv() → Send GetDataMessage + ├─→ If useFilteredBlocks: Request MSG_FILTERED_BLOCK + └─→ Else: Request MSG_BLOCK + ↓ +Peer sends FilteredBlock or Block + ↓ +Process and add to chain + ↓ +If orphan detected → blockChainDownloadLocked(orphanRoot.hash) + ↓ +Repeat until synchronized +``` + +**Duplicate Request Prevention:** + +The variables `lastGetBlocksBegin` and `lastGetBlocksEnd` track the most recent `getblocks`/`getheaders` request to avoid redundant requests: + +```java +if (Objects.equals(lastGetBlocksBegin, chainHeadHash) && + Objects.equals(lastGetBlocksEnd, toHash)) { + log.info("blockChainDownloadLocked({}): ignoring duplicated request", toHash); + return; +} +``` + +### 4. Filtered Block Processing + +When a `MerkleBlockMessage` arrives (`Peer.processFilteredBlock()`): + +``` +Start: FilteredBlock received + ↓ +Peer.startFilteredBlock(filteredBlock) + ├─→ Set currentFilteredBlock + └─→ Initialize matching transactions list + ↓ +Receive matching transactions + ↓ +Peer.endFilteredBlock(filteredBlock) + ├─→ Check for filter exhaustion + ├─→ If exhausted: + │ ├─→ Add block hash to awaitingFreshFilter + │ ├─→ Discard block + │ └─→ Wait for new filter (restart via setBloomFilter()) + └─→ Else: Add block to chain + ↓ +Invoke onBlocksDownloaded listeners + ↓ +Check if more blocks needed → blockChainDownloadLocked() +``` + +### 5. Block Locator Construction + +The block locator is a critical component for efficient sync: + +``` +Build locator starting from chain head: + Add last 100 blocks sequentially + Then exponential backoff: + step *= 2 each iteration + Add block at (height - step) + Continue until genesis block +Result: [head, head-1, ..., head-99, head-101, head-105, ..., genesis] +``` + +This structure allows peers to find the common ancestor quickly, even after long disconnections. + +### 6. Protocol Messages Flow + +**BIP37 Message Sequence:** + +``` +Client (DashJ) Peer (Dash Core Node) + | | + |--- FilterLoadMessage ----------------->| + | (bloom filter parameters) | + | | + |--- MemPoolMessage -------------------->| + | (optional, query mempool) | + | | + |<-- FilteredBlock or Inv ---------------| + | (matching unconfirmed txs) | + | | + |--- GetBlocksMessage ------------------>| + | (block locator, stop hash) | + | | + |<-- InvMessage -------------------------| + | (up to 500 block hashes) | + | | + |--- GetDataMessage --------------------->| + | (MSG_FILTERED_BLOCK for each hash) | + | | + |<-- MerkleBlockMessage -----------------| + | (block header + partial merkle tree)| + | | + |<-- TxMessage ---------------------------| + | (matching transactions) | + | | + |--- GetBlocksMessage ------------------>| + | (continue download) | + | | + [Repeat until chain synchronized] +``` + +--- + +## Fast Catchup Optimization + +Fast catchup allows wallets to skip downloading full block data for blocks created before the wallet's earliest key creation time. 
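To make this concrete, here is a minimal sketch of wiring fast catchup to the wallet's key birth time using the standard `PeerGroup.setFastCatchupTimeSecs()` and `Wallet.getEarliestKeyCreationTime()` APIs (the helper class and the one-week slack constant are illustrative; the slack mirrors the "earliest key time - 1 week" rule described below):

```java
import org.bitcoinj.core.PeerGroup;
import org.bitcoinj.wallet.Wallet;

public class FastCatchupSetup {
    // One week of slack before the earliest key, mirroring the rule described below.
    private static final long WEEK_SECS = 7 * 24 * 60 * 60;

    /** Skip downloading block bodies that predate the wallet's keys. */
    public static void configure(PeerGroup peerGroup, Wallet wallet) {
        long earliestKeyTime = wallet.getEarliestKeyCreationTime(); // seconds since epoch
        // PeerGroup propagates this to each Peer via setDownloadParameters(...).
        peerGroup.setFastCatchupTimeSecs(earliestKeyTime - WEEK_SECS);
    }
}
```

Note that `PeerGroup.addWallet()` recalculates the fast catchup time automatically for registered wallets; the explicit call above is only needed when managing the catchup time manually.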
### Configuration

Set via `Peer.setDownloadParameters(long fastCatchupTimeSecs, boolean useFilteredBlocks)`:

```java
public void setDownloadParameters(long secondsSinceEpoch, boolean useFilteredBlocks) {
    lock.lock();
    try {
        this.fastCatchupTimeSecs = secondsSinceEpoch;
        this.useFilteredBlocks = useFilteredBlocks;
    } finally {
        lock.unlock();
    }
}
```

### Fast Catchup Process

1. **Initial State:**
   - `downloadBlockBodies = false` (header-only mode)
   - `fastCatchupTimeSecs` set to earliest wallet key time - 1 week

2. **During Header Download:**
   - `Peer.blockChainHeaderDownloadLocked()` sends `GetHeadersMessage`
   - Headers processed in `processHeadersMessage()`
   - Block bodies NOT downloaded

3. **Transition to Full Download:**

   When a header's timestamp exceeds `fastCatchupTimeSecs`:

   ```java
   if (header.getTimeSeconds() >= fastCatchupTimeSecs) {
       log.info("Passed the fast catchup time ({})...",
           Utils.dateTimeFormat(fastCatchupTimeSecs * 1000));
       this.downloadBlockBodies = true;
       this.lastGetBlocksBegin = Sha256Hash.ZERO_HASH;  // Prevent duplicate detection
       blockChainDownloadLocked(Sha256Hash.ZERO_HASH);
       return;
   }
   ```

4. **Full Block Download:**
   - Switch to `GetBlocksMessage` (full blocks or filtered blocks)
   - Process transactions matching bloom filter

### Benefits

- **Reduced bandwidth**: Headers are ~80 bytes vs. full blocks (can be MBs)
- **Faster initial sync**: Skip irrelevant historical data
- **Lower storage**: Don't store transactions before wallet creation

---

## Filter Exhaustion Handling

Filter exhaustion occurs when a wallet generates new keys during sync, making the current bloom filter incomplete.

### Detection

Implemented in `Peer.checkForFilterExhaustion(FilteredBlock m)`:

```java
private boolean checkForFilterExhaustion(FilteredBlock m) {
    for (Wallet wallet : wallets) {
        if (wallet.checkForFilterExhaustion(m)) {
            return true;
        }
    }
    return false;
}
```

Wallet checks if:
- New keys were added since filter was sent
- Received block might contain transactions for those new keys

### Handling Process

When exhaustion detected:

```
Filter Exhaustion Detected
    ↓
Set awaitingFreshFilter = new LinkedList<>()
    ↓
Add current block hash to awaitingFreshFilter
    ↓
Drain all orphan blocks and add to awaitingFreshFilter
    ↓
Discard current and pending blocks
    ↓
Wait for new filter...
    ↓
PeerGroup.recalculateFastCatchupAndFilter(SEND_IF_CHANGED)
    ↓
Peer.setBloomFilter(newFilter)
    ├─→ Send FilterLoadMessage
    └─→ Call maybeRestartChainDownload()
          ↓
          Send ping/pong to ensure filter applied
          ↓
          blockChainDownloadLocked() to re-request awaiting blocks
```

### Critical Implementation Details

From `Peer.setBloomFilter()`:

```java
public void setBloomFilter(BloomFilter filter, boolean andQueryMemPool) {
    checkNotNull(filter, "Clearing filters is not currently supported");
    final VersionMessage version = vPeerVersionMessage;
    checkNotNull(version, "Cannot set filter before version handshake is complete");

    if (version.isBloomFilteringSupported()) {
        vBloomFilter = filter;
        sendMessage(filter.toBloomFilterMessage());

        if (andQueryMemPool)
            sendMessage(new MemoryPoolMessage());

        // Ping/pong to wait for filter to be applied
        ListenableFuture<Long> future = ping();
        Futures.addCallback(future, new FutureCallback<Long>() {
            @Override
            public void onSuccess(Long result) {
                maybeRestartChainDownload();
            }
            // ... 
+ }); + } +} +``` + +The ping/pong ensures the remote peer has applied the new filter before resuming download, preventing missed transactions. + +--- + +## Thread Safety + +### Locking Strategy + +Both `Peer` and `PeerGroup` use `ReentrantLock` for thread safety: + +```java +// Peer.java +protected final ReentrantLock lock = Threading.lock("peer"); + +// PeerGroup.java +protected final ReentrantLock lock = Threading.lock("peergroup"); +``` + +### Guarded Variables + +**Peer:** +- `downloadBlockBodies` - Controls header vs. body download +- `fastCatchupTimeSecs` - Fast catchup timestamp +- `awaitingFreshFilter` - Blocks awaiting filter recalculation +- `lastGetBlocksBegin/End` - Duplicate request tracking + +**PeerGroup:** +- `downloadPeer` - Currently selected download peer +- `downloadListener` - Chain download event listener +- `inactives` - Queue of inactive peer addresses +- `backoffMap` - Exponential backoff for failed connections + +### Thread-Safe Collections + +- `CopyOnWriteArrayList` for listener lists (both classes) +- `CopyOnWriteArrayList peers` in PeerGroup +- `CopyOnWriteArrayList wallets` in Peer + +### Executor Usage + +PeerGroup uses a `ListeningScheduledExecutorService` to serialize operations that: +- Access user-provided code (wallet listeners) +- Require ordering relative to other jobs +- Avoid lock contention with user code + +Example: +```java +executor.submit(new Runnable() { + @Override + public void run() { + recalculateFastCatchupAndFilter(FilterRecalculateMode.SEND_IF_CHANGED); + } +}); +``` + +--- + +## Key Synchronization Methods Reference + +### Peer Methods + +| Method | Purpose | Thread Safety | +|--------|---------|---------------| +| `startBlockChainDownload()` | Initiates async block chain download | Thread-safe via lock | +| `blockChainDownloadLocked(Sha256Hash)` | Core download method, sends getblocks/getheaders | Requires lock held | +| `setBloomFilter(BloomFilter, boolean)` | Sets bloom filter and optionally queries mempool | Thread-safe | +| `setDownloadParameters(long, boolean)` | Configures fast catchup time and filtered blocks | Thread-safe via lock | +| `setDownloadData(boolean)` | Enables/disables data download from peer | Volatile variable | +| `checkForFilterExhaustion(FilteredBlock)` | Checks if filter needs recalculation | Called under lock | +| `maybeRestartChainDownload()` | Restarts download after filter update | Thread-safe via ping/pong | + +### PeerGroup Methods + +| Method | Purpose | Thread Safety | +|--------|---------|---------------| +| `startBlockChainDownload(PeerDataEventListener)` | Registers listener for chain download | Thread-safe via lock | +| `startBlockChainDownloadFromPeer(Peer)` | Selects peer and initiates sync | Requires lock held | +| `recalculateFastCatchupAndFilter(FilterRecalculateMode)` | Merges filters and broadcasts | Async via executor | +| `setBloomFilterFalsePositiveRate(double)` | Updates FP rate and recalculates | Thread-safe via lock | +| `addWallet(Wallet)` | Registers wallet as filter provider | Thread-safe via lock | +| `removeWallet(Wallet)` | Unregisters wallet | Thread-safe via lock | +| `handleNewPeer(Peer)` | Sets up new peer with current filter | Thread-safe via lock | + +--- + +## Summary + +The DashJ blockchain synchronization process implements a sophisticated multi-protocol approach combining: + +### DIP-16 Headers-First Synchronization +1. **Multi-Stage Sync**: Six-stage process (OFFLINE → HEADERS → MNLIST → PREBLOCKS → BLOCKS → COMPLETE) +2. 
**Headers-First Download**: Quickly establish blockchain height with ~80 byte headers +3. **Masternode List Sync**: Download deterministic masternode lists and LLMQ quorums before blocks +4. **LLMQ Integration**: Synchronize Long-Living Masternode Quorums for InstantSend and ChainLock validation +5. **Checkpoint Security**: Hardcoded checkpoints protect against deep fork attacks during initial sync +6. **Event-Driven Transitions**: Future-based callbacks coordinate stage progression +7. **Progress Tracking**: Weighted progress calculation across all sync stages (30% headers, 5% mnlist, 5% preblocks, 60% blocks) + +### BIP37 Bloom Filtering +1. **Filter Management**: PeerGroup merges bloom filters from all wallets via FilterMerger and distributes to peers +2. **Privacy-Preserving**: Configurable false positive rate maintains privacy while reducing bandwidth +3. **Dynamic Recalculation**: Automatic filter updates when keys added or false positive rate exceeds threshold +4. **Filter Exhaustion Handling**: Detects and recovers when new keys generated during sync +5. **Stair-Stepping**: Element count rounded up by 100 to reduce filter regeneration frequency + +### Core Synchronization Features +1. **Download Coordination**: PeerGroup selects download peer and orchestrates multi-stage sync +2. **Protocol Execution**: Peer executes download using getblocks/getheaders with block locators +3. **Fast Catchup**: Headers-only download for old blocks, switching to filtered blocks after wallet creation time +4. **Dual Chain Support**: Separate headerChain and blockChain for efficient headers-first sync +5. **Thread Safety**: Comprehensive locking strategy and executor-based serialization + +### Dash-Specific Features +- **Masternode List**: Incremental sync via SimplifiedMasternodeListDiff (mnlistdiff messages) +- **LLMQ Quorums**: Quorum commitments with BLS signature validation +- **ChainLock Validation**: Verify ChainLock signatures using synchronized quorum data +- **InstantSend Support**: Validate InstantSend locks against active quorum state +- **Governance Integration**: Support for masternode governance proposals and votes + +This architecture provides bandwidth-efficient blockchain synchronization while maintaining privacy through bloom filters, supporting Dash-specific features (masternodes, quorums, InstantSend, ChainLocks), and enabling dynamic wallet key generation. 
+ +--- + +## References + +### Specifications +- **BIP37**: [Connection Bloom filtering](https://github.com/bitcoin/bips/blob/master/bip-0037.mediawiki) +- **DIP-16**: [Headers-First Synchronization](https://github.com/dashpay/dips/blob/master/dip-0016.md) +- **DIP-3**: [Deterministic Masternode Lists](https://github.com/dashpay/dips/blob/master/dip-0003.md) +- **DIP-4**: [Simplified Verification of Deterministic Masternode Lists](https://github.com/dashpay/dips/blob/master/dip-0004.md) +- **DIP-6**: [Long-Living Masternode Quorums](https://github.com/dashpay/dips/blob/master/dip-0006.md) + +### Core Source Files +- `core/src/main/java/org/bitcoinj/core/Peer.java` - Individual peer connection and sync execution +- `core/src/main/java/org/bitcoinj/core/PeerGroup.java` - Peer pool management and sync coordination +- `core/src/main/java/org/bitcoinj/net/FilterMerger.java` - Bloom filter merging +- `core/src/main/java/org/bitcoinj/core/BloomFilter.java` - BIP37 bloom filter implementation +- `core/src/main/java/org/bitcoinj/core/FilteredBlock.java` - Merkle block with partial merkle tree + +### Dash-Specific Source Files +- `core/src/main/java/org/bitcoinj/evolution/SimplifiedMasternodeListDiff.java` - Masternode list diff +- `core/src/main/java/org/bitcoinj/evolution/QuorumState.java` - LLMQ quorum state management +- `core/src/main/java/org/bitcoinj/evolution/QuorumRotationState.java` - Quorum rotation handling +- `core/src/main/java/org/bitcoinj/quorums/LLMQUtils.java` - LLMQ utilities +- `core/src/main/java/org/bitcoinj/core/MasternodeSync.java` - Masternode sync state management +- `core/src/main/java/org/bitcoinj/core/listeners/DownloadProgressTracker.java` - Multi-stage progress tracking \ No newline at end of file From d29eaf710cd27a1603f33cb79862617c93a42137 Mon Sep 17 00:00:00 2001 From: HashEngineering Date: Mon, 12 Jan 2026 12:06:23 -0800 Subject: [PATCH 2/9] docs: add peer-networking-threading-model.md --- designdocs/peer-networking-threading-model.md | 348 ++++++++++++++++++ 1 file changed, 348 insertions(+) create mode 100644 designdocs/peer-networking-threading-model.md diff --git a/designdocs/peer-networking-threading-model.md b/designdocs/peer-networking-threading-model.md new file mode 100644 index 000000000..71e3ebaff --- /dev/null +++ b/designdocs/peer-networking-threading-model.md @@ -0,0 +1,348 @@ +# Peer Networking Threading Model + +## Overview + +This document describes the threading architecture used for peer-to-peer network communication in dashj. Understanding this model is critical for performance analysis, especially when dealing with concurrent peer connections and large data transfers like blockchain synchronization. + +## Current Architecture: Single-Threaded NIO + +### Summary + +dashj uses **Java NIO (Non-blocking I/O) with a single I/O thread** to handle network communication for ALL peer connections simultaneously. This means: + +- ✅ One thread can efficiently manage many concurrent TCP connections +- ❌ Message processing from multiple peers is **serialized** (not parallel) +- ❌ Large data transfers from different peers **cannot happen concurrently** + +### Key Components + +#### 1. 
NioClientManager (Single I/O Thread)

**File**: `core/src/main/java/org/bitcoinj/net/NioClientManager.java`

The `NioClientManager` class is responsible for all network I/O operations:

```java
public class NioClientManager extends AbstractExecutionThreadService implements ClientConnectionManager {
    private final Selector selector;

    @Override
    public void run() {
        Thread.currentThread().setPriority(Thread.MIN_PRIORITY);
        while (isRunning()) {
            // Register new connections
            PendingConnect conn;
            while ((conn = newConnectionChannels.poll()) != null) {
                SelectionKey key = conn.sc.register(selector, SelectionKey.OP_CONNECT);
                key.attach(conn);
            }

            // Wait for events from ANY peer connection
            selector.select();

            // Process events from ALL peers in sequence
            Iterator<SelectionKey> keyIterator = selector.selectedKeys().iterator();
            while (keyIterator.hasNext()) {
                SelectionKey key = keyIterator.next();
                keyIterator.remove();
                handleKey(key); // Process one peer's event at a time
            }
        }
    }
}
```

**Key Points**:
- Line 116: `selector.select()` - Waits for network events from ANY peer
- Line 118-122: Iterates through events sequentially
- Line 122: `handleKey(key)` - Processes each peer's data one at a time
- **This entire loop runs in a SINGLE thread named "NioClientManager"**

#### 2. PeerSocketHandler (Message Deserialization)

**File**: `core/src/main/java/org/bitcoinj/core/PeerSocketHandler.java`

Message deserialization happens in the same I/O thread:

```java
@Override
public int receiveBytes(ByteBuffer buff) {
    while (true) {
        if (largeReadBuffer != null) {
            // Continue reading a large message
            int bytesToGet = Math.min(buff.remaining(), largeReadBuffer.length - largeReadBufferPos);
            buff.get(largeReadBuffer, largeReadBufferPos, bytesToGet);
            largeReadBufferPos += bytesToGet;
            if (largeReadBufferPos == largeReadBuffer.length) {
                processMessage(serializer.deserializePayload(header, ByteBuffer.wrap(largeReadBuffer)));
                // ...
            }
        }
        // Deserialize messages from buffer
        Message message = serializer.deserialize(buff);
        processMessage(message);
    }
}
```

**Key Points**:
- Deserialization happens synchronously in the I/O thread
- Large messages (blocks, etc.) are buffered but still processed sequentially
- While processing Peer A's large block, Peer B's data waits

#### 3. Peer (Message Processing)

**File**: `core/src/main/java/org/bitcoinj/core/Peer.java`

The `Peer` class processes messages, but has some async handling:

```java
public class Peer extends PeerSocketHandler {
    protected final ReentrantLock lock = Threading.lock("peer");
    protected final ListeningScheduledExecutorService executor;

    @Override
    protected void processMessage(Message m) throws Exception {
        // Most message processing happens in the I/O thread
        // But some operations are dispatched to executors
    }
}
```

#### 4. PeerGroup (Coordination Thread)

**File**: `core/src/main/java/org/bitcoinj/core/PeerGroup.java`

```java
protected ListeningScheduledExecutorService createPrivateExecutor() {
    ListeningScheduledExecutorService result = MoreExecutors.listeningDecorator(
        new ScheduledThreadPoolExecutor(1, new ContextPropagatingThreadFactory("PeerGroup Thread"))
    );
    return result;
}
```

**Key Points**:
- Single-threaded executor for peer management tasks
- Handles peer discovery, connection attempts, backoff logic
- Does NOT handle network I/O

#### 5. 
Threading.USER_THREAD (Event Listener Dispatch) + +**File**: `core/src/main/java/org/bitcoinj/utils/Threading.java` + +```java +public static class UserThread extends Thread implements Executor { + private LinkedBlockingQueue tasks; + + public UserThread() { + super("dashj user thread"); + setDaemon(true); + tasks = new LinkedBlockingQueue<>(); + start(); + } +} +``` + +**Key Points**: +- Single thread for dispatching event listeners +- Event listeners registered with `Threading.USER_THREAD` run here +- Prevents holding locks when calling user code + +## Thread Summary + +| Thread Name | Count | Purpose | Handles Peer I/O? | +|-------------|-------|---------|-------------------| +| NioClientManager | 1 | All network I/O for all peers | ✅ YES (all of it) | +| PeerGroup Thread | 1 | Peer management, discovery, connection logic | ❌ No | +| dashj user thread | 1 | Event listener dispatch | ❌ No | + +## How to Verify + +### 1. Thread Dump Analysis + +Add this code to capture thread information: + +```java +Threading.dump(); // Prints all thread stacks +``` + +You'll see thread names like: +- `NioClientManager` - Single thread handling all I/O +- `PeerGroup Thread` - Single thread for coordination +- `dashj user thread` - Single thread for callbacks + +### 2. Enable Debug Logging + +Add logging to `NioClientManager.handleKey()`: + +```java +log.debug("Thread {} processing peer event", Thread.currentThread().getName()); +``` + +You'll always see the same thread name regardless of how many peers are connected. + +### 3. Performance Testing + +Connect to multiple peers and observe: +- CPU usage on the NioClientManager thread +- Time spent in `receiveBytes()` and `processMessage()` +- Queue buildup in `selector.selectedKeys()` + +## Performance Implications + +### Advantages + +1. **Low memory overhead**: No thread-per-connection +2. **Efficient for many small messages**: NIO selector efficiently waits on multiple sockets +3. **Good for idle connections**: No thread blocked waiting on each connection +4. **No context switching overhead**: Single thread avoids thread context switches + +### Disadvantages + +1. **Serialized large data transfers**: + - When downloading a 2MB block from Peer A, Peer B's messages wait + - Cannot utilize multiple CPU cores for message deserialization + - Network bandwidth underutilized if processing is CPU-bound + +2. **Head-of-line blocking**: + - Slow peer can delay processing of fast peers + - CPU-intensive message processing blocks network I/O + +3. 
**No parallelism for multiple peer downloads**:
   - Initial block download from multiple peers is sequential, not parallel
   - Cannot take advantage of multi-core CPUs

### Measurement Points

To measure the impact:

```java
// In PeerSocketHandler.receiveBytes()
long start = System.nanoTime();
Message message = serializer.deserialize(buff);
processMessage(message);
long duration = System.nanoTime() - start;
if (duration > 10_000_000) { // > 10ms
    log.warn("Message processing took {}ms, blocking other peers", duration / 1_000_000);
}
```

## Alternative Architectures

### Option 1: Message Processing Thread Pool (Recommended)

Keep NIO for I/O, but dispatch message processing to a thread pool:

```java
// In PeerSocketHandler
private static final ExecutorService messageProcessor =
    Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

@Override
public int receiveBytes(ByteBuffer buff) {
    Message message = serializer.deserialize(buff);

    // Dispatch to thread pool instead of processing inline
    messageProcessor.execute(() -> {
        try {
            processMessage(message);
        } catch (Exception e) {
            exceptionCaught(e);
        }
    });
    // ... (loop over remaining messages and return bytes consumed, as in the existing implementation)
}
```

**Pros**:
- Parallel message processing from multiple peers
- I/O thread stays responsive
- Scales with CPU cores

**Cons**:
- More complex synchronization needed
- Ordering guarantees may be lost
- Increased memory usage

### Option 2: Multiple NIO Threads (Advanced)

Shard peers across multiple NIO threads:

```java
// Create N NioClientManager instances
List<NioClientManager> managers = new ArrayList<>();
for (int i = 0; i < numThreads; i++) {
    managers.add(new NioClientManager());
}

// Assign peers to managers using round-robin or hashing
int managerIndex = peerAddress.hashCode() % managers.size();
managers.get(managerIndex).openConnection(address, connection);
```

**Pros**:
- True parallel I/O
- Better CPU utilization
- Natural load balancing

**Cons**:
- More complex architecture
- Need to manage multiple selectors
- Harder to debug

### Option 3: Blocking I/O with Thread-per-Peer

Use traditional blocking I/O with a thread pool:

```java
ExecutorService peerThreadPool = Executors.newCachedThreadPool();

for (PeerAddress addr : peers) {
    peerThreadPool.execute(() -> {
        try (Socket socket = new Socket(addr.getAddr(), addr.getPort())) {
            InputStream in = socket.getInputStream();
            while (true) {
                // Illustrative only: the real serializer reads from a ByteBuffer, not an InputStream
                Message msg = serializer.deserialize(in);
                processMessage(msg);
            }
        } catch (Exception e) {
            log.error("Peer thread failed", e);
        }
    });
}
```

**Pros**:
- Simplest model
- True parallelism
- Familiar programming model

**Cons**:
- Higher memory usage (1 thread + stack per peer)
- More context switching
- Doesn't scale to thousands of peers

## Related Files

- `core/src/main/java/org/bitcoinj/net/NioClientManager.java` - Main I/O loop
- `core/src/main/java/org/bitcoinj/net/ConnectionHandler.java` - Per-connection state
- `core/src/main/java/org/bitcoinj/core/PeerSocketHandler.java` - Message serialization
- `core/src/main/java/org/bitcoinj/core/Peer.java` - Peer logic
- `core/src/main/java/org/bitcoinj/core/PeerGroup.java` - Peer management
- `core/src/main/java/org/bitcoinj/utils/Threading.java` - Threading utilities

## Recommendations

For dashj optimization, consider:

1. **Profile first**: Measure actual bottlenecks with profiling tools
2. **Measure I/O wait time**: Is the thread blocked on network I/O or CPU processing?
3. 
**Consider Option 1** (message processing thread pool) for:
   - Large blocks during initial sync
   - CPU-intensive message processing (signature verification)
4. **Keep current model** if:
   - Most time is spent in network I/O (not CPU)
   - Message processing is fast (< 1ms per message)
   - Memory is constrained

## Conclusion

The current single-threaded NIO architecture is efficient for managing many connections with small messages, but serializes large data transfers and message processing. For blockchain sync optimization, consider moving message processing to a thread pool while keeping the NIO model for I/O.
\ No newline at end of file

From 6e6eab514232439b6870717406d669b63931fa1c Mon Sep 17 00:00:00 2001
From: HashEngineering
Date: Mon, 12 Jan 2026 12:06:35 -0800
Subject: [PATCH 3/9] docs: add proposals

---
 .../network-optimization-strategies.md        | 1434 +++++++++++++++++
 designdocs/proposals/reverse-sync.md          | 1034 ++++++++++++
 2 files changed, 2468 insertions(+)
 create mode 100644 designdocs/proposals/network-optimization-strategies.md
 create mode 100644 designdocs/proposals/reverse-sync.md

diff --git a/designdocs/proposals/network-optimization-strategies.md b/designdocs/proposals/network-optimization-strategies.md
new file mode 100644
index 000000000..f9eceeddd
--- /dev/null
+++ b/designdocs/proposals/network-optimization-strategies.md
@@ -0,0 +1,1434 @@
+# Network Optimization Strategies for Blockchain Sync

## Performance Analysis Summary

Based on Android rescan timing data (1,391,682 blocks synced in 2887.47s):

```
Total Time: 2887.47s (100%)

Time Distribution:
├─ Pure Network Wait: ~2010s (70%)        ← PRIMARY BOTTLENECK
│  ├─ Time to First Block: 403.74s (14%)
│  └─ Inter-Block Gaps (net): ~1606s (56%)
├─ Block Processing: 794.05s (27%)
│  ├─ Wallet Updates: 413.50s (14%)
│  ├─ Filter Checks: 89.16s (3%)
│  ├─ Disk I/O: 16.49s (0.6%)
│  └─ Other: 275s (9%)
└─ Other overhead: ~83s (3%)
```

### Key Metrics
- **Total Network Wait Time**: 3208.35s (includes processing time)
- **Pure Network Wait**: ~2010s (70% of total sync time)
- **Time to First Block**: 403.74s (144.60ms avg × 2,792 requests)
- **Inter-Block Gaps**: 2804.61s (2.02ms avg × 1,391,400 blocks)
- **GetBlocksMessage Count**: 2,792 requests
- **Average Batch Size**: 498 blocks per request
- **Message Deserialization**: 55.39s (1.9% - not a bottleneck)
- **Disk I/O**: 16.49s (0.6% - not a bottleneck)

## Root Causes of Network Wait Time

### 1. Request Latency (403.74s total)
- **144.60ms average latency** per GetBlocksMessage
- **2,792 round-trip requests** required for full sync
- Causes:
  - High network latency to peer
  - Peer processing delay
  - Small batch sizes requiring many round-trips
  - No request pipelining (sequential requests)

### 2. Inter-Block Streaming Delays (~1606s net)
- **2.02ms average gap** between consecutive blocks
- Accumulates to 2.02ms × 1,391,400 = 2,808s gross; subtracting block processing (794s) and first-block latency (404s) leaves ~1,606s of pure streaming delay
- Causes:
  - Network bandwidth constraints
  - TCP congestion control overhead
  - Peer upload bandwidth limits
  - Small TCP window sizes
  - Single-peer sequential download

## Optimization Strategies

---

## Priority 1: GetBlocksMessage Request Pipelining ⭐⭐⭐

### Problem
Current sequential flow:
1. Send GetBlocksMessage
2. Wait 144ms for first block
3. Process entire batch (500 blocks)
4. Send next GetBlocksMessage
5. Repeat

This causes **403.74s wasted** waiting for first block in each batch. 
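The contrast with the pipelined approach proposed below can be sketched on a timeline (illustrative, using the measured 144ms request latency):

```
Sequential (current):
  [getblocks]--144ms--[recv/process blocks 1-500]--[getblocks]--144ms--[recv/process 501-1000]--...

Pipelined (proposed):
  [getblocks]--144ms--[recv/process blocks 1-400]-[getblocks]-[recv/process 401-500][501-1000...]
                                                       └─ next request sent early, so its 144ms
                                                          latency overlaps remaining processing
```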
### Solution
Send next GetBlocksMessage **before** finishing current batch:
- When at block 400 of 500 in current batch
- Send next GetBlocksMessage NOW
- By the time we finish block 500, blocks 501+ are already arriving

### Implementation Location
**File**: `core/src/main/java/org/bitcoinj/core/Peer.java`

**Current Code** (around line 1757-1784):
```java
void blockChainDownloadLocked(Sha256Hash toHash) {
    // ... existing code ...

    if (downloadBlockBodies) {
        GetBlocksMessage message = new GetBlocksMessage(params, blockLocator, toHash);
        sendMessage(message);
//        log.info("[NETWORK-IO] Sent GetBlocksMessage (from={}, to={})",
//            chainHeadHash, toHash);
    }
}
```

**Proposed Changes**:

1. Add field to track blocks remaining in current batch:
```java
@GuardedBy("lock")
private int blocksRemainingInBatch = 0;
@GuardedBy("lock")
private boolean nextBatchRequested = false;
```

2. In `blockChainDownloadLocked()`, track batch size:
```java
void blockChainDownloadLocked(Sha256Hash toHash) {
    // ... existing code ...

    lock.lock();
    try {
        blocksRemainingInBatch = estimatedBlocksToRequest; // typically 500
        nextBatchRequested = false;
    } finally {
        lock.unlock();
    }

    if (downloadBlockBodies) {
        GetBlocksMessage message = new GetBlocksMessage(params, blockLocator, toHash);
        sendMessage(message);
    }
}
```

3. In `endFilteredBlock()`, implement pipelining logic:
```java
@Override
protected void endFilteredBlock(FilteredBlock m) throws VerificationException {
    // ... existing block processing code ...

    // PIPELINING OPTIMIZATION: Request next batch early
    lock.lock();
    try {
        blocksRemainingInBatch--;

        // When 20% of blocks remain, request next batch
        // Adjust threshold based on processing speed
        if (blocksRemainingInBatch > 0 &&
            blocksRemainingInBatch < 100 && // 20% of typical 500 block batch
            !nextBatchRequested &&
            blockChain.getBestChainHeight() < getBestHeight()) { // peer still has blocks we lack

            nextBatchRequested = true;

            // Request next batch in the background (USER_THREAD here; a dedicated pool would also work)
            Threading.USER_THREAD.execute(() -> {
                try {
                    lock.lock();
                    try {
                        blockChainDownloadLocked(Sha256Hash.ZERO_HASH);
                    } finally {
                        lock.unlock();
                    }
                } catch (Exception e) {
                    log.error("Error requesting next block batch", e);
                }
            });

            log.info("[PIPELINE] Requesting next batch early ({} blocks remaining in current batch)",
                blocksRemainingInBatch);
        }
    } finally {
        lock.unlock();
    }
}
```

### Expected Impact
- **Eliminate ~350-400s** of Time to First Block latency
- Overlap network latency with block processing
- Reduce total sync time by **12-14%**

### Risks & Considerations
- May receive blocks out of order (need proper sequencing)
- Potential memory increase (buffering blocks from next batch)
- Need to handle errors in pipelined requests
- May need to adjust pipeline threshold based on processing speed

---

## Priority 2: Increase Batch Size ⭐⭐⭐

### Problem
- Current: ~498 blocks per GetBlocksMessage
- Requires 2,792 round-trips for 1.39M blocks
- Each round-trip costs 144.60ms

### Solution
Increase blocks requested per GetBlocksMessage to reduce round-trips. 
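As a back-of-envelope check (a runnable sketch; the constants come from the timing data above, and only the batch size varies):

```java
public class BatchSizeSavings {
    public static void main(String[] args) {
        long blocks = 1_391_682;    // blocks synced in the measured run
        double latencyMs = 144.60;  // avg first-block latency per GetBlocksMessage
        for (int batch : new int[] {498, 1000, 2000}) {
            long requests = (blocks + batch - 1) / batch; // ceiling division
            System.out.printf("batch=%4d -> %4d requests, ~%.0fs first-block latency%n",
                    batch, requests, requests * latencyMs / 1000.0);
        }
    }
}
```

This prints ~404s at 498 blocks/request, ~201s at 1,000, and ~101s at 2,000 — matching the savings estimates below.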
+ +### Implementation Location +**File**: `core/src/main/java/org/bitcoinj/core/Peer.java` + +**Current Code**: +```java +// GetBlocksMessage typically requests up to 500 blocks +// This is limited by MAX_INV_SIZE in the protocol +``` + +**Protocol Constraint**: +- Bitcoin/Dash protocol limits `inv` messages to 50,000 items +- FilteredBlock downloads are limited by this +- Current conservative limit: ~500 blocks + +**Proposed Changes**: + +1. Investigate actual protocol limits: +```java +// Check GetBlocksMessage.java for max locator size +// Verify peer responds with more blocks if requested +``` + +2. If possible, increase batch size: +```java +// In blockChainDownloadLocked() +private static final int BLOCKS_PER_REQUEST = 2000; // Increase from 500 + +// Request larger ranges +GetBlocksMessage message = new GetBlocksMessage(params, blockLocator, toHash); +// Ensure block locator spans appropriate range for larger batches +``` + +3. Alternative: Request multiple ranges in parallel: +```java +// Request blocks 0-500, 500-1000, 1000-1500 simultaneously +// Process in order as they arrive +``` + +### Expected Impact +- **Doubling to 1,000 blocks**: Save ~200s (1,396 fewer requests × 144ms) +- **Increasing to 2,000 blocks**: Save ~300s (2,094 fewer requests × 144ms) +- Reduce total sync time by **7-10%** + +### Risks & Considerations +- May exceed protocol limits (need testing) +- Larger batches increase memory usage +- May timeout on slow peers +- Need to verify peer compatibility + +--- + +## Priority 3: Optimize TCP Parameters ⭐⭐ + +### Problem +- 2.02ms average inter-block gap +- Accumulates to 2,808s across 1.39M blocks +- Likely caused by suboptimal TCP settings + +### Solution +Optimize TCP socket parameters for high-throughput block streaming. + +### Implementation Location +**File**: `core/src/main/java/org/bitcoinj/net/NioClientManager.java` or socket creation code + +**Proposed Changes**: + +```java +// When creating socket connection to peer +Socket socket = new Socket(); + +// 1. Disable Nagle's algorithm for lower latency +socket.setTcpNoDelay(true); + +// 2. Increase receive buffer for better throughput +// Default is often 64KB, increase to 256KB-1MB +socket.setReceiveBufferSize(512 * 1024); // 512KB + +// 3. Increase send buffer +socket.setSendBufferSize(256 * 1024); // 256KB + +// 4. Enable TCP keep-alive to prevent connection drops +socket.setKeepAlive(true); + +// 5. Optimize socket timeout +// Balance between responsiveness and allowing slow peers +socket.setSoTimeout(30000); // 30 seconds + +// 6. 
(Advanced) Set traffic class for QoS if supported +try { + // IPTOS_THROUGHPUT (0x08) - optimize for throughput + socket.setTrafficClass(0x08); +} catch (Exception e) { + // Not supported on all platforms +} +``` + +**Additional TCP Tuning** (may require system-level changes): +```java +// Document recommended OS-level TCP settings for users: +// +// Linux: +// sysctl -w net.ipv4.tcp_window_scaling=1 +// sysctl -w net.core.rmem_max=16777216 +// sysctl -w net.core.wmem_max=16777216 +// sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216" +// sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216" +// +// Android: +// May have limited control, but can set socket buffer sizes +``` + +### Expected Impact +- Reduce inter-block gap from 2.02ms to 1.0-1.5ms +- Potential savings: **500-1000s** (depends on network conditions) +- Reduce total sync time by **17-35%** + +### Risks & Considerations +- Platform-specific behavior (Android vs desktop) +- May not help if bottleneck is peer upload speed +- Requires testing on various networks +- OS-level tuning requires root/admin access + +--- + +## Priority 4: Peer Performance Tracking & Selection ⭐⭐ + +### Problem +- All peers treated equally +- May be connected to slow peer +- No mechanism to identify and prefer fast peers + +### Solution +Track peer performance metrics and prefer faster peers. + +### Implementation Location +**File**: `core/src/main/java/org/bitcoinj/core/Peer.java` + +**Proposed Changes**: + +1. Add performance tracking fields: +```java +public class Peer { + // Performance metrics + @GuardedBy("lock") + private long totalBlocksReceived = 0; + @GuardedBy("lock") + private long totalBytesReceived = 0; + @GuardedBy("lock") + private long connectionStartTime = 0; + @GuardedBy("lock") + private double averageBlockLatency = 0.0; // ms + @GuardedBy("lock") + private double averageInterBlockGap = 0.0; // ms + + public double getDownloadThroughput() { + lock.lock(); + try { + long elapsed = System.currentTimeMillis() - connectionStartTime; + if (elapsed == 0) return 0; + return (totalBytesReceived * 1000.0) / elapsed; // bytes/sec + } finally { + lock.unlock(); + } + } + + public double getAverageBlockLatency() { + return averageBlockLatency; + } + + public PeerPerformanceMetrics getPerformanceMetrics() { + lock.lock(); + try { + return new PeerPerformanceMetrics( + totalBlocksReceived, + getDownloadThroughput(), + averageBlockLatency, + averageInterBlockGap + ); + } finally { + lock.unlock(); + } + } +} +``` + +2. 
In `PeerGroup`, implement peer ranking: +```java +public class PeerGroup { + /** + * Get the best performing peer for block downloads + */ + public Peer getBestDownloadPeer() { + lock.lock(); + try { + return peers.stream() + .filter(Peer::isDownloadPeer) + .max(Comparator.comparingDouble(peer -> + calculatePeerScore(peer.getPerformanceMetrics()))) + .orElse(null); + } finally { + lock.unlock(); + } + } + + private double calculatePeerScore(PeerPerformanceMetrics metrics) { + // Score formula (higher is better): + // - Prioritize throughput (bytes/sec) + // - Penalize high latency + // - Penalize high inter-block gaps + + double throughputScore = metrics.throughput / 1000.0; // Normalize + double latencyPenalty = -metrics.averageLatency / 100.0; + double gapPenalty = -metrics.averageInterBlockGap / 10.0; + + return throughputScore + latencyPenalty + gapPenalty; + } + + /** + * Periodically evaluate peers and switch to better peer if available + */ + private void evaluatePeerPerformance() { + Peer currentDownloadPeer = getDownloadPeer(); + Peer bestPeer = getBestDownloadPeer(); + + if (bestPeer != null && bestPeer != currentDownloadPeer) { + PeerPerformanceMetrics current = currentDownloadPeer.getPerformanceMetrics(); + PeerPerformanceMetrics best = bestPeer.getPerformanceMetrics(); + + // Switch if new peer is significantly better (>50% improvement) + if (calculatePeerScore(best) > calculatePeerScore(current) * 1.5) { + log.info("Switching download peer from {} to {} (score: {} -> {})", + currentDownloadPeer.getAddress(), + bestPeer.getAddress(), + calculatePeerScore(current), + calculatePeerScore(best)); + + setDownloadPeer(bestPeer); + } + } + } +} +``` + +3. Add periodic evaluation: +```java +// In PeerGroup constructor or startBlockChainDownload() +executor.scheduleAtFixedRate( + this::evaluatePeerPerformance, + 30, // Initial delay + 30, // Period + TimeUnit.SECONDS +); +``` + +### Expected Impact +- Automatically find fastest available peer +- Avoid slow peers proactively +- Potential savings: **Variable** (depends on peer quality difference) +- May reduce sync time by **10-30%** if better peers available + +### Risks & Considerations +- Switching peers may interrupt download +- Need minimum sample size before switching +- May cause peer churn +- Need to handle peer disconnections gracefully + +--- + +## Priority 5: Multi-Peer Parallel Downloads ⭐⭐ + +### Problem +- Currently downloads from single peer sequentially +- Not utilizing available bandwidth from multiple peers +- Single point of failure if peer disconnects + +### Solution +Download different block ranges from multiple peers simultaneously. + +### Threading Model Requirements + +**Critical Question**: Do peers run on separate threads? + +**Answer**: YES - For parallel downloads to work, each peer MUST use separate threads or async I/O for true concurrency. + +#### Current bitcoinj Architecture + +**Good News**: bitcoinj already handles this correctly! + +```java +// bitcoinj's existing architecture: +// +// 1. NioClientManager (or NioServer) handles network I/O +// - Uses Java NIO (non-blocking I/O) +// - Single selector thread multiplexes all connections +// - Each peer connection is asynchronous +// +// 2. Message processing happens on executor threads +// - When a message arrives, it's dispatched to a worker thread +// - Multiple peers can process messages concurrently +// +// 3. 
Each Peer object has its own lock +// - Thread-safe for concurrent message handling +``` + +**How Network I/O Works**: +```java +// NioClientManager (simplified) +class NioClientManager { + private Selector selector; // Single selector for all connections + private ExecutorService executor; // Thread pool for message processing + + public void run() { + while (true) { + // Wait for any connection to have data ready + selector.select(); + + // Process all connections with data + for (SelectionKey key : selector.selectedKeys()) { + if (key.isReadable()) { + // Data available from a peer + ConnectionHandler handler = (ConnectionHandler) key.attachment(); + + // Read data in selector thread (non-blocking) + ByteBuffer data = handler.readData(); + + // Dispatch message processing to executor thread + executor.execute(() -> { + processMessage(handler.peer, data); + }); + } + } + } + } +} +``` + +**Key Points**: +1. ✅ **Network I/O is already parallel**: All peer connections share one selector thread +2. ✅ **Message processing is already parallel**: Uses thread pool +3. ✅ **No changes needed**: Existing architecture supports concurrent downloads + +#### What Changes ARE Needed + +**1. Block Processing Coordination** (This is the new part!) + +The challenge is NOT network parallelism - that already works. +The challenge is **merging blocks from multiple peers in the correct order**. + +```java +// Without coordination (WRONG): +Peer A: Receives block 100 → processes immediately → adds to blockchain +Peer B: Receives block 5 → processes immediately → ERROR! Out of order! + +// With coordination (CORRECT): +Peer A: Receives block 100 → adds to merge queue +Peer B: Receives block 5 → adds to merge queue +Coordinator: Processes blocks in order: 5, 6, 7, ..., 100 +``` + +**2. Serialized Blockchain Updates** + +Even though network I/O is parallel, blockchain updates must be serialized: + +```java +// BlockChain.add() is already thread-safe (uses locks) +// But we need to ensure SEQUENTIAL processing + +synchronized (blockChain) { + // Only ONE thread can add blocks at a time + blockChain.add(block); +} +``` + +### Implementation Approach + +**High-Level Design**: +``` +Network I/O (Parallel): +├─ Peer A Thread: Downloads blocks 0-500,000 ┐ +├─ Peer B Thread: Downloads blocks 500,000-1,000,000├─ All concurrent +└─ Peer C Thread: Downloads blocks 1,000,000-1,391,682┘ + + ↓ (all threads write to merge queue) + +Merge Queue (Priority Queue): +- Blocks sorted by height +- Buffering for out-of-order arrivals + + ↓ + +Coordinator Thread (Sequential): +- Reads blocks in order from queue +- Adds to blockchain one at a time +- Updates wallet +``` + +### Implementation Location +**File**: `core/src/main/java/org/bitcoinj/core/PeerGroup.java` + +**Proposed Changes**: + +1. 
Create parallel download coordinator:
```java
public class ParallelBlockDownloader {
    private final PeerGroup peerGroup;
    private final AbstractBlockChain blockChain;
    private final Map<Peer, BlockRange> peerAssignments = new ConcurrentHashMap<>();
    private int targetHeight; // set when the download starts; read by the merge thread
    private final BlockingQueue<FilteredBlock> mergeQueue = new PriorityBlockingQueue<>(
        1000,
        // Sketch: assumes the height is recoverable for a FilteredBlock header
        // (e.g., by looking it up in the completed DIP-16 header chain)
        Comparator.comparingInt((FilteredBlock block) -> block.getBlockHeader().getHeight())
    );

    public void startParallelDownload(List<Peer> peers, int targetHeight) {
        this.targetHeight = targetHeight;
        int peersCount = peers.size();
        int currentHeight = blockChain.getBestChainHeight();
        int blocksRemaining = targetHeight - currentHeight;
        int blocksPerPeer = blocksRemaining / peersCount;

        // Assign block ranges to peers
        for (int i = 0; i < peersCount; i++) {
            Peer peer = peers.get(i);
            int startHeight = currentHeight + (i * blocksPerPeer);
            int endHeight = (i == peersCount - 1)
                ? targetHeight
                : startHeight + blocksPerPeer;

            BlockRange range = new BlockRange(startHeight, endHeight);
            peerAssignments.put(peer, range);

            // Start download from this peer
            peer.downloadBlockRange(range);
        }

        // Start merge thread
        Threading.THREAD_POOL.execute(this::mergeAndProcessBlocks);
    }

    private void mergeAndProcessBlocks() {
        int nextExpectedHeight = blockChain.getBestChainHeight() + 1;
        Map<Integer, FilteredBlock> buffer = new HashMap<>();

        while (nextExpectedHeight <= targetHeight) {
            try {
                FilteredBlock block = mergeQueue.poll(1, TimeUnit.SECONDS);
                if (block == null) continue;

                int blockHeight = block.getBlockHeader().getHeight();

                if (blockHeight == nextExpectedHeight) {
                    // Process this block and any buffered sequential blocks
                    processBlock(block);
                    nextExpectedHeight++;

                    // Process buffered blocks if sequential
                    while (buffer.containsKey(nextExpectedHeight)) {
                        processBlock(buffer.remove(nextExpectedHeight));
                        nextExpectedHeight++;
                    }
                } else if (blockHeight > nextExpectedHeight) {
                    // Buffer out-of-order block
                    buffer.put(blockHeight, block);
                } else {
                    // Duplicate or old block, ignore
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
    }

    public void onBlockReceived(Peer peer, FilteredBlock block) {
        // Add to merge queue
        mergeQueue.offer(block);
    }
}
```

2. Modify Peer to support range downloads:
```java
public class Peer {
    public void downloadBlockRange(BlockRange range) {
        lock.lock();
        try {
            this.downloadStartHeight = range.startHeight;
            this.downloadEndHeight = range.endHeight;
            this.rangeDownloadMode = true;
        } finally {
            lock.unlock();
        }

        // Request first batch in range
        Sha256Hash startHash = blockChain.getBlockHash(range.startHeight);
        blockChainDownload(startHash);
    }
}
```

### Expected Impact
- **With 2 peers**: Reduce sync time by **30-40%**
- **With 3 peers**: Reduce sync time by **50-60%**
- More resilient (peer disconnection doesn't stop entire sync)

### Risks & Considerations
- **Complex implementation** - significant development effort
- Memory overhead (buffering out-of-order blocks)
- Potential for duplicate downloads if coordination fails
- Need to handle:
  - Peer disconnections mid-download
  - Slow peers (reassign their range)
  - Block verification across ranges
- May overwhelm device resources on mobile

---

## Priority 6: Headers-First Download with Parallel Body Fetching ⭐

### Problem
- Current BIP37 approach downloads filtered blocks sequentially
- Cannot parallelize easily because we don't know future block hashes

### Solution
1. 
Download all block headers first (very fast - headers are only 80 bytes)
2. Once headers are known, fetch block bodies in parallel from multiple peers

### Header Storage Challenge

**Problem**: SPVBlockStore only maintains ~5000 recent headers. For 1.39M blocks, we need a different storage strategy.

**Header Storage Requirements**:
- 1.39M headers × 80 bytes = **111 MB** of raw header data
- Plus indexes, metadata, and overhead
- Need fast random access by height and hash
- Must work on mobile devices with limited resources

### Header Storage Options

#### Option 1: Streaming Headers (Recommended for Mobile) ⭐⭐⭐

**Concept**: Don't store all headers permanently - just verify the chain as headers arrive, then discard old headers.

**Implementation**:
```java
public class StreamingHeaderValidator {
    private final NetworkParameters params;
    private StoredBlock checkpoint;                 // Last known checkpoint
    private StoredBlock currentTip;                 // Current chain tip
    private LinkedList<StoredBlock> recentHeaders;  // Keep last 5000

    // Verify header chain without storing everything
    public void processHeader(Block header) throws VerificationException {
        // 1. Verify header connects to previous
        verifyHeaderConnects(header);

        // 2. Verify proof of work
        verifyProofOfWork(header);

        // 3. Update tip (accumulate this header's work onto the running chainwork)
        currentTip = new StoredBlock(header,
                currentTip.getChainWork().add(header.getWork()),
                currentTip.getHeight() + 1);

        // 4. Add to recent headers (keep last 5000)
        recentHeaders.addLast(currentTip);
        if (recentHeaders.size() > 5000) {
            recentHeaders.removeFirst();
        }

        // 5. Periodically save checkpoint
        if (currentTip.getHeight() % 10000 == 0) {
            checkpoint = currentTip;
            saveCheckpoint(checkpoint);
        }
    }

    // After headers sync, we know:
    // - Final chain tip (verified)
    // - Last 5000 headers (in memory)
    // - Checkpoints every 10K blocks (on disk)

    // This is enough to fetch block bodies
}
```

**Phase 1: Headers Download with Streaming**
```java
// Request all headers and verify as they arrive
StreamingHeaderValidator validator = new StreamingHeaderValidator(params);

Sha256Hash startHash = blockChain.getChainHead().getHeader().getHash();
Sha256Hash stopHash = Sha256Hash.ZERO_HASH; // Get all headers

while (!validator.isFullySynced()) {
    GetHeadersMessage request = new GetHeadersMessage(params, startHash, stopHash);
    peer.sendMessage(request);

    // As headers arrive, validate and discard
    List<Block> headers = waitForHeaders();
    for (Block header : headers) {
        validator.processHeader(header);
    }

    // Update start for next batch
    startHash = validator.getCurrentTip().getHeader().getHash();
}

// Now we have verified chain tip and recent headers
// Can fetch bodies starting from our last stored block
```

**Pros**:
- ✅ Minimal memory usage (~400KB for 5000 headers)
- ✅ Minimal disk usage (checkpoints only)
- ✅ Perfect for mobile/Android
- ✅ Can resume from checkpoints on interruption

**Cons**:
- ❌ Can't randomly access old headers
- ❌ Must fetch bodies sequentially from last stored block
- ❌ Limits parallelization (can only fetch forward from known blocks)

---

#### Option 2: Temporary File-Backed Header Cache ⭐⭐

**Concept**: Store all headers temporarily in a memory-mapped file, discard after body sync completes. 

**Implementation**:
```java
public class TemporaryHeaderStore implements AutoCloseable {
    private static final int HEADER_SIZE = 80;
    private final NetworkParameters params; // needed to deserialize headers
    private final File tempFile;
    private final RandomAccessFile raf;
    private final MappedByteBuffer buffer;
    private final Map<Sha256Hash, Integer> hashToOffset;

    public TemporaryHeaderStore(NetworkParameters params, int estimatedHeaders) throws IOException {
        this.params = params;

        // Create temp file
        tempFile = File.createTempFile("headers-", ".tmp");
        tempFile.deleteOnExit();

        // Map file to memory
        raf = new RandomAccessFile(tempFile, "rw");
        long fileSize = (long) estimatedHeaders * HEADER_SIZE;
        buffer = raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, fileSize);

        hashToOffset = new HashMap<>(estimatedHeaders);
    }

    public void storeHeader(int height, Block header) throws IOException {
        int offset = height * HEADER_SIZE;
        buffer.position(offset);

        byte[] headerBytes = header.bitcoinSerialize();
        buffer.put(headerBytes, 0, HEADER_SIZE);

        hashToOffset.put(header.getHash(), offset);
    }

    public Block getHeader(int height) throws IOException {
        int offset = height * HEADER_SIZE;
        buffer.position(offset);

        byte[] headerBytes = new byte[HEADER_SIZE];
        buffer.get(headerBytes);

        return new Block(params, headerBytes);
    }

    public Block getHeaderByHash(Sha256Hash hash) throws IOException {
        Integer offset = hashToOffset.get(hash);
        if (offset == null) return null;

        buffer.position(offset);
        byte[] headerBytes = new byte[HEADER_SIZE];
        buffer.get(headerBytes);

        return new Block(params, headerBytes);
    }

    @Override
    public void close() {
        buffer.clear();
        try { raf.close(); } catch (IOException e) { /* ignore on cleanup */ }
        tempFile.delete();
    }
}
```

**Usage**:
```java
// Phase 1: Download and store all headers
try (TemporaryHeaderStore headerStore = new TemporaryHeaderStore(params, 1_400_000)) {
    // Download all headers
    for (Block header : downloadAllHeaders()) {
        headerStore.storeHeader(currentHeight, header);
        currentHeight++;
    }

    // Phase 2: Now fetch bodies in parallel using stored headers
    ParallelBodyDownloader downloader = new ParallelBodyDownloader(headerStore);
    downloader.downloadBodies(startHeight, endHeight, peers);

} // Auto-cleanup temp file
```

**Pros**:
- ✅ Enables full parallelization (random access to any header)
- ✅ Memory-mapped I/O is fast
- ✅ Auto-cleanup on close
- ✅ ~111MB disk usage (reasonable)

**Cons**:
- ❌ Requires 111MB temporary disk space
- ❌ Memory-mapped files may not work well on all Android versions
- ❌ Hash lookup requires in-memory HashMap (~50MB)

---

#### Option 3: Sparse Header Storage with Checkpoints ⭐⭐⭐

**Concept**: Store checkpoints (every 2,016 blocks) + recent headers + headers we need for current download.

**Implementation**:
```java
public class SparseHeaderStore {
    private static final int CHECKPOINT_INTERVAL = 2016; // ~3.5 days of Dash blocks (2.5 min/block)

    // Permanent storage
    private final Map<Integer, StoredBlock> checkpoints; // Every 2016 blocks
    private final SPVBlockStore recentHeaders;           // Last 5000 blocks

    // Temporary active range (for current parallel download)
    private final Map<Integer, StoredBlock> activeRange;
    private int activeRangeStart = 0;
    private int activeRangeEnd = 0;

    public void downloadHeaders() {
        int currentHeight = 0;

        while (currentHeight < targetHeight) {
            List<Block> headers = requestHeaders(currentHeight);

            for (Block header : headers) {
                // Always verify
                verifyHeader(header);

                // Store checkpoint?
                if (currentHeight % CHECKPOINT_INTERVAL == 0) {
                    checkpoints.put(currentHeight, new StoredBlock(header, work, currentHeight));
                }

                // Store recent?
                if (targetHeight - currentHeight < 5000) {
                    recentHeaders.put(new StoredBlock(header, work, currentHeight));
                }

                currentHeight++;
            }
        }
    }

    public void loadRangeForDownload(int startHeight, int endHeight) {
        activeRange.clear();
        activeRangeStart = startHeight;
        activeRangeEnd = endHeight;

        // Re-download just the headers we need for this range
        List<Block> rangeHeaders = requestHeaders(startHeight, endHeight);
        for (int i = 0; i < rangeHeaders.size(); i++) {
            activeRange.put(startHeight + i,
                new StoredBlock(rangeHeaders.get(i), work, startHeight + i));
        }
    }

    public StoredBlock getHeader(int height) {
        // Check active range first
        if (height >= activeRangeStart && height <= activeRangeEnd) {
            return activeRange.get(height);
        }

        // Check recent headers
        StoredBlock recent = recentHeaders.get(height);
        if (recent != null) return recent;

        // Check checkpoints
        return checkpoints.get(height);
    }
}
```

**Usage**:
```java
SparseHeaderStore headerStore = new SparseHeaderStore();

// Phase 1: Download all headers, store checkpoints and recent
headerStore.downloadHeaders(); // Stores ~1400 checkpoints + 5000 recent

// Phase 2: Download bodies in ranges
for (int rangeStart = 0; rangeStart < targetHeight; rangeStart += 50000) {
    int rangeEnd = Math.min(rangeStart + 50000, targetHeight);

    // Load headers for this range (re-download if needed)
    headerStore.loadRangeForDownload(rangeStart, rangeEnd);

    // Download bodies for this range
    downloadBodiesInRange(rangeStart, rangeEnd);

    // Clear range to free memory
    headerStore.clearActiveRange();
}
```

**Pros**:
- ✅ Very low memory usage (~2MB: 1400 checkpoints + 5000 recent)
- ✅ Low disk usage (~200KB permanent)
- ✅ Enables range-based parallelization
- ✅ Excellent for mobile

**Cons**:
- ❌ Need to re-download headers for each range
- ❌ More complex logic
- ❌ Slightly slower overall (re-downloading headers)

---

#### Option 4: SQLite Database (Production Quality) ⭐⭐⭐⭐

**Concept**: Use SQLite for efficient, indexed header storage. 

**Implementation**:
```java
public class SQLiteHeaderStore {
    private final NetworkParameters params; // needed to deserialize headers
    private final Connection db;

    public SQLiteHeaderStore(NetworkParameters params, File dbFile) throws SQLException {
        this.params = params;
        db = DriverManager.getConnection("jdbc:sqlite:" + dbFile.getAbsolutePath());
        createSchema();
    }

    private void createSchema() throws SQLException {
        // Execute each statement separately; many JDBC drivers only run the
        // first statement of a multi-statement string
        db.createStatement().execute(
            "CREATE TABLE IF NOT EXISTS headers (" +
            "  height INTEGER PRIMARY KEY," +
            "  hash BLOB NOT NULL," +
            "  header BLOB NOT NULL," +
            "  chainwork BLOB NOT NULL" +
            ")"
        );
        db.createStatement().execute(
            "CREATE INDEX IF NOT EXISTS idx_hash ON headers(hash)");

        // Use WAL mode for better concurrent access
        db.createStatement().execute("PRAGMA journal_mode=WAL;");

        // Optimize for fast inserts during sync
        db.createStatement().execute("PRAGMA synchronous=NORMAL;");
    }

    public void storeHeaders(List<Block> headers, int startHeight) throws SQLException {
        db.setAutoCommit(false);

        try (PreparedStatement stmt = db.prepareStatement(
            "INSERT OR REPLACE INTO headers (height, hash, header, chainwork) VALUES (?, ?, ?, ?)")) {

            for (int i = 0; i < headers.size(); i++) {
                Block header = headers.get(i);
                int height = startHeight + i;

                stmt.setInt(1, height);
                stmt.setBytes(2, header.getHash().getBytes());
                stmt.setBytes(3, header.bitcoinSerialize());
                stmt.setBytes(4, calculateChainWork(header).toByteArray());
                stmt.addBatch();
            }

            stmt.executeBatch();
            db.commit();
        } catch (SQLException e) {
            db.rollback();
            throw e;
        }
    }

    public StoredBlock getHeader(int height) throws SQLException {
        try (PreparedStatement stmt = db.prepareStatement(
            "SELECT header, chainwork FROM headers WHERE height = ?")) {

            stmt.setInt(1, height);
            ResultSet rs = stmt.executeQuery();

            if (rs.next()) {
                byte[] headerBytes = rs.getBytes("header");
                byte[] chainwork = rs.getBytes("chainwork");
                Block header = new Block(params, headerBytes);
                return new StoredBlock(header, new BigInteger(chainwork), height);
            }
            return null;
        }
    }

    public StoredBlock getHeaderByHash(Sha256Hash hash) throws SQLException {
        try (PreparedStatement stmt = db.prepareStatement(
            "SELECT height, header, chainwork FROM headers WHERE hash = ?")) {

            stmt.setBytes(1, hash.getBytes());
            ResultSet rs = stmt.executeQuery();

            if (rs.next()) {
                int height = rs.getInt("height");
                byte[] headerBytes = rs.getBytes("header");
                byte[] chainwork = rs.getBytes("chainwork");
                Block header = new Block(params, headerBytes);
                return new StoredBlock(header, new BigInteger(chainwork), height);
            }
            return null;
        }
    }

    public void compact() throws SQLException {
        // After body sync completes, remove old headers
        // Keep only recent 5000 + checkpoints
        db.createStatement().execute(
            "DELETE FROM headers WHERE " +
            "  height < (SELECT MAX(height) - 5000 FROM headers) AND " +
            "  height % 2016 != 0" // Keep checkpoints
        );
        db.createStatement().execute("VACUUM;");
    }
}
```

**Usage**:
```java
File headerDb = new File(walletDir, "headers.db");
SQLiteHeaderStore headerStore = new SQLiteHeaderStore(params, headerDb);

// Phase 1: Download and store all headers
int height = 0;
while (height < targetHeight) {
    List<Block> headers = downloadHeaders(height);
    headerStore.storeHeaders(headers, height);
    height += headers.size();
}

// Phase 2: Parallel body download with random access
ParallelBodyDownloader downloader = new ParallelBodyDownloader(headerStore);
downloader.download(0, targetHeight, peers);

// Phase 3: Cleanup
headerStore.compact(); // Reduce to ~200KB
```
 
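Because headers live in an indexed table, an interrupted sync can resume cheaply by querying the highest stored height; a minimal sketch of a hypothetical `getMaxHeight` helper on the store above:

```java
/** Returns the highest stored header height, or -1 if the table is empty. */
public int getMaxHeight() throws SQLException {
    try (Statement stmt = db.createStatement();
         ResultSet rs = stmt.executeQuery("SELECT MAX(height) AS h FROM headers")) {
        if (rs.next()) {
            int h = rs.getInt("h");
            // MAX() over an empty table yields NULL; getInt then returns 0,
            // so use wasNull() to distinguish an empty store from height 0
            return rs.wasNull() ? -1 : h;
        }
        return -1;
    }
}

// Resuming Phase 1 after an interruption:
int height = headerStore.getMaxHeight() + 1;
```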

**Pros**:
- ✅ Full random access to any header
- ✅ Excellent performance with proper indexes
- ✅ Mature, battle-tested technology
- ✅ Built into Android (no extra dependencies)
- ✅ Can compact after sync completes
- ✅ Transactional integrity

**Cons**:
- ❌ Initial disk usage: ~150MB (compacts to ~200KB after)
- ❌ Slightly higher complexity

---

### Recommended Approach

**For Mobile/Android: Option 3 (Sparse Storage) + Option 1 (Streaming)**

```java
public class MobileHeadersFirstSync {
    private final SparseHeaderStore headerStore;

    public void sync() {
        // Phase 1: Stream headers, store checkpoints + recent
        streamAndValidateHeaders(); // ~2MB storage

        // Phase 2: Download bodies in ranges
        for (BlockRange range : getRanges()) {
            // Re-fetch headers for this range (cheap, headers are small)
            headerStore.loadRangeForDownload(range.start, range.end);

            // Download bodies in parallel (3-5 peers)
            downloadBodiesInParallel(range, 3);

            // Free range headers
            headerStore.clearActiveRange();
        }
    }
}
```

**For Desktop: Option 4 (SQLite)**

Full-featured, reliable, and disk space is not a concern.

---

### Performance Comparison

| Strategy | Memory | Disk | Parallelization | Complexity | Mobile-Friendly |
|----------|--------|------|-----------------|------------|-----------------|
| Streaming (Option 1) | ~400KB | ~200KB | Limited | Low | ✅ Excellent |
| Temp File (Option 2) | ~50MB | ~111MB | Full | Medium | ⚠️ Moderate |
| Sparse (Option 3) | ~2MB | ~200KB | Range-based | Medium | ✅ Excellent |
| SQLite (Option 4) | ~5MB | ~150MB¹ | Full | Medium | ✅ Good |

¹ Compacts to ~200KB after sync

---

**Phase 2: Parallel Body Download**
```java
// Now that we have all block hashes, fetch bodies in parallel
ParallelBlockDownloader downloader = new ParallelBlockDownloader();
downloader.downloadBlockBodies(
    allBlockHashes,
    availablePeers,
    blockChain
);
```

### Expected Impact
- Enable true parallelization
- Headers download: ~100-200s (much faster than full sync)
- Body download: Can use all available peers efficiently
- Potential total sync time: **800-1200s** (vs current 2887s)
- **60-70% reduction** in sync time

### Risks & Considerations
- **Major architectural change** - requires significant refactoring
- Changes sync model from BIP37 filtered blocks to headers-first
- May require changes to wallet notification model
- Need to maintain bloom filters during body fetch
- More complex error handling

---

## Measurement & Validation

### Additional Metrics to Track

Add these fields to BlockPerformanceReport:

```java
// Peer performance metrics
private long totalPeerSwitches = 0;
private Map<PeerAddress, PeerPerformanceMetrics> peerMetrics = new ConcurrentHashMap<>();

// Pipeline metrics
private long totalPipelinedRequests = 0;
private long timeToFirstBlockSaved = 0; // Time saved by pipelining

// TCP metrics
private long tcpRetransmits = 0;
private long averageRTT = 0; // Round-trip time

// Batch size metrics
private int[] batchSizeDistribution = new int[10]; // Histogram
```

### Performance Testing Checklist

Before and after each optimization:
- [ ] Measure total sync time
- [ ] Measure network wait time breakdown
- [ ] Measure CPU usage
- [ ] Measure memory usage
- [ ] Measure battery impact (on mobile)
- [ ] Test on different network conditions:
  - [ ] WiFi (high bandwidth)
  - [ ] 4G/LTE (medium bandwidth, higher latency)
  - [ ] 3G (low bandwidth, high latency)
- [ ] Test with different peer 
qualities +- [ ] Verify blockchain integrity after sync + +--- + +## Implementation Roadmap + +### Phase 1: Quick Wins (Weeks 1-2) +**Estimated time savings: 300-500s (10-17%)** + +1. ✅ Implement TCP socket optimizations + - Low risk, immediate benefit + - Files: NioClientManager.java + +2. ✅ Add peer performance tracking + - Foundation for future optimizations + - Files: Peer.java, PeerGroup.java + +3. ✅ Implement GetBlocksMessage pipelining + - Medium complexity, high reward + - Files: Peer.java + +### Phase 2: Medium Effort (Weeks 3-4) +**Estimated time savings: 200-400s (7-14%)** + +4. ✅ Increase batch size (with testing) + - Test protocol limits carefully + - Files: Peer.java, GetBlocksMessage.java + +5. ✅ Implement peer selection based on performance + - Use metrics from Phase 1 + - Files: PeerGroup.java + +### Phase 3: Major Changes (Weeks 5-8) +**Estimated time savings: 800-1400s (28-48%)** + +6. ✅ Implement multi-peer parallel downloads + - Requires new coordinator component + - Files: New ParallelBlockDownloader.java, Peer.java, PeerGroup.java + +7. ⚠️ Consider headers-first approach (optional) + - Major architectural change + - Evaluate if earlier phases provide sufficient improvement + +--- + +## Expected Total Impact + +### Conservative Estimate +- Phase 1: 300s saved (10%) +- Phase 2: 300s saved (10%) +- Phase 3: 800s saved (28%) +- **Total: 1400s saved (48% reduction)** +- **New sync time: ~1500s (25 minutes)** + +### Optimistic Estimate +- Phase 1: 500s saved (17%) +- Phase 2: 400s saved (14%) +- Phase 3: 1400s saved (48%) +- **Total: 2300s saved (79% reduction)** +- **New sync time: ~600s (10 minutes)** + +### Target +**Reduce 2887s (48 min) to 1000-1500s (17-25 min)** with Phases 1-3. + +--- + +## Testing Strategy + +### Unit Tests +```java +@Test +public void testGetBlocksMessagePipelining() { + // Verify next request sent before batch completes + // Verify correct block ordering + // Verify no duplicate requests +} + +@Test +public void testPeerPerformanceTracking() { + // Verify metrics calculated correctly + // Verify peer ranking works + // Verify peer switching logic +} +``` + +### Integration Tests +```java +@Test +public void testParallelDownload() { + // Test with 2-3 mock peers + // Verify blocks merged correctly + // Verify handling of peer disconnection + // Verify no duplicate block processing +} +``` + +### Performance Tests +```java +@Test +public void benchmarkSyncTime() { + // Sync 10,000 blocks with optimization + // Compare to baseline + // Verify improvement +} +``` + +--- + +## Monitoring & Logging + +### Key Metrics to Log + +```java +log.info("=== Network Performance Summary ==="); +log.info("Total sync time: {}s", totalTime); +log.info("Network wait time: {}s ({}%)", networkWait, percentage); +log.info("Average batch size: {} blocks", avgBatchSize); +log.info("Pipeline efficiency: {}%", pipelineEfficiency); +log.info("Peer switches: {}", peerSwitches); +log.info("Top performing peer: {} ({} KB/s)", + bestPeer.getAddress(), + bestPeer.getThroughput() / 1024); +``` + +### Debug Logging for Troubleshooting + +```java +if (log.isDebugEnabled()) { + log.debug("[PIPELINE] Requesting next batch with {} blocks remaining", + blocksRemaining); + log.debug("[PEER] Peer {} throughput: {} KB/s, latency: {} ms", + peer.getAddress(), + peer.getThroughput() / 1024, + peer.getAverageLatency()); + log.debug("[TCP] Socket buffer sizes: recv={}, send={}", + socket.getReceiveBufferSize(), + socket.getSendBufferSize()); +} +``` + +--- + +## Configuration Options + +Add 
user-configurable options for network optimizations: + +```java +public class NetworkConfig { + // Pipelining + public boolean enablePipelining = true; + public int pipelineThreshold = 100; // Blocks remaining to trigger next request + + // Batch size + public int blocksPerRequest = 500; // Can be tuned based on network + + // TCP + public int socketReceiveBuffer = 512 * 1024; // 512KB + public int socketSendBuffer = 256 * 1024; // 256KB + public boolean tcpNoDelay = true; + + // Peer selection + public boolean enablePeerSelection = true; + public int peerEvaluationInterval = 30; // seconds + public double peerSwitchThreshold = 1.5; // 50% better to switch + + // Parallel download + public boolean enableParallelDownload = false; // Experimental + public int maxParallelPeers = 3; +} +``` + +--- + +## References + +- Bitcoin Protocol: https://en.bitcoin.it/wiki/Protocol_documentation +- BIP37 (Bloom Filters): https://github.com/bitcoin/bips/blob/master/bip-0037.mediawiki +- BIP130 (Headers-First): https://github.com/bitcoin/bips/blob/master/bip-0130.mediawiki +- TCP Optimization: https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt \ No newline at end of file diff --git a/designdocs/proposals/reverse-sync.md b/designdocs/proposals/reverse-sync.md new file mode 100644 index 000000000..5f815fc2d --- /dev/null +++ b/designdocs/proposals/reverse-sync.md @@ -0,0 +1,1034 @@ +# Reverse Block Synchronization for DashJ + +## Overview + +This document explores the concept of **reverse block synchronization** - downloading filtered blocks in reverse chronological order (newest to oldest) rather than the traditional forward order. The goal is to prioritize recent transactions that are more likely to be relevant to the user, providing faster "time-to-first-transaction" in the wallet UI. + +### Motivation + +Traditional blockchain sync downloads blocks from genesis (or fast-catchup point) forward to the chain tip. For users, this means: +- **Long wait time** before seeing recent transactions +- **Poor UX** during initial wallet setup (restoration) +- **Delayed gratification** - users can't see their most recent payments until full sync completes + +Reverse sync would: +- **Show recent transactions first** - users see their latest balance quickly +- **Better user experience** - immediate feedback on wallet state +- **Incremental completion** - wallet becomes useful faster + +### Proposed Approach + +Following DIP-16 headers-first synchronization: +1. **HEADERS stage**: Download all headers forward (as normal) → Establishes chain tip +2. **MNLIST stage**: Sync masternode lists and LLMQ quorums (as normal) → Required for validation +3. **PREBLOCKS stage**: Optional preprocessing (as normal) +4. 
**BLOCKS stage (MODIFIED)**: Download filtered blocks in **reverse** order, 500 blocks at a time + - Start from chain tip (headerChain.getChainHead()) + - Request blocks in batches: [tip-499, tip-498, ..., tip-1, tip] + - Work backwards to the fast-catchup point or genesis + +--- + +## Key Advantage: Headers Already Downloaded (DIP-16) + +**CRITICAL INSIGHT**: With DIP-16 headers-first synchronization, by the time we reach the BLOCKS stage, we already have: + +✅ **Complete header chain** (`headerChain`) from genesis to tip +✅ **All block hashes** for every block in the canonical chain +✅ **Block heights** mapped to hashes +✅ **Parent-child relationships** (via `prevBlockHash` in headers) +✅ **Cumulative chainwork** for the entire chain +✅ **Checkpoint validation** already passed during HEADERS stage + +This **fundamentally changes** the reverse sync feasibility because: + +1. **We know the canonical chain structure** - No ambiguity about which blocks to request +2. **We can validate block-to-header matching** - Verify downloaded blocks match their headers +3. **We can build accurate locators** - Reference blocks by header hash even without bodies +4. **We avoid orphan handling complexity** - We know exactly where each block fits +5. **We can defer only transaction validation** - Block structure is already validated + +### What Headers Enable + +**From headerChain, we can access:** + +```java +// Get header for any height +StoredBlock headerAtHeight = headerChain.getBlockStore().get(targetHeight); + +// Get block hash without having the block body +Sha256Hash blockHash = headerAtHeight.getHeader().getHash(); + +// Get parent hash +Sha256Hash parentHash = headerAtHeight.getHeader().getPrevBlockHash(); + +// Verify a downloaded block matches its expected header +boolean matches = downloadedBlock.getHash().equals(headerAtHeight.getHeader().getHash()); + +// Get chainwork for validation +BigInteger chainWork = headerAtHeight.getChainWork(); +``` + +**This solves or mitigates many pitfalls discussed below!** + +--- + +## Critical Pitfalls (Re-evaluated with Headers) + +> **Note**: The following pitfalls are re-evaluated considering that we have complete headers from DIP-16. + +### 1. **Block Chain Validation Dependency** + +**Problem**: Blocks validate against their parent blocks. Validation requires: +- Previous block's hash matches `block.getPrevBlockHash()` +- Cumulative difficulty/chainwork from genesis +- Transaction inputs spending outputs from previous blocks + +**Impact**: Cannot validate blocks in reverse order without their parents. + +**Severity**: 🔴 **CRITICAL** - Core blockchain invariant violated + +**✅ MITIGATED BY HEADERS**: Can validate block hash matches header! Can skip PoW validation. + +**With headers, we can**: +```java +// Validate block matches its expected header +StoredBlock expectedHeader = headerChain.getBlockStore().get(blockHeight); +if (!downloadedBlock.getHash().equals(expectedHeader.getHeader().getHash())) { + throw new VerificationException("Block doesn't match header at height " + blockHeight); +} + +// Verify parent relationship (even in reverse) +if (!downloadedBlock.getPrevBlockHash().equals(expectedHeader.getHeader().getPrevBlockHash())) { + throw new VerificationException("Block parent mismatch"); +} + +// Skip PoW validation - already done on headers +// Just verify transactions match merkle root +``` + +**Remaining Issue**: Transaction input validation still requires forward order (outputs before spends). 

**Severity After Headers**: 🟡 **MEDIUM** - Block structure validated, only transaction validation deferred

---

### 2. **SPVBlockStore Ring Buffer Design**

**Problem**: SPVBlockStore uses a ring buffer with forward-only assumptions:
- Ring cursor advances forward: `setRingCursor(buffer, buffer.position())`
- Capacity of 5000 blocks (DEFAULT_CAPACITY)
- Wraps around when full
- Get operations assume sequential forward insertion

**Impact**:
- Reverse insertion would corrupt the ring buffer ordering
- Chain head tracking assumes forward progression
- Ring cursor movement would be backwards

**From SPVBlockStore.java:184-200:**
```java
public void put(StoredBlock block) throws BlockStoreException {
    lock.lock();
    try {
        int cursor = getRingCursor(buffer);
        if (cursor == fileLength) {
            cursor = FILE_PROLOGUE_BYTES; // Wrap around
        }
        buffer.position(cursor);
        // Write block at cursor
        setRingCursor(buffer, buffer.position()); // Advance forward
        blockCache.put(hash, block);
    } finally {
        lock.unlock();
    }
}
```

**Severity**: 🔴 **CRITICAL** - Storage layer incompatible with reverse insertion

---

### 3. **Orphan Block Handling Reversal**

**Problem**: In forward sync, orphan blocks are blocks received before their parent. In reverse sync, **every block is initially an orphan** (its parent hasn't been downloaded yet).

**Impact**:
- Orphan block storage would explode in memory
- `tryConnectingOrphans()` assumes forward chain building
- Orphan eviction policies designed for rare edge cases, not normal operation

**From AbstractBlockChain.java:130,468:**
```java
private final LinkedHashMap<Sha256Hash, OrphanBlock> orphanBlocks = new LinkedHashMap<>();

// In normal sync:
orphanBlocks.put(block.getHash(), new OrphanBlock(block, filteredTxHashList, filteredTxn));
tryConnectingOrphans(); // Tries to connect orphans to chain
```

**In reverse sync**: Every single block would be orphaned initially!

**Severity**: 🔴 **CRITICAL** - Memory exhaustion, wrong orphan semantics

**✅ COMPLETELY SOLVED BY HEADERS**: No orphan handling needed!

**With headers, we know**:
```java
// We know exactly which block to request at each height
for (int height = tipHeight; height >= fastCatchupHeight; height -= 500) {
    // Request blocks by height range - no orphans possible
    StoredBlock headerAtHeight = headerChain.getBlockStore().get(height);
    Sha256Hash expectedHash = headerAtHeight.getHeader().getHash();

    // When block arrives, we know exactly where it goes
    // No orphan storage needed!
}
```

**Why this works**:
- Headers define the canonical chain
- We request blocks in a specific order (even if reverse)
- Each block's position is pre-determined by its header
- No ambiguity about block relationships

**Severity After Headers**: 🟢 **SOLVED** - Orphan handling not needed

---

### 4. **Transaction Input Validation**

**Problem**: SPV clients validate transactions by checking:
- Inputs reference outputs from bloom filter-matched transactions
- Outputs are created before being spent
- UTXO set consistency

**Impact**: In reverse order:
- Transaction spends appear **before** the outputs they're spending
- Cannot validate input scripts without the referenced output
- Bloom filter might not include outputs we discover later

**Example**:
```
Block 1000: TX_A creates output X
Block 1001: TX_B spends output X

Reverse sync receives:
1. Block 1001 first → TX_B tries to spend X (doesn't exist yet!)
2. 
Block 1000 later → TX_A creates X (now B makes sense) +``` + +**Severity**: 🔴 **CRITICAL** - Transaction validation impossible + +--- + +### 5. **Bloom Filter Incompleteness** + +**Problem**: Bloom filters are created based on: +- Known wallet addresses +- Known public keys +- Previously received outputs + +**Impact**: In reverse sync: +- Filter may not include outputs we haven't discovered yet +- HD wallet key lookahead might miss transactions +- P2PK outputs wouldn't trigger filter updates properly + +**From blockchain-sync-bip37.md**: Filter exhaustion handling assumes forward progression to detect missing keys. + +**Severity**: 🟡 **HIGH** - May miss transactions, incorrect balance + +--- + +### 6. **Masternode List State Consistency** + +**Problem**: Deterministic masternode lists build forward from genesis: +- `mnlistdiff` messages are incremental forward deltas +- Quorum commitments reference historical block heights +- InstantSend/ChainLock validation requires correct quorum at block height + +**Impact**: +- Cannot validate ChainLocks on blocks without knowing historical quorum state +- InstantSend locks reference quorums that we haven't validated yet (in reverse) +- Masternode list state would be inconsistent going backwards + +**Severity**: 🔴 **CRITICAL** - Dash-specific features broken + +--- + +### 7. **LLMQ Quorum Validation** + +**Problem**: LLMQ quorums have lifecycle events: +- Formation at specific heights +- Rotation based on block count +- Signature aggregation across time + +**Impact**: +- Quorum validation expects forward time progression +- ChainLock signatures reference future (in reverse) quorums +- Cannot verify quorum commitments in reverse + +**From QuorumState.java**: Quorum state builds forward through block processing. + +**Severity**: 🔴 **CRITICAL** - ChainLock/InstantSend validation broken + +--- + +### 8. **Block Locator Construction** + +**Problem**: Block locators assume forward chain building: +- Exponential backoff from chain head +- Last 100 blocks sequential + +**Impact**: +- Reverse block locators would need to reference future blocks (not yet downloaded) +- Peer would be confused by requests that don't match chain topology + +**From blockchain-sync-bip37.md**: +``` +Build locator: [head, head-1, ..., head-99, head-101, head-105, ..., genesis] +``` + +**In reverse**: Head is known (from headers), but intermediate blocks aren't in blockChain yet. + +**Severity**: 🟡 **HIGH** - Protocol incompatibility + +**✅ COMPLETELY SOLVED BY HEADERS**: Can build perfect locators! + +**With headers**: +```java +// Build locator using headerChain (already has all headers) +private BlockLocator buildReverseBlockLocator(int targetHeight) { + BlockLocator locator = new BlockLocator(); + + // Use headerChain, not blockChain + StoredBlock cursor = headerChain.getBlockStore().get(targetHeight); + + // Standard locator construction works perfectly + for (int i = 0; i < 100 && cursor != null; i++) { + locator.add(cursor.getHeader().getHash()); + cursor = headerChain.getBlockStore().get(cursor.getHeight() - 1); + } + + int step = 1; + while (cursor != null && cursor.getHeight() > 0) { + locator.add(cursor.getHeader().getHash()); + step *= 2; + cursor = headerChain.getBlockStore().get(cursor.getHeight() - step); + } + + return locator; +} +``` + +**Severity After Headers**: 🟢 **SOLVED** - Headers enable perfect locators + +--- + +### 9. 
**Checkpoint Validation** + +**Problem**: Checkpoints validate forward progression: +- `params.passesCheckpoint(height, hash)` checks blocks connect to known checkpoints +- Assumes building up to checkpoints, not down from them + +**Impact**: Checkpoint validation would fail or give false security in reverse order. + +**Severity**: 🟡 **MEDIUM** - Security feature degraded + +**✅ COMPLETELY SOLVED BY HEADERS**: Checkpoints already validated! + +**With headers**: +- All headers passed checkpoint validation during HEADERS stage +- Blocks must match headers (which already passed checkpoints) +- No additional checkpoint validation needed during BLOCKS stage + +**Severity After Headers**: 🟢 **SOLVED** - Checkpoints already enforced on headers + +--- + +### 10. **Progress Tracking Inversion** + +**Problem**: Download progress assumes forward sync: +- "Blocks left" calculation: `peer.getBestHeight() - blockChain.getChainHead().getHeight()` +- Progress percentage based on catching up to tip + +**Impact**: Progress would appear to go backwards, confusing UX. + +**Severity**: 🟢 **LOW** - UX issue only, fixable + +--- + +### 11. **Reorganization Detection** + +**Problem**: Reorgs detected by: +- New block has more chainwork than current chain head +- Finding split point going backwards from both heads + +**Impact**: In reverse sync: +- Cannot detect reorgs properly (don't have the chain to compare against) +- Split point finding assumes forward-built chain exists + +**Severity**: 🟡 **HIGH** - Cannot handle chain reorgs during sync + +**✅ PARTIALLY SOLVED BY HEADERS**: Reorgs detected at header level! + +**With headers**: +- If chain reorgs during BLOCKS stage, HEADERS stage would detect it first +- Headers chain is canonical - blocks just need to match +- Reorg during block download would manifest as header mismatch + +**However**: +- Need to handle case where we're downloading blocks for a header chain that reorgs mid-download +- Solution: Validate blocks match current headerChain; restart if headerChain changes + +**Severity After Headers**: 🟡 **MEDIUM** - Detectable, requires restart on reorg + +--- + +### 12. **Fast Catchup Time Interaction** + +**Problem**: Fast catchup downloads only headers before a timestamp, then switches to full blocks: +```java +if (header.getTimeSeconds() >= fastCatchupTimeSecs) { + this.downloadBlockBodies = true; +} +``` + +**Impact**: In reverse sync, we'd start with full blocks (newest) and switch to headers-only (oldest) - opposite semantics. + +**Severity**: 🟡 **MEDIUM** - Optimization strategy incompatible + +--- + +### 13. **Wallet Transaction Dependency Order** + +**Problem**: Wallets track: +- Transaction chains (tx A creates output, tx B spends it) +- Balance updates (credits before debits) +- Confidence building (confirmations increase forward) + +**Impact**: In reverse: +- Debits appear before credits +- Transaction chains appear in reverse dependency order +- Confidence would decrease as we go back in time (confusing) + +**Severity**: 🟡 **MEDIUM** - Wallet state confusion + +--- + +### 14. **Peer Protocol Assumptions** + +**Problem**: P2P protocol messages assume forward sync: +- `GetBlocksMessage` requests blocks after a locator (forward direction) +- `InvMessage` announces blocks in forward order +- Peers expect sequential requests + +**Impact**: Would need to reverse the protocol semantics or work around peer expectations. + +**Severity**: 🟡 **HIGH** - Protocol violation, peers may reject + +--- + +### 15. 
**Memory Pressure During Reverse Accumulation**

**Problem**: In forward sync, blocks are validated and added to chain immediately. In reverse sync, blocks must be:
- Stored in memory until we have their parents
- Held for batch validation
- Queued for out-of-order processing

**Impact**:
- Memory usage proportional to number of unvalidated blocks
- 500 blocks × average size = significant memory
- Risk of OOM on mobile devices

**Severity**: 🟡 **MEDIUM** - Resource constraint on mobile

---

## Implementation Requirements

To implement reverse block synchronization safely, the following changes would be necessary:

### Phase 1: Storage Layer Modifications

#### 1. **Dual-Mode SPVBlockStore**

**Requirement**: Extend SPVBlockStore to support reverse insertion without corrupting the ring buffer.

**Approach**:
- Add `putReverse(StoredBlock block)` method
- Maintain separate reverse ring cursor
- Use temporary storage for reverse blocks
- Preserve forward-only chain head semantics

**Implementation**:
```java
public class SPVBlockStore {
    // Existing forward cursor
    private int forwardCursor;

    // NEW: Reverse insertion cursor
    private int reverseCursor;

    // NEW: Temporary reverse block storage
    private TreeMap<Integer, StoredBlock> reverseBlockBuffer;

    public void putReverse(StoredBlock block) throws BlockStoreException {
        // Store in temporary buffer, not ring
        reverseBlockBuffer.put(block.getHeight(), block);
    }

    public void finalizeReverseBlocks() throws BlockStoreException {
        // Once we have all blocks, insert them forward into ring buffer
        for (StoredBlock block : reverseBlockBuffer.values()) {
            put(block); // Use normal forward insertion
        }
        reverseBlockBuffer.clear();
    }
}
```

**Complexity**: 🟡 **MEDIUM** - Requires careful buffer management

---

#### 2. **Temporary Reverse Chain Structure**

**Requirement**: Create a parallel chain structure to hold reverse-downloaded blocks until validation.

**Approach**:
- `ReverseBlockChain` class holds blocks by height
- Maps block hash → block for lookup
- Ordered by height descending (tip to oldest)
- Not connected to main `blockChain` until finalized

**Implementation**:
```java
public class ReverseBlockChain {
    private final TreeMap<Integer, Block> blocksByHeight = new TreeMap<>(Collections.reverseOrder());
    private final Map<Sha256Hash, Block> blocksByHash = new HashMap<>();
    private final int startHeight; // Chain tip height
    private final int endHeight;   // Fast-catchup or genesis height

    public void addBlock(Block block, int height) {
        blocksByHeight.put(height, block);
        blocksByHash.put(block.getHash(), block);
    }

    public boolean isComplete() {
        // Check if we have all blocks from startHeight to endHeight
        return blocksByHeight.size() == (startHeight - endHeight + 1);
    }

    public List<Block> getBlocksForwardOrder() {
        return Lists.reverse(new ArrayList<>(blocksByHeight.values()));
    }
}
```

**Complexity**: 🟢 **LOW** - Straightforward data structure

---

### Phase 2: Validation Deferral

#### 3. **Deferred Block Validation**

**Requirement**: Skip validation during reverse download, batch validate after completion. 

**Approach**:
- Add `deferValidation` flag to `AbstractBlockChain.add()`
- Store blocks without validation
- After reverse sync completes, validate in forward order
- Roll back on validation failure

**Implementation**:
```java
public class AbstractBlockChain {
    private boolean deferValidation = false;
    private List<Block> deferredBlocks = new ArrayList<>();

    public void enableDeferredValidation() {
        this.deferValidation = true;
    }

    public boolean add(Block block) throws VerificationException {
        if (deferValidation) {
            deferredBlocks.add(block);
            return true; // Assume valid for now
        }
        // Normal validation
        return addWithValidation(block);
    }

    public void validateDeferredBlocks() throws VerificationException {
        deferValidation = false;
        for (Block block : deferredBlocks) {
            if (!addWithValidation(block)) {
                throw new VerificationException("Deferred block failed validation: " + block.getHash());
            }
        }
        deferredBlocks.clear();
    }
}
```

**Complexity**: 🟡 **MEDIUM** - Requires careful state management

---

#### 4. **Transaction Validation Queue**

**Requirement**: Queue transaction validations until we have the full block range.

**Approach**:
- Skip input validation during reverse sync
- Record transactions for later validation
- Validate transaction chains in forward order after completion

**Implementation**:
```java
public class WalletTransactionValidator {
    private Map<Sha256Hash, Transaction> pendingValidation = new HashMap<>();

    public void queueForValidation(Transaction tx) {
        pendingValidation.put(tx.getTxId(), tx);
    }

    public void validateQueuedTransactions(Wallet wallet) throws VerificationException {
        // Sort by block height (if known) or topologically
        List<Transaction> sorted = topologicalSort(pendingValidation.values());
        for (Transaction tx : sorted) {
            wallet.validateTransaction(tx);
        }
        pendingValidation.clear();
    }
}
```

**Complexity**: 🔴 **HIGH** - Topological sorting, dependency tracking

---

### Phase 3: Protocol Adaptation

#### 5. **Reverse Block Locator**

**Requirement**: Create block locators that reference the tip (known) and work backwards.

**Approach**:
- Use headerChain (already complete) to build locators
- Reference blocks by header hash (not in blockChain yet)
- Peer responds with blocks going forward from locator match

**Implementation**:
```java
public class Peer {
    private BlockLocator buildReverseBlockLocator(int targetHeight) {
        BlockLocator locator = new BlockLocator();

        // Use headerChain since it has all headers
        StoredBlock cursor = headerChain.getBlockStore().get(targetHeight);

        // Add 100 blocks going backward from target
        for (int i = 0; i < 100 && cursor != null; i++) {
            locator.add(cursor.getHeader().getHash());
            cursor = headerChain.getBlockStore().get(cursor.getHeight() - 1);
        }

        // Exponential backoff going further back
        int step = 1;
        while (cursor != null && cursor.getHeight() > 0) {
            locator.add(cursor.getHeader().getHash());
            step *= 2;
            cursor = headerChain.getBlockStore().get(cursor.getHeight() - step);
        }

        return locator;
    }
}
```

**Complexity**: 🟢 **LOW** - Leverages existing headerChain

---

#### 6. **Reverse GetBlocks Request**

**Requirement**: Request blocks in reverse order, 500 at a time. 
+ +**Approach**: +- Use `GetBlocksMessage` with locator pointing to (tip - 500) +- Request filtered blocks from (tip - 499) to tip +- Move backwards in 500-block chunks + +**Implementation**: +```java +public class Peer { + private void reverseBlockChainDownloadLocked(int startHeight) { + int endHeight = Math.max(startHeight - 500, fastCatchupHeight); + + // Build locator pointing to endHeight + BlockLocator locator = buildReverseBlockLocator(endHeight); + + // stopHash is the tip of this range + Sha256Hash stopHash = headerChain.getBlockStore().get(startHeight).getHeader().getHash(); + + GetBlocksMessage message = new GetBlocksMessage(params, locator, stopHash); + sendMessage(message); + + // Peer will respond with InvMessage containing blocks from endHeight to startHeight + } +} +``` + +**Complexity**: 🟡 **MEDIUM** - Protocol semantics adapted + +--- + +### Phase 4: Dash-Specific Handling + +#### 7. **Masternode List State Snapshot** + +**Requirement**: Use already-synced masternode list from MNLIST stage (DIP-16). + +**Approach**: +- Masternode list already synced to chain tip during MNLIST stage +- Use this state for all ChainLock/InstantSend validations +- Do NOT attempt to rebuild masternode list in reverse + +**Rationale**: DIP-16 already solved this - we have the full masternode list before BLOCKS stage starts. + +**Complexity**: 🟢 **LOW** - Already available from DIP-16 + +--- + +#### 8. **ChainLock Validation with Forward State** + +**Requirement**: Validate ChainLocks using the quorum state from MNLIST stage. + +**Approach**: +- Quorum state is already at chain tip (from MNLIST stage) +- Historical ChainLocks can be validated if we have quorum at that height +- May need to skip ChainLock validation for very old blocks + +**Implementation**: +```java +public class ChainLocksHandler { + public boolean validateChainLockInReverse(Block block, ChainLockSignature cls) { + // We have current quorum state from MNLIST stage + // Can we validate this historical ChainLock? + int quorumHeight = block.getHeight() - (block.getHeight() % LLMQParameters.interval); + + if (quorumStateAtHeight(quorumHeight) != null) { + return verifyChainLockSignature(block, cls); + } else { + // Too old, quorum state not available + log.warn("Skipping ChainLock validation for old block: {}", block.getHeight()); + return true; // Assume valid + } + } +} +``` + +**Complexity**: 🟡 **MEDIUM** - May lose some validation guarantees + +--- + +#### 9. **InstantSend Lock Handling** + +**Requirement**: Handle InstantSend locks in reverse. + +**Approach**: +- InstantSend locks reference transactions +- In reverse, transaction might appear before its lock +- Queue locks for validation after transaction appears + +**Complexity**: 🟡 **MEDIUM** - Reverse dependency handling + +--- + +### Phase 5: Wallet Integration + +#### 10. **Wallet Notification Order** + +**Requirement**: Notify wallet of transactions in reverse but maintain balance consistency. 
+
+**Approach**:
+- Hold wallet notifications until the batch is complete
+- Sort transactions by height before notifying
+- Update balance in forward order (oldest to newest)
+
+**Implementation**:
+```java
+public class Wallet {
+    private List<WalletTransaction> pendingNotifications = new ArrayList<>();
+
+    public void queueReverseSyncTransaction(Transaction tx, int height) {
+        pendingNotifications.add(new WalletTransaction(tx, height));
+        // Don't notify listeners yet
+    }
+
+    public void flushReverseSyncNotifications() {
+        // Sort by height ascending
+        pendingNotifications.sort(Comparator.comparingInt(WalletTransaction::getHeight));
+
+        // Notify in forward order
+        for (WalletTransaction wtx : pendingNotifications) {
+            notifyTransactionListeners(wtx.tx);
+        }
+
+        pendingNotifications.clear();
+    }
+}
+```
+
+**Complexity**: 🟢 **LOW** - Straightforward batching
+
+---
+
+#### 11. **Bloom Filter Pre-population**
+
+**Requirement**: Ensure the bloom filter includes outputs we'll discover in reverse.
+
+**Approach**:
+- Increase the bloom filter lookahead depth
+- Use a larger filter initially
+- Recalculate the filter after each reverse batch completes
+
+**Implementation**:
+```java
+public class PeerGroup {
+    public void prepareForReverseSync() {
+        // Increase lookahead for all wallets
+        for (Wallet wallet : wallets) {
+            wallet.setKeyLookaheadSize(200); // Increased from 100
+        }
+
+        // Force a larger bloom filter
+        bloomFilterMerger.setBloomFilterFPRate(0.00001); // Lower FP rate = larger filter
+        recalculateFastCatchupAndFilter(FilterRecalculateMode.FORCE_SEND_FOR_REFRESH);
+    }
+}
+```
+
+**Complexity**: 🟢 **LOW** - Parameter tuning
+
+---
+
+### Phase 6: Progress & UX
+
+#### 12. **Reverse Progress Tracking**
+
+**Requirement**: Update the progress calculation for reverse sync.
+
+**Approach**:
+- Track "blocks remaining" going backwards
+- Show the user recent transactions first (better UX)
+- Reverse the progress percentage calculation
+
+**Implementation**:
+```java
+public class DownloadProgressTracker {
+    private boolean isReverseSync;
+    private int reverseStartHeight;
+    private int reverseEndHeight;
+
+    public void startReverseSync(int startHeight, int endHeight) {
+        this.isReverseSync = true;
+        this.reverseStartHeight = startHeight;
+        this.reverseEndHeight = endHeight;
+    }
+
+    @Override
+    public void onBlocksDownloaded(Peer peer, Block block, @Nullable FilteredBlock fb, int blocksLeft) {
+        if (isReverseSync) {
+            int downloaded = reverseStartHeight - block.getHeight();
+            int total = reverseStartHeight - reverseEndHeight;
+            double progress = (double) downloaded / total;
+
+            // Notify UI: "Syncing recent blocks: 65% (showing newest first)"
+            notifyProgress(progress, "recent-first");
+        }
+    }
+}
+```
+
+**Complexity**: 🟢 **LOW** - UX improvement
+
+---
+
+#### 13. **Hybrid Sync Strategy**
+
+**Requirement**: Combine reverse and forward sync for optimal UX.
+
+**Approach**:
+1. Download the last 500-1000 blocks in reverse (most recent transactions)
+2. Show the wallet UI as "partially synced"
+3. Then download the remaining blocks in forward order
+4. Finalize validation when complete
+
+**Benefits**:
+- User sees recent activity immediately
+- Less memory pressure (smaller reverse batch)
+- Still reaches a full sync eventually
+
+**Complexity**: 🟡 **MEDIUM** - Coordination logic
+
+---
+
+### Phase 7: Finalization & Validation
+
+#### 14. **Batch Validation After Reverse Completion**
+
+**Requirement**: Validate all reverse-downloaded blocks in forward order once complete.
+
+**Approach**:
+```java
+public class ReverseSyncCoordinator {
+    private ReverseBlockChain reverseChain;
+    private AbstractBlockChain blockChain;
+    private List<Wallet> wallets;
+
+    public void finalizeReverseSync() throws BlockStoreException, VerificationException {
+        log.info("Reverse sync complete, validating {} blocks in forward order",
+                reverseChain.size());
+
+        // Get blocks in forward order (oldest to newest)
+        List<Block> blocksForward = reverseChain.getBlocksForwardOrder();
+
+        // Validate and add to main chain
+        for (Block block : blocksForward) {
+            if (!blockChain.add(block)) {
+                throw new VerificationException("Block failed validation during finalization: " +
+                    block.getHash());
+            }
+        }
+
+        // Flush wallet notifications
+        for (Wallet wallet : wallets) {
+            wallet.flushReverseSyncNotifications();
+        }
+
+        log.info("Reverse sync finalization complete");
+    }
+}
+```
+
+**Complexity**: 🟡 **MEDIUM** - Critical validation step
+
+---
+
+#### 15. **Rollback on Validation Failure**
+
+**Requirement**: Handle the case where reverse-downloaded blocks fail validation.
+
+**Approach**:
+- Keep the reverse chain separate until validation passes
+- On failure, discard the reverse chain
+- Fall back to traditional forward sync
+- Notify the user of the sync failure
+
+**Complexity**: 🟡 **MEDIUM** - Error handling
+
+---
+
+## Summary of Complexity
+
+| Category | Requirements | Complexity | Risk |
+|----------|--------------|------------|------|
+| **Storage** | Dual-mode SPVBlockStore, Reverse chain structure | 🟡 MEDIUM | 🟡 MEDIUM |
+| **Validation** | Deferred validation, Transaction queuing | 🔴 HIGH | 🔴 HIGH |
+| **Protocol** | Reverse locators, Adapted GetBlocks | 🟡 MEDIUM | 🟡 MEDIUM |
+| **Dash-Specific** | Masternode state, ChainLock validation | 🟡 MEDIUM | 🔴 HIGH |
+| **Wallet** | Notification order, Bloom filter | 🟢 LOW | 🟢 LOW |
+| **UX** | Progress tracking, Hybrid strategy | 🟢 LOW | 🟢 LOW |
+| **Finalization** | Batch validation, Rollback | 🟡 MEDIUM | 🔴 HIGH |
+
+**Overall Assessment**: 🔴 **HIGH COMPLEXITY, HIGH RISK**
+
+---
+
+## Alternative: Hybrid Approach (Recommended)
+
+Given the significant challenges of full reverse sync, a **hybrid approach** may be more practical:
+
+### Two-Phase Sync Strategy
+
+**Phase 1: Reverse "Preview" Sync (500-1000 blocks)**
+- Download ONLY the most recent 500-1000 blocks in reverse
+- Use temporary storage (not SPVBlockStore)
+- Show transactions to the user as "preliminary" or "syncing"
+- Skip full validation (rely on ChainLocks for recent blocks)
+
+**Phase 2: Forward Historical Sync**
+- After the preview, download the remaining blocks in forward order (traditional)
+- Validate fully as normal
+- Merge with the preview data
+- Mark the wallet as "fully synced"
+
+### Benefits
+- ✅ User sees recent transactions in ~30 seconds
+- ✅ Avoids most validation issues (only 500 blocks held in memory)
+- ✅ Reuses existing forward sync infrastructure
+- ✅ Lower risk, easier to implement
+- ✅ Graceful degradation (if the preview fails, continue with forward sync)
+
+### Implementation Outline
+```java
+public class HybridSyncStrategy {
+    private static final int PREVIEW_BLOCKS = 500;
+
+    public void syncBlockchain() {
+        // DIP-16 Stages 1-3 (as normal)
+        downloadHeaders();
+        downloadMasternodeLists();
+
+        // Phase 1: Reverse preview
+        List<Block> recentBlocks = downloadRecentBlocksReverse(PREVIEW_BLOCKS);
+        showPreviewToUser(recentBlocks); // "Syncing: showing recent activity"
+
+        // Phase 2: Forward historical
+        downloadRemainingBlocksForward(); // Traditional sync
+        finalizeAndValidate();
+        markWalletFullySynced();
+    }
+}
+```
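+
+To make the preview phase more concrete, the following sketch shows one way `downloadRecentBlocksReverse()` from the outline above could walk the already-synced header chain backwards from the tip. This is illustrative only: `requestFilteredBlock()` is a hypothetical helper (wrapping a `getdata` round-trip for a filtered block to the download peer), and real code would batch requests, handle timeouts, and recover from `BlockStoreException`.
+
+```java
+// Sketch only: the header chain is complete after the HEADERS stage, so we can
+// walk back PREVIEW_BLOCKS headers from the tip and fetch those blocks newest-first.
+private List<Block> downloadRecentBlocksReverse(int previewBlocks) throws BlockStoreException {
+    List<Block> preview = new ArrayList<>(previewBlocks);
+    StoredBlock cursor = headerChain.getChainHead();
+    for (int i = 0; i < previewBlocks && cursor != null; i++) {
+        // Hypothetical helper: request one filtered block by hash from the download peer.
+        Block block = requestFilteredBlock(cursor.getHeader().getHash());
+        preview.add(block); // newest first; finalizeAndValidate() re-orders before validation
+        cursor = cursor.getPrev(headerChain.getBlockStore()); // step one header back
+    }
+    return preview;
+}
+```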
+
+**Complexity**: 🟡 **MEDIUM** (much lower than full reverse)
+**Risk**: 🟡 **MEDIUM** (acceptable for UX improvement)
+**UX Gain**: 🟢 **HIGH** (fast initial feedback)
+
+---
+
+## Conclusion
+
+Full reverse block synchronization presents **15 critical pitfalls** spanning storage, validation, protocol, and Dash-specific concerns. While theoretically possible, the implementation complexity and risk are substantial.
+
+**Recommendations**:
+
+1. **For Production**: Implement the **Hybrid Approach** (reverse preview + forward historical)
+   - Achieves the primary UX goal (fast visibility of recent transactions)
+   - Manageable complexity and risk
+   - Reuses existing infrastructure
+
+2. **For Research**: Prototype full reverse sync as a proof-of-concept
+   - Validate the feasibility of deferred validation
+   - Measure memory pressure with real data
+   - Test Dash-specific feature compatibility
+
+3. **Alternative UX Improvements** (lower-hanging fruit):
+   - Show an estimated balance based on headers + ChainLocks
+   - Display a "syncing" state with partial data
+   - Parallel sync of multiple block ranges (multi-peer)
+   - Faster header validation with batched PoW checks
+
+The **hybrid approach balances innovation with pragmatism**, delivering improved UX without the extreme engineering challenges of full reverse synchronization.
+
+---
+
+## References
+
+- **blockchain-sync-bip37.md** - Current synchronization implementation
+- **SPVBlockStore.java** (lines 40-200) - Ring buffer storage constraints
+- **AbstractBlockChain.java** (lines 130, 468) - Orphan block handling
+- **Peer.java** (lines 1595-1775) - Block download protocol
+- **DIP-16** - Headers-first synchronization stages

From d815803129886391c69ca97c04d7326b69d97fdf Mon Sep 17 00:00:00 2001
From: HashEngineering
Date: Wed, 14 Jan 2026 20:47:22 -0800
Subject: [PATCH 4/9] feat: improve wallet tx-tracking

---
 .../main/java/org/bitcoinj/wallet/Wallet.java | 20 +++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/core/src/main/java/org/bitcoinj/wallet/Wallet.java b/core/src/main/java/org/bitcoinj/wallet/Wallet.java
index 7fdc806df..143dd28e7 100644
--- a/core/src/main/java/org/bitcoinj/wallet/Wallet.java
+++ b/core/src/main/java/org/bitcoinj/wallet/Wallet.java
@@ -235,8 +235,8 @@ protected boolean removeEldestEntry(Map.Entry<Sha256Hash, Transaction> eldest) {
     // side effect of how the code is written (e.g. during re-orgs confidence data gets adjusted multiple times).
     private int onWalletChangedSuppressions;
     private boolean insideReorg;
-    private Map<Transaction, TransactionConfidence.Listener.ChangeReason> confidenceChanged;
-    private final ArrayList<Transaction> manualConfidenceChangeTransactions = Lists.newArrayList();
+    private final Map<Transaction, TransactionConfidence.Listener.ChangeReason> confidenceChanged;
+    private final HashMap<Transaction, Integer> manualConfidenceChangeTransactions = Maps.newHashMap();
     protected volatile WalletFiles vFileManager;
     // Object that is used to send transactions asynchronously when the wallet requires it.
     protected volatile TransactionBroadcaster vTransactionBroadcaster;
@@ -2655,7 +2655,7 @@ public void notifyNewBestBlock(StoredBlock block) throws VerificationException {
                 }
             }
         } else {
-            for (Transaction tx : manualConfidenceChangeTransactions) {
+            for (Transaction tx : manualConfidenceChangeTransactions.keySet()) {
                 if (ignoreNextNewBlock.contains(tx.getTxId())) {
                     // tx was already processed in receive() due to it appearing in this block, so we don't want to
                     // increment the tx confidence depth twice, it'd result in miscounting.
@@ -2670,7 +2670,8 @@
                     // included once again. We could have a separate was-in-chain-and-now-isn't confidence type
                     // but this way is backwards compatible with existing software, and the new state probably
                     // wouldn't mean anything different to just remembering peers anyway.
-                    if (confidence.incrementDepthInBlocks() > context.getEventHorizon())
+                    confidence.setDepthInBlocks(lastBlockSeenHeight - confidence.getAppearedAtChainHeight() + 1);
+                    if (confidence.getDepthInBlocks() > context.getEventHorizon())
                         confidence.clearBroadcastBy();
                     confidenceChanged.put(tx, TransactionConfidence.Listener.ChangeReason.DEPTH);
                 }
@@ -6587,7 +6588,7 @@ public void unlockOutput(TransactionOutPoint outPoint) {
     public void addManualNotifyConfidenceChangeTransaction(Transaction tx) {
         lock.lock();
         try {
-            manualConfidenceChangeTransactions.add(tx);
+            manualConfidenceChangeTransactions.merge(tx, 1, Integer::sum);
         } finally {
            lock.unlock();
         }
@@ -6596,7 +6597,14 @@ public void removeManualNotifyConfidenceChangeTransaction(Transaction tx) {
         lock.lock();
         try {
-            manualConfidenceChangeTransactions.remove(tx);
+            Integer count = manualConfidenceChangeTransactions.get(tx);
+            if (count != null) {
+                if (count == 1) {
+                    manualConfidenceChangeTransactions.remove(tx);
+                } else {
+                    manualConfidenceChangeTransactions.put(tx, count - 1);
+                }
+            }
         } finally {
             lock.unlock();
         }

From 83616189815237548d50f23aec29b1cc3f663f4b Mon Sep 17 00:00:00 2001
From: HashEngineering
Date: Wed, 14 Jan 2026 20:47:54 -0800
Subject: [PATCH 5/9] feat: offload dsq message processing to another thread

---
 .../coinjoin/CoinJoinClientManager.java       |  2 +-
 .../coinjoin/CoinJoinClientQueueManager.java  |  9 ++----
 .../coinjoin/utils/CoinJoinManager.java       | 32 ++++++++++++++++++-
 3 files changed, 35 insertions(+), 8 deletions(-)

diff --git a/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientManager.java b/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientManager.java
index 957393037..0547adc0e 100644
--- a/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientManager.java
+++ b/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientManager.java
@@ -368,7 +368,7 @@ public boolean trySubmitDenominate(MasternodeAddress mnAddr) {
     public boolean markAlreadyJoinedQueueAsTried(CoinJoinQueue dsq) {
         lock.lock();
         try {
-            for (CoinJoinClientSession session :deqSessions){
+            for (CoinJoinClientSession session : deqSessions) {
                 Masternode mnMixing;
                 if ((mnMixing = session.getMixingMasternodeInfo()) != null && mnMixing.getProTxHash().equals(dsq.getProTxHash())) {
                     dsq.setTried(true);
diff --git a/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientQueueManager.java b/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientQueueManager.java
index f1d5dccf3..943d626d4 100644
--- a/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientQueueManager.java
+++ b/core/src/main/java/org/bitcoinj/coinjoin/CoinJoinClientQueueManager.java
@@ -111,12 +111,9 @@ public void processDSQueue(Peer from, CoinJoinQueue dsq, boolean enable_bip61) {
 
         log.info("coinjoin: DSQUEUE: new {} from mn {}", dsq, dmn.getService().getAddr());
 
-        coinJoinManager.coinJoinClientManagers.values().stream().anyMatch(new Predicate<CoinJoinClientManager>() {
-            @Override
-            public boolean test(CoinJoinClientManager coinJoinClientManager) {
-                return coinJoinClientManager.markAlreadyJoinedQueueAsTried(dsq);
-            }
-        });
+        coinJoinManager.coinJoinClientManagers.values().stream().anyMatch(
+                coinJoinClientManager -> coinJoinClientManager.markAlreadyJoinedQueueAsTried(dsq)
+        );
 
         if (queueLock.tryLock()) {
             try {
diff --git a/core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java b/core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java
index 26355a8d7..7477db337 100644
--- a/core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java
+++ b/core/src/main/java/org/bitcoinj/coinjoin/utils/CoinJoinManager.java
@@ -57,6 +57,7 @@
 import org.bitcoinj.evolution.SimplifiedMasternodeListManager;
 import org.bitcoinj.quorums.ChainLocksHandler;
 import org.bitcoinj.quorums.QuorumRotationInfo;
+import org.bitcoinj.utils.ContextPropagatingThreadFactory;
 import org.bitcoinj.utils.Threading;
 import org.bitcoinj.wallet.Wallet;
 import org.bitcoinj.wallet.WalletEx;
@@ -68,6 +69,8 @@
 import java.util.ArrayList;
 import java.util.HashMap;
 import java.util.concurrent.Executor;
+import java.util.concurrent.ExecutorService;
+import java.util.concurrent.Executors;
 import java.util.concurrent.ScheduledExecutorService;
 import java.util.concurrent.ScheduledFuture;
 import java.util.concurrent.TimeUnit;
@@ -94,6 +97,8 @@ public class CoinJoinManager {
     private RequestKeyParameter requestKeyParameter;
     private RequestDecryptedKey requestDecryptedKey;
     private final ScheduledExecutorService scheduledExecutorService;
+    private final ExecutorService messageProcessingExecutor = Executors.newFixedThreadPool(5,
+            new ContextPropagatingThreadFactory("CoinJoin-MessageProcessor"));
 
     protected final ReentrantLock lock = Threading.lock("coinjoin-manager");
     private boolean finishCurrentSessions = false;
@@ -206,6 +211,19 @@ public void stop() {
             }
             stopAsync();
             finishCurrentSessions = false;
+
+            // Shutdown the message processing executor
+            messageProcessingExecutor.shutdown();
+            try {
+                if (!messageProcessingExecutor.awaitTermination(10, TimeUnit.SECONDS)) {
+                    log.warn("CoinJoin message processing executor did not terminate in time, forcing shutdown");
+                    messageProcessingExecutor.shutdownNow();
+                }
+            } catch (InterruptedException e) {
+                log.warn("Interrupted while waiting for message processing executor to terminate", e);
+                messageProcessingExecutor.shutdownNow();
+                Thread.currentThread().interrupt();
+            }
         } finally {
             lock.unlock();
         }
@@ -262,6 +280,10 @@ public void close() {
         if (masternodeGroup != null) {
             masternodeGroup.removePreMessageReceivedEventListener(preMessageReceivedEventListener);
         }
+        // Ensure executor is shut down
+        if (!messageProcessingExecutor.isShutdown()) {
+            messageProcessingExecutor.shutdown();
+        }
     }
 
     public boolean isMasternodeOrDisconnectRequested(MasternodeAddress address) {
@@ -483,7 +505,15 @@ public void processTransaction(Transaction tx) {
     }
 
     public final PreMessageReceivedEventListener preMessageReceivedEventListener = (peer, m) -> {
-        if (isCoinJoinMessage(m)) {
+        if (m instanceof CoinJoinQueue) {
+            // Offload DSQueue message processing to thread pool to avoid blocking network I/O thread
+            messageProcessingExecutor.execute(() -> {
+                processMessage(peer, m);
+            });
+            // Return null as dsq messages are only processed above
+            return null;
+        } else if (isCoinJoinMessage(m)) {
+            // Process other CoinJoin messages synchronously
             return processMessage(peer, m);
         }
         return m;

From 770ef1c31366129a1f573369500b7df63d61cbf8 Mon Sep 17 00:00:00 2001
From: HashEngineering
Date: Wed, 14 Jan 2026 20:56:42 -0800
Subject: [PATCH 6/9] chore: mark isSimple as deprecated

---
 core/src/main/java/org/bitcoinj/core/Transaction.java | 1 +
 1 file changed, 1 insertion(+)

diff --git a/core/src/main/java/org/bitcoinj/core/Transaction.java b/core/src/main/java/org/bitcoinj/core/Transaction.java
index f27e2def0..914b73493
100644 --- a/core/src/main/java/org/bitcoinj/core/Transaction.java +++ b/core/src/main/java/org/bitcoinj/core/Transaction.java @@ -1762,6 +1762,7 @@ public void setCoinJoinTransactionType(CoinJoinTransactionType coinJoinTransacti } /* returns false if inputs > 4 or there are less than the required confirmations */ + @Deprecated public boolean isSimple() { if(inputs.size() > MAX_INPUTS_FOR_AUTO_IX) return false; From 47e9a7a9277c0946fabb47678cf5a99676dbee81 Mon Sep 17 00:00:00 2001 From: HashEngineering Date: Fri, 16 Jan 2026 13:38:18 -0800 Subject: [PATCH 7/9] fix: update mainnet seeds --- .../org/bitcoinj/params/MainNetParams.java | 250 +++++++++--------- 1 file changed, 127 insertions(+), 123 deletions(-) diff --git a/core/src/main/java/org/bitcoinj/params/MainNetParams.java b/core/src/main/java/org/bitcoinj/params/MainNetParams.java index 6a2c014bf..e7f5cbae1 100644 --- a/core/src/main/java/org/bitcoinj/params/MainNetParams.java +++ b/core/src/main/java/org/bitcoinj/params/MainNetParams.java @@ -114,186 +114,190 @@ public MainNetParams() { 0x3461fad8, 0x2e4beed8, 0x7de8e6d8, - 0x089abdd8, - 0x3ed96bd8, - 0xaef9a8d5, + 0xb9d1dad5, + 0x22d1dad5, + 0x08d1dad5, + 0x7d0fabd5, + 0xdd4d9fd5, 0xd20034d4, - 0x806e18d4, - 0xdf6b18d4, + 0x6a5926d4, + 0xe16818d4, + 0x9a3091d1, + 0xce248dd1, + 0x2aa43ad1, + 0x5b6657d0, 0x28f7f4cf, - 0x6dd5a8ce, - 0xe2d4a8ce, - 0xb2d4a8ce, - 0x90d4a8ce, - 0xcb1205ca, - 0x40d3b5c3, + 0x4f0e47ca, + 0x211005ca, + 0xcb7fd3c6, + 0x307307c6, 0xd25f62c3, - 0xe4479ec2, - 0xd65187c2, + 0xc719eec2, + 0x9854b6c2, + 0x645487c2, 0xd69d05c2, - 0x3295a4c1, - 0x603b1dc1, - 0x15391dc1, + 0xc67fafc0, 0x5706a9c0, - 0x8c5340c0, - 0xb7c4d0bc, - 0xf3ed7fbc, - 0x28e67fbc, - 0x5edf44bc, + 0x43b228bc, 0xdb73f3b9, 0x9c53e4b9, - 0x8b7fd9b9, - 0x760dd8b9, - 0x2218d5b9, - 0xf928b9b9, + 0x8453e4b9, + 0x21eac6b9, + 0x0ceac6b9, + 0xd613c1b9, + 0x6828b9b9, + 0xd438afb9, + 0x9ad9a6b9, 0x75aba5b9, 0xdaa3a4b9, 0x55a3a4b9, - 0x22639bb9, 0x90d48eb9, - 0xc85087b9, - 0x078467b9, - 0x3d9557b9, - 0x91651cb9, + 0x04d88db9, + 0x35f970b9, + 0x31f970b9, + 0x41b66bb8, + 0x87edd7b2, + 0x86edd7b2, 0xe257d0b2, 0xd557d0b2, 0x0c029fb2, - 0xb35b9db2, - 0xb05b9db2, - 0xccfe80b2, + 0xb15b9db2, 0x81793fb2, - 0x75eb3eb2, - 0x107f7eb0, - 0x0f7f7eb0, + 0x568d5db1, + 0x3c8d5db1, 0x914166b0, - 0xcfe922ae, - 0xcee922ae, - 0xcce922ae, - 0xcbe922ae, - 0x7a15f9ad, - 0x151569ac, + 0x76f5d4ad, + 0x51f4ecac, + 0x4642e9ac, + 0xa69168ac, 0x045077a8, 0x10a958a7, + 0xafa558a7, + 0xaa2e58a7, 0x87ea16a5, - 0xdd17859b, + 0x2942aca3, + 0x1fbffaa2, + 0xcfbcfaa2, + 0xf811f6a2, + 0x6823d4a2, + 0x6423d4a2, + 0xd82ccb9f, + 0xa3794b9f, + 0x0ecaad9d, + 0x35903598, 0x06309e96, - 0xb014ef91, - 0x5fcdca8e, - 0x54e41285, + 0x80b72d93, + 0x63672d93, + 0xcb306792, + 0x02611c8b, + 0xbab6ff86, + 0xe2611285, 0xbae9a282, 0xfc783d82, - 0x9fb5c780, 0xa640c17b, - 0x6446eb6d, - 0xaa45eb6d, - 0x5f41eb6d, + 0xf0ee7873, + 0xf643eb6d, + 0xe9f0ad6d, + 0x4a03bd6b, + 0x4acab36b, + 0xa6ccae6b, 0x1609376a, - 0x7423ee68, - 0x7223ee68, - 0xf95fa067, - 0xe15fa067, - 0xdb5fa067, 0x2ec4d35f, + 0x22c4d35f, 0x20c4d35f, - 0x08c4d35f, - 0x2c35b75f, - 0x6234b75f, 0x8d33b75f, - 0xb94c155d, - 0x1f0b895b, + 0x8315ab5f, + 0xa16dac5e, + 0x27ac735d, + 0x25ac735d, + 0xb195c75b, 0x6049b359, - 0x0a137559, - 0xc6694959, - 0x57042859, - 0x56fd6257, - 0xca6bd755, - 0xbef1d155, - 0xbcf1d155, - 0x47f1d155, - 0x23f1d155, + 0x95832359, + 0xfa16e557, + 0x4018e457, + 0x80656b56, + 0xf5f1d155, + 0x5df1d155, + 0x57f1d155, + 0x56f1d155, + 0x0b9bf754, 0xccb3f254, + 0xc40c6254, 0x11320954, - 0xc119d352, 
+ 0xfd09de53, 0x6919d352, + 0xf015d352, 0xb315d352, 0x1715d352, 0x53e6ca52, + 0xc092b452, 0x33fae351, - 0xaaead150, - 0x5f1d8f4f, + 0x57e40e51, + 0x0893f950, + 0xe784f050, + 0x90e6d050, 0x0013534e, - 0x5984e84d, - 0x0484e84d, 0x0463df4d, - 0xf76b3d45, + 0xcc94dd4d, + 0x715a324a, + 0xa0263c48, 0xd76b3d45, + 0x34c4f542, 0x46f3f442, 0x45f3f442, - 0xa6e06e3a, - 0x79ea2536, + 0xaef43c3e, + 0x5b662434, 0xac092134, - 0x52c49f33, 0xeda99e33, - 0x409b4433, - 0x2a750f33, - 0xce600f33, - 0xc538f32f, - 0xa66d6d2f, - 0x1cf1fe2e, - 0x15f1fe2e, + 0xd2e64d33, + 0x3d8e2633, + 0x09f1fe2e, + 0x07f1fe2e, 0x06f1fe2e, 0x04f1fe2e, - 0x20f9fa2e, - 0x091f482e, + 0x0afefa2e, 0xf228242e, - 0xfbbd1e2e, - 0xd6bd1e2e, - 0xd5bd1e2e, - 0xbff10a2e, 0x7fa2042e, 0xc9138c2d, - 0xd95e5b2d, - 0x2aa3562d, + 0x594a842d, + 0x1e9c802d, 0x7a7a532d, - 0xcd284f2d, + 0x6a124f2d, 0xcfa94d2d, - 0x5b534c2d, - 0x689f472d, - 0x6c9e472d, - 0x3a9e472d, - 0x5a6b3f2d, - 0xdd383a2d, - 0x1818212d, - 0x40b60b2d, - 0x9afa082d, - 0x91f8082d, + 0x79ba3d2d, + 0x4f383a2d, + 0xa6e4202d, 0xd663f02c, 0x2e4de52b, - 0xa6684d25, - 0x6863941f, + 0x6df4a72b, + 0x5af0a72b, + 0x91efa72b, + 0x33fba32b, + 0x6c7d6626, + 0x3c526326, + 0x15526326, + 0x5c655b26, + 0xca645b26, + 0xd611fc25, + 0x15e36125, + 0x19b0f622, + 0x9463941f, + 0xbcbe391f, + 0x1fbe391f, 0x24610a1f, - 0xcb00a317, - 0x09f48b12, 0xc06aff05, - 0x1815fc05, - 0x34efbd05, - 0x5091bd05, - 0x2ccab505, - 0x10cab505, + 0xce77e605, + 0xb877e605, + 0xb777e605, + 0xfda4bd05, 0x4f6ea105, 0xf36d4f05, - 0x764a4e05, - 0x6f672305, - 0x4a672305, - 0x40672305, - 0x3a672305, - 0x22ed0905, + 0x19672305, + 0x13672305, + 0x3a490205, 0xbe430205, - 0x39f15203, - 0x41e02303, 0x2378e902, - 0xddd53802, - 0xdcd53802 + 0xdad53802 }; strSporkAddress = "Xgtyuk76vhuFW2iT7UAiHgNdWXCf3J34wh"; From 8cd4a3d599aefd88c389f0cff9f51cd2c05491b8 Mon Sep 17 00:00:00 2001 From: HashEngineering Date: Fri, 16 Jan 2026 13:38:36 -0800 Subject: [PATCH 8/9] fix: update mainnet seeds --- tools/src/main/python/nodes_main.txt | 250 ++++++++++++++------------- 1 file changed, 127 insertions(+), 123 deletions(-) diff --git a/tools/src/main/python/nodes_main.txt b/tools/src/main/python/nodes_main.txt index 36aa63efa..da572ba66 100644 --- a/tools/src/main/python/nodes_main.txt +++ b/tools/src/main/python/nodes_main.txt @@ -1,183 +1,187 @@ 216.250.97.52:9999 216.238.75.46:9999 216.230.232.125:9999 -216.189.154.8:9999 -216.107.217.62:9999 -213.168.249.174:9999 +213.218.209.185:9999 +213.218.209.34:9999 +213.218.209.8:9999 +213.171.15.125:9999 +213.159.77.221:9999 212.52.0.210:9999 -212.24.110.128:9999 -212.24.107.223:9999 +212.38.89.106:9999 +212.24.104.225:9999 +209.145.48.154:9999 +209.141.36.206:9999 +209.58.164.42:9999 +208.87.102.91:9999 207.244.247.40:9999 -206.168.213.109:9999 -206.168.212.226:9999 -206.168.212.178:9999 -206.168.212.144:9999 -202.5.18.203:9999 -195.181.211.64:9999 +202.71.14.79:9999 +202.5.16.33:9999 +198.211.127.203:9999 +198.7.115.48:9999 195.98.95.210:9999 -194.158.71.228:9999 -194.135.81.214:9999 +194.238.25.199:9999 +194.182.84.152:9999 +194.135.84.100:9999 194.5.157.214:9999 -193.164.149.50:9999 -193.29.59.96:9999 -193.29.57.21:9999 +192.175.127.198:9999 192.169.6.87:9999 -192.64.83.140:9999 -188.208.196.183:9999 -188.127.237.243:9999 -188.127.230.40:9999 -188.68.223.94:9999 +188.40.178.67:9999 185.243.115.219:9999 185.228.83.156:9999 -185.217.127.139:9999 -185.216.13.118:9999 -185.213.24.34:9999 -185.185.40.249:9999 +185.228.83.132:9999 +185.198.234.33:9999 +185.198.234.12:9999 +185.193.19.214:9999 +185.185.40.104:9999 
+185.175.56.212:9999 +185.166.217.154:9999 185.165.171.117:9999 185.164.163.218:9999 185.164.163.85:9999 -185.155.99.34:9999 185.142.212.144:9999 -185.135.80.200:9999 -185.103.132.7:9999 -185.87.149.61:9999 -185.28.101.145:9999 +185.141.216.4:9999 +185.112.249.53:9999 +185.112.249.49:9999 +184.107.182.65:9999 +178.215.237.135:9999 +178.215.237.134:9999 178.208.87.226:9999 178.208.87.213:9999 178.159.2.12:9999 -178.157.91.179:9999 -178.157.91.176:9999 -178.128.254.204:9999 +178.157.91.177:9999 178.63.121.129:9999 -178.62.235.117:9999 -176.126.127.16:9999 -176.126.127.15:9999 +177.93.141.86:9999 +177.93.141.60:9999 176.102.65.145:9999 -174.34.233.207:9999 -174.34.233.206:9999 -174.34.233.204:9999 -174.34.233.203:9999 -173.249.21.122:9999 -172.105.21.21:9999 +173.212.245.118:9999 +172.236.244.81:9999 +172.233.66.70:9999 +172.104.145.166:9999 168.119.80.4:9999 167.88.169.16:9999 +167.88.165.175:9999 +167.88.46.170:9999 165.22.234.135:9999 -155.133.23.221:9999 +163.172.66.41:9999 +162.250.191.31:9999 +162.250.188.207:9999 +162.246.17.248:9999 +162.212.35.104:9999 +162.212.35.100:9999 +159.203.44.216:9999 +159.75.121.163:9999 +157.173.202.14:9999 +152.53.144.53:9999 150.158.48.6:9999 -145.239.20.176:9999 -142.202.205.95:9999 -133.18.228.84:9999 +147.45.183.128:9999 +147.45.103.99:9999 +146.103.48.203:9999 +139.28.97.2:9999 +134.255.182.186:9999 +133.18.97.226:9999 130.162.233.186:9999 130.61.120.252:9999 -128.199.181.159:9999 123.193.64.166:9999 -109.235.70.100:9999 -109.235.69.170:9999 -109.235.65.95:9999 +115.120.238.240:9999 +109.235.67.246:9999 +109.173.240.233:9999 +107.189.3.74:9999 +107.179.202.74:9999 +107.174.204.166:9999 106.55.9.22:9999 -104.238.35.116:9999 -104.238.35.114:9999 -103.160.95.249:9999 -103.160.95.225:9999 -103.160.95.219:9999 95.211.196.46:9999 +95.211.196.34:9999 95.211.196.32:9999 -95.211.196.8:9999 -95.183.53.44:9999 -95.183.52.98:9999 95.183.51.141:9999 -93.21.76.185:9999 -91.137.11.31:9999 +95.171.21.131:9999 +94.172.109.161:9999 +93.115.172.39:9999 +93.115.172.37:9999 +91.199.149.177:9999 89.179.73.96:9999 -89.117.19.10:9999 -89.73.105.198:9999 -89.40.4.87:9999 -87.98.253.86:9999 -85.215.107.202:9999 -85.209.241.190:9999 -85.209.241.188:9999 -85.209.241.71:9999 -85.209.241.35:9999 +89.35.131.149:9999 +87.229.22.250:9999 +87.228.24.64:9999 +86.107.101.128:9999 +85.209.241.245:9999 +85.209.241.93:9999 +85.209.241.87:9999 +85.209.241.86:9999 +84.247.155.11:9999 84.242.179.204:9999 +84.98.12.196:9999 84.9.50.17:9999 -82.211.25.193:9999 +83.222.9.253:9999 82.211.25.105:9999 +82.211.21.240:9999 82.211.21.179:9999 82.211.21.23:9999 82.202.230.83:9999 +82.180.146.192:9999 81.227.250.51:9999 -80.209.234.170:9999 -79.143.29.95:9999 +81.14.228.87:9999 +80.249.147.8:9999 +80.240.132.231:9999 +80.208.230.144:9999 78.83.19.0:9999 -77.232.132.89:9999 -77.232.132.4:9999 77.223.99.4:9999 -69.61.107.247:9999 +77.221.148.204:9999 +74.50.90.113:9999 +72.60.38.160:9999 69.61.107.215:9999 +66.245.196.52:9999 66.244.243.70:9999 66.244.243.69:9999 -58.110.224.166:9999 -54.37.234.121:9999 +62.60.244.174:9999 +52.36.102.91:9999 52.33.9.172:9999 -51.159.196.82:9999 51.158.169.237:9999 -51.68.155.64:9999 -51.15.117.42:9999 -51.15.96.206:9999 -47.243.56.197:9999 -47.109.109.166:9999 -46.254.241.28:9999 -46.254.241.21:9999 +51.77.230.210:9999 +51.38.142.61:9999 +46.254.241.9:9999 +46.254.241.7:9999 46.254.241.6:9999 46.254.241.4:9999 -46.250.249.32:9999 -46.72.31.9:9999 +46.250.254.10:9999 46.36.40.242:9999 -46.30.189.251:9999 -46.30.189.214:9999 -46.30.189.213:9999 -46.10.241.191:9999 
46.4.162.127:9999 45.140.19.201:9999 -45.91.94.217:9999 -45.86.163.42:9999 +45.132.74.89:9999 +45.128.156.30:9999 45.83.122.122:9999 -45.79.40.205:9999 +45.79.18.106:9999 45.77.169.207:9999 -45.76.83.91:9999 -45.71.159.104:9999 -45.71.158.108:9999 -45.71.158.58:9999 -45.63.107.90:9999 -45.58.56.221:9999 -45.33.24.24:9999 -45.11.182.64:9999 -45.8.250.154:9999 -45.8.248.145:9999 +45.61.186.121:9999 +45.58.56.79:9999 +45.32.228.166:9999 44.240.99.214:9999 43.229.77.46:9999 -37.77.104.166:9999 -31.148.99.104:9999 +43.167.244.109:9999 +43.167.240.90:9999 +43.167.239.145:9999 +43.163.251.51:9999 +38.102.125.108:9999 +38.99.82.60:9999 +38.99.82.21:9999 +38.91.101.92:9999 +38.91.100.202:9999 +37.252.17.214:9999 +37.97.227.21:9999 +34.246.176.25:9999 +31.148.99.148:9999 +31.57.190.188:9999 +31.57.190.31:9999 31.10.97.36:9999 -23.163.0.203:9999 -18.139.244.9:9999 5.255.106.192:9999 -5.252.21.24:9999 -5.189.239.52:9999 -5.189.145.80:9999 -5.181.202.44:9999 -5.181.202.16:9999 +5.230.119.206:9999 +5.230.119.184:9999 +5.230.119.183:9999 +5.189.164.253:9999 5.161.110.79:9999 5.79.109.243:9999 -5.78.74.118:9999 -5.35.103.111:9999 -5.35.103.74:9999 -5.35.103.64:9999 -5.35.103.58:9999 -5.9.237.34:9999 +5.35.103.25:9999 +5.35.103.19:9999 +5.2.73.58:9999 5.2.67.190:9999 -3.82.241.57:9999 -3.35.224.65:9999 2.233.120.35:9999 -2.56.213.221:9999 -2.56.213.220:9999 +2.56.213.218:9999 \ No newline at end of file From 6d2722b20020c1bc0e2c32d40912a8b0a6388565 Mon Sep 17 00:00:00 2001 From: HashEngineering Date: Fri, 16 Jan 2026 13:38:53 -0800 Subject: [PATCH 9/9] fix: update testnet seeds --- .../java/org/bitcoinj/params/TestNet3Params.java | 14 +++++++++++--- tools/src/main/python/nodes_test.txt | 14 +++++++++++--- 2 files changed, 22 insertions(+), 6 deletions(-) diff --git a/core/src/main/java/org/bitcoinj/params/TestNet3Params.java b/core/src/main/java/org/bitcoinj/params/TestNet3Params.java index 5f55221f2..cc738120d 100644 --- a/core/src/main/java/org/bitcoinj/params/TestNet3Params.java +++ b/core/src/main/java/org/bitcoinj/params/TestNet3Params.java @@ -82,9 +82,17 @@ public TestNet3Params() { // updated with Dash Core 21.0.0 seed list addrSeeds = new int[] { - 0x2e4de52b, - 0xf7a74d2d, - 0xf9cb3eb2 + 0xaef43c3e, + 0xd85ed536, + 0x8992bf36, + 0x5945bc36, + 0x309a5934, + 0x7412a223, + 0x078f5c23, + 0xdac55b23, + 0x59865b23, + 0x1e86dc22, + 0xc31ad222 }; bip32HeaderP2PKHpub = 0x043587cf; diff --git a/tools/src/main/python/nodes_test.txt b/tools/src/main/python/nodes_test.txt index 519e04919..8fdc96e1a 100644 --- a/tools/src/main/python/nodes_test.txt +++ b/tools/src/main/python/nodes_test.txt @@ -1,3 +1,11 @@ -43.229.77.46:19999 -45.77.167.247:19999 -178.62.203.249:19999 +62.60.244.174:19999 +54.213.94.216:19999 +54.191.146.137:19999 +54.188.69.89:19999 +52.89.154.48:19999 +35.162.18.116:19999 +35.92.143.7:19999 +35.91.197.218:19999 +35.91.134.89:19999 +34.220.134.30:19999 +34.210.26.195:19999