Merged
18 changes: 9 additions & 9 deletions src/net_processing.cpp
@@ -814,7 +814,7 @@ class PeerManagerImpl final : public PeerManager

/** Protects m_peer_map. This mutex must not be locked while holding a lock
* on any of the mutexes inside a Peer object. */
-mutable Mutex m_peer_mutex;
+mutable SharedMutex m_peer_mutex;

@coderabbitai bot Oct 21, 2025


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

SharedMutex swap looks good; confirm macro support/fairness.

The change to SharedMutex is appropriate for read-heavy access. Please confirm:

  • READ_LOCK(...) is defined by our sync wrappers and mapped to a shared-lock type.
  • No writer starvation policy surprises on our platforms (clang/gcc toolchains).

🏁 Script executed:

#!/bin/bash
# Ensure shared-lock macros/types exist and are used consistently
rg -nP '(^|[^A-Z_])READ_LOCK\s*\(' --type=cpp
rg -nP 'class|using\s+SharedMutex|struct\s+SharedMutex' --type=cpp
rg -nP 'LOCK_SHARED\s*\(' --type=cpp

Length of output: 205545


I can confirm the macro support is in place. Now I need to verify the actual code at the flagged lines to assess the lifetime hazard mentioned in the scratchpad.



🏁 Script executed:

#!/bin/bash
# Inspect RelayAddress and _RelayTransaction code context for potential UAF issues
echo "=== RelayAddress context (line 2515-2528) ===" 
sed -n '2510,2535p' src/net_processing.cpp

echo ""
echo "=== _RelayTransaction context (line 2463-2471) ===" 
sed -n '2458,2480p' src/net_processing.cpp

echo ""
echo "=== Check if PeerRef is defined/used ===" 
rg -n "struct PeerRef|class PeerRef|using PeerRef" src/net_processing.cpp

Length of output: 1797


🏁 Script executed:

#!/bin/bash
# Inspect RelayAddress and _RelayTransaction code context for potential UAF issues
echo "=== RelayAddress context (line 2515-2528) ===" 
sed -n '2510,2535p' src/net_processing.cpp

echo ""
echo "=== _RelayTransaction context (line 2463-2471) ===" 
sed -n '2458,2480p' src/net_processing.cpp

echo ""
echo "=== Check if PeerRef is used and defined ===" 
rg -n "PeerRef|using.*Ref" src/net_processing.cpp | head -20

Length of output: 2935


Fix the lifetime hazard in RelayAddress where raw peer pointers are used after releasing the lock.

The `READ_LOCK(...)` macro is properly supported (defined at src/sync.h:343 and mapped to `ReadLock<decltype(cs)>` on `std::shared_mutex`). However, `RelayAddress` stores raw `Peer*` pointers in the `best` array at line 2522, then uses them after the lock scope ends at line 2532. If a peer is removed from `m_peer_map` between lock release and the `PushAddress` call, this becomes a use-after-free.

Lines 2515–2528 (`RelayAddress`):
Capture `PeerRef` (a `shared_ptr`, line 439) instead of raw pointers to extend peer lifetime beyond the lock. Replace `peer.get()` with the `shared_ptr` directly, or collect into a `std::vector<PeerRef>` as `AskPeersForTransaction` does (line 2310).

The other `READ_LOCK` sites (`_RelayTransaction`, `RelayRecoveredSig`) are safe because all peer access occurs within the lock scope.

🤖 Prompt for AI Agents
src/net_processing.cpp lines 2515-2528: RelayAddress currently collects raw
Peer* pointers into `best` while holding the READ_LOCK, then uses them after the
lock is released causing a potential use-after-free; change the collection to
capture PeerRef (the shared_ptr alias used for peers) instead of raw pointers
(e.g., gather a std::vector<PeerRef> while inside the READ_LOCK), replace later
uses of `peer.get()` with the stored PeerRef, and then call PushAddress/other
post-lock APIs using the preserved PeerRef so peers remain alive after the lock
is released.

Member Author

Please look at this again:

For the first case, it appears to me the READ_LOCK is held through the destruction of the best array. I don't see a UAF.


@PastaPastaPasta You’re right — in RelayAddress the READ_LOCK(m_peer_mutex) is acquired in the function scope and remains held through both loops, including the PushAddress calls. With the shared lock held, m_peer_map cannot be modified by writers, so the Peer* captured into best stays valid during its use. No UAF there. My earlier concern was unfounded — thanks for the nudge.

Optional: if you ever want to reduce the time m_peer_mutex is held, you could collect PeerRef entries under the read lock, release it, then call PushAddress on those refs. Not necessary for correctness here, just a potential micro-optimization.

/**
* Map of all Peer objects, keyed by peer id. This map is protected
* by the m_peer_mutex. Once a shared pointer reference is
@@ -1723,7 +1723,7 @@ void PeerManagerImpl::FinalizeNode(const CNode& node) {

PeerRef PeerManagerImpl::GetPeerRef(NodeId id) const
{
-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);
auto it = m_peer_map.find(id);
return it != m_peer_map.end() ? it->second : nullptr;
}
@@ -2292,7 +2292,7 @@ bool PeerManagerImpl::AlreadyHaveBlock(const uint256& block_hash)

void PeerManagerImpl::SendPings()
{
-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);
for(auto& it : m_peer_map) it.second->m_ping_queued = true;
}

@@ -2302,7 +2302,7 @@ void PeerManagerImpl::AskPeersForTransaction(const uint256& txid)
peersToAsk.reserve(4);

{
-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);
// TODO consider prioritizing MNs again, once that flag is moved into Peer
for (const auto& [_, peer] : m_peer_map) {
if (peersToAsk.size() >= 4) {
@@ -2358,7 +2358,7 @@ void PeerManagerImpl::RelayInv(const CInv& inv, const int minProtoVersion)

void PeerManagerImpl::RelayInv(const CInv& inv)
{
-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);
for (const auto& [_, peer] : m_peer_map) {
if (!peer->GetInvRelay()) continue;
PushInv(*peer, inv);
@@ -2370,7 +2370,7 @@ void PeerManagerImpl::RelayDSQ(const CCoinJoinQueue& queue)
CInv inv{MSG_DSQ, queue.GetHash()};
std::vector<NodeId> nodes_send_all;
{
-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);
for (const auto& [nodeid, peer] : m_peer_map) {
switch (peer->m_wants_dsq) {
case Peer::WantsDSQ::NONE:
@@ -2453,7 +2453,7 @@ void PeerManagerImpl::RelayTransaction(const uint256& txid)
void PeerManagerImpl::_RelayTransaction(const uint256& txid)
{
const CInv inv{m_dstxman.GetDSTX(txid) ? MSG_DSTX : MSG_TX, txid};
-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);
for(auto& it : m_peer_map) {
Peer& peer = *it.second;
auto tx_relay = peer.GetTxRelay();
@@ -2466,7 +2466,7 @@ void PeerManagerImpl::RelayRecoveredSig(const uint256& sigHash)
void PeerManagerImpl::RelayRecoveredSig(const uint256& sigHash)
{
const CInv inv{MSG_QUORUM_RECOVERED_SIG, sigHash};
-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);
for (const auto& [_, peer] : m_peer_map) {
if (peer->m_wants_recsigs) {
PushInv(*peer, inv);
@@ -2504,7 +2504,7 @@ void PeerManagerImpl::RelayAddress(NodeId originator,
std::array<std::pair<uint64_t, Peer*>, 2> best{{{0, nullptr}, {0, nullptr}}};
assert(nRelayNodes <= best.size());

-LOCK(m_peer_mutex);
+READ_LOCK(m_peer_mutex);

for (auto& [id, peer] : m_peer_map) {
if (peer->m_addr_relay_enabled && id != originator && IsAddrCompatible(*peer, addr)) {