network: handle empty wsPeer supplied to transaction handler #6195

Open
wants to merge 5 commits into base: master

Conversation

algorandskiy (Contributor)

Summary

There is a race between pubsub new-peer discovery and wsPeer registration, so the transaction handler can be called with a nil wsPeer and crash:

{"time":"2024-12-04T16:42:43.237595Z","log":"[signal SIGSEGV: segmentation violation code=0x1 addr=0xa0 pc=0x1a46d72]"}
{"time":"2024-12-04T16:42:43.237610Z","log":"goroutine 1012678170 [running]:"}
{"time":"2024-12-04T16:42:43.237617Z","log":"github.com/algorand/go-algorand/network.(*wsPeer).RoutingAddr(0xc02a57b588?)"}
{"time":"2024-12-04T16:42:43.237623Z","log":"\tgithub.com/algorand/go-algorand/network/wsPeer.go:387 +0x12"}
{"time":"2024-12-04T16:42:43.237628Z","log":"github.com/algorand/go-algorand/data.(*TxHandler).incomingTxGroupAppRateLimit(0xc0000fec60, {0xc0b02a6008, 0x1, 0x2}, {0x2c51360, 0x0})"}
{"time":"2024-12-04T16:42:43.237634Z","log":"\tgithub.com/algorand/go-algorand/data/txHandler.go:722 +0xcd"}

The suggested fix is to introduce a temporary gsPeer type that is good enough for the tx handler.
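A minimal sketch of that idea, assuming the tx handler only needs RoutingAddr from the peer it is handed; the package placement, import path, and field name are illustrative rather than the PR's exact code:

// Illustrative sketch only: a lightweight peer wrapper that gives the tx
// handler a non-nil peer before the full wsPeer registration completes.
package network

import "github.com/libp2p/go-libp2p/core/peer"

type gsPeer struct {
	peerID peer.ID
}

// RoutingAddr returns the gossipsub peer ID bytes as a stand-in routing
// address, so rate limiting has a stable identity to key on.
func (p *gsPeer) RoutingAddr() []byte {
	return []byte(p.peerID)
}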

Additional fixes:

  • Fix a potential data race on wsPeer's closers by adding a mutex that controls access to it.
  • Use Peer instead of wsPeer for the broadcastRequest.except comparison to get rid of a runtime type cast.

Test Plan

Added a test confirming that txTopicValidator does not call the tx handler with an empty wsPeer.


codecov bot commented Dec 10, 2024

Codecov Report

Attention: Patch coverage is 66.66667% with 6 lines in your changes missing coverage. Please review.

Project coverage is 51.91%. Comparing base (b7b3e5e) to head (08d91d6).
Report is 2 commits behind head on master.

Files with missing lines   Patch %   Lines
network/p2pNetwork.go      55.55%    4 Missing ⚠️
network/wsPeer.go          50.00%    2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #6195      +/-   ##
==========================================
+ Coverage   51.88%   51.91%   +0.02%     
==========================================
  Files         639      639              
  Lines       85489    85495       +6     
==========================================
+ Hits        44359    44382      +23     
+ Misses      38320    38301      -19     
- Partials     2810     2812       +2     


@@ -282,6 +282,8 @@ type wsPeer struct {

// closers is a slice of functions to run when the peer is closed
closers []func()
// closersMu synchronizes access to closers
closersMu deadlock.RWMutex

Oh interesting, so you spotted a race in wsNetwork when txHandler is processing multiple messages from the same peer

@gmalouf removed their assignment Dec 10, 2024
@@ -979,6 +976,8 @@ L:
}

}
wp.closersMu.RLock()
defer wp.closersMu.RUnlock()

Nice catch

@@ -1115,6 +1114,9 @@ func (wp *wsPeer) sendMessagesOfInterest(messagesOfInterestGeneration uint32, me
}

func (wp *wsPeer) OnClose(f func()) {
wp.closersMu.Lock()

Not directly related to this PR, but it's odd to me that the same type has both a Close() and OnClose() with minimal overlap.


wsPeer.OnClose(f) actually means "register a function f to be called when wsPeer closes."
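For context, a minimal sketch of the guarded register/run pattern this PR introduces, assuming the closers and closersMu fields shown in the diff above; runClosers is a hypothetical stand-in name for wherever the callbacks are actually invoked on shutdown:

// OnClose registers f to run when the peer closes; closersMu guards the slice
// against concurrent registration and the shutdown-time iteration.
func (wp *wsPeer) OnClose(f func()) {
	wp.closersMu.Lock()
	defer wp.closersMu.Unlock()
	wp.closers = append(wp.closers, f)
}

// runClosers (hypothetical name) is the shutdown-side counterpart: it iterates
// the registered callbacks under the read lock, matching the hunk above.
func (wp *wsPeer) runClosers() {
	wp.closersMu.RLock()
	defer wp.closersMu.RUnlock()
	for _, f := range wp.closers {
		f()
	}
}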

@gmalouf (Contributor) left a comment


I'm generally okay with this, pending the questions CCE asked.

I'd probably add one additional sentence or a title tweak highlighting the problem being solved (it's a race condition that folks ran into on mainnet as I recall, just took some digging to find).

@algorandskiy (Contributor Author)

What does the gs in gsPeer stand for?

gossip or gossipSub

}

func (p *gsPeer) RoutingAddr() []byte {
	return []byte(p.peerID)
}

Some day, we could consider calling net.Network().ConnsToPeer(p.peerID), and then getting an IP address here, but it seems like an expensive thing to do per-message, and we are likely going to have a wsPeer conn appear soon anyway that we can get the IP address from.
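For reference, a rough sketch of what that per-message lookup could look like with the libp2p API; the function name and its wiring into the network package are hypothetical, which also illustrates why it is better deferred to the eventual wsPeer:

// Hypothetical helper, not part of this PR.
package network

import (
	"net"

	"github.com/libp2p/go-libp2p/core/host"
	"github.com/libp2p/go-libp2p/core/peer"
	manet "github.com/multiformats/go-multiaddr/net"
)

// routingIPFromConns walks the open connections to a peer and returns the
// first remote address that resolves to an IP, or nil if none does.
func routingIPFromConns(h host.Host, id peer.ID) net.IP {
	for _, c := range h.Network().ConnsToPeer(id) {
		if ip, err := manet.ToIP(c.RemoteMultiaddr()); err == nil {
			return ip
		}
	}
	return nil
}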

@@ -1401,7 +1401,7 @@ func (wn *msgBroadcaster) innerBroadcast(request broadcastRequest, prio bool, pe
if wn.config.BroadcastConnectionsLimit >= 0 && sentMessageCount >= wn.config.BroadcastConnectionsLimit {
break
}
if peer == request.except {
if Peer(peer) == request.except {

This comparison still works right? I forgot we are just doing pointer comparison for except.

@algorandskiy (Contributor Author)

I did a standalone test but happy to write a unit test for this.
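A standalone illustration of the pointer-comparison semantics in question; the types here are simplified stand-ins for the real network.Peer and wsPeer:

package main

import "fmt"

// Peer stands in for the opaque interface used by broadcastRequest.except.
type Peer interface{}

type fakeWsPeer struct{ id int }

func main() {
	wp := &fakeWsPeer{id: 1}
	var except Peer = wp // except holds the *fakeWsPeer pointer

	// Two interface values compare equal when their dynamic types match and
	// the dynamic values (here, the pointers) are identical.
	fmt.Println(Peer(wp) == except)                 // true: same pointer
	fmt.Println(Peer(&fakeWsPeer{id: 1}) == except) // false: different pointer
}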
