Implement Sentry Mux #11793
Conversation
```go
for _, client := range m.clients {
	client := client
	g.Go(func() error {
```
I'm not sure the sends should go to all sentries in parallel, because we address a peer by its public key as the id, and the same peer may be running more than one sentry (one per protocol). For example, if another Erigon peer is running 2 sentries, for eth68 and eth67, we would hit that peer with 2 requests instead of 1, and it would have to respond twice with the same reply.
This may be a problem for requests like GetBlockBodies (we send these for ranges of 1024 blocks for checkpoints): the peer would have to respond twice instead of once, and the responses to these requests are not light.
I'm also noticing that the current SentryMultiClient implementation sends to the sentries in sequential order, checks whether the send succeeded, and stops at the first success; if it did not succeed it moves on to the next sentry - https://github.com/erigontech/erigon/blob/main/p2p/sentry/sentry_multi_client/sentry_api.go#L88-L91
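The sequential-fallback behaviour described above can be sketched as follows. This is a simplified illustration, not erigon's actual code: `SentryClient`, `Send`, and `fakeClient` are hypothetical stand-ins for the real sentry gRPC interface.

```go
package main

import "fmt"

// SentryClient is a hypothetical stand-in for a sentry gRPC client;
// Send returns the number of peers the message was delivered to.
type SentryClient interface {
	Send(msg string) (sentPeers int, err error)
}

// fakeClient simulates a sentry with a fixed number of connected peers.
type fakeClient struct{ peers int }

func (c fakeClient) Send(msg string) (int, error) {
	if c.peers == 0 {
		return 0, fmt.Errorf("no peers")
	}
	return c.peers, nil
}

// sendSequential mirrors the behaviour described above: try each sentry in
// order and stop at the first successful send, so a peer reachable through
// several sentries is hit at most once per message.
func sendSequential(clients []SentryClient, msg string) (int, error) {
	var lastErr error
	for _, c := range clients {
		sent, err := c.Send(msg)
		if err != nil {
			lastErr = err
			continue // this sentry failed; fall back to the next one
		}
		if sent > 0 {
			return sent, nil // delivered; do not duplicate to remaining sentries
		}
	}
	return 0, lastErr
}

func main() {
	clients := []SentryClient{fakeClient{0}, fakeClient{3}, fakeClient{5}}
	sent, err := sendSequential(clients, "GetBlockBodies")
	fmt.Println(sent, err) // 3 <nil> — stops at the second sentry
}
```

A parallel fan-out, by contrast, would send through every sentry unconditionally, which is exactly the duplicate-request concern raised above.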
```diff
 g.Go(func() error {
-	sentPeers, err := client.SendMessageToRandomPeers(gctx, in, opts...)
+	sentPeers, err := m.clients[peer.clientIndex].SendMessageById(gctx, &sentryproto.SendMessageByIdRequest{
```
It seems like we are not preserving the meaning of SendMessageToRandomPeers and are making it non-random. We should probably shuffle the peers list in the multiplexer before this for loop (when its size is > MaxPeers) to preserve the randomness.
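The suggested fix could look roughly like this. It is only a sketch: `pickRandomPeers`, `peerIDs`, and `maxPeers` are illustrative names, not erigon's actual identifiers.

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickRandomPeers shuffles the combined peer list before truncating it to
// maxPeers, so the selection stays random even though the multiplexer
// iterates its sentries in a fixed order.
func pickRandomPeers(peerIDs []string, maxPeers int) []string {
	shuffled := append([]string(nil), peerIDs...) // copy; don't mutate the caller's slice
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	if len(shuffled) > maxPeers {
		shuffled = shuffled[:maxPeers]
	}
	return shuffled
}

func main() {
	peers := []string{"p1", "p2", "p3", "p4", "p5"}
	fmt.Println(pickRandomPeers(peers, 3)) // a random 3-element subset
}
```

Without the shuffle, truncating the aggregated list always favours peers from the sentries listed first, which is what makes the current loop non-random.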
This implements a pass-through multiplexer which forwards sentry gRPC requests to multiple sentry instances.
This change also rationalizes the use of direct vs gRPC sentry interfaces: all sentry consumers now work with the base gRPC interface rather than the direct implementation.
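The pass-through idea can be sketched as a type that exposes the same interface as a single sentry and fans each call out to all underlying clients. This is a minimal illustration using stdlib concurrency only; `mux`, `sender`, and `PeerCount` are assumed names, not erigon's actual API.

```go
package main

import (
	"fmt"
	"sync"
)

// sender is a tiny stand-in for the sentry client interface.
type sender interface {
	PeerCount() int
}

// mux presents a fleet of sentries as one logical sentry: each call is
// forwarded to every underlying client and the results are aggregated.
type mux struct {
	clients []sender
}

// PeerCount fans the call out to all sentries concurrently and sums the
// per-sentry counts.
func (m *mux) PeerCount() int {
	var (
		wg    sync.WaitGroup
		mu    sync.Mutex
		total int
	)
	for _, c := range m.clients {
		c := c // capture loop variable (needed before Go 1.22)
		wg.Add(1)
		go func() {
			defer wg.Done()
			n := c.PeerCount()
			mu.Lock()
			total += n
			mu.Unlock()
		}()
	}
	wg.Wait()
	return total
}

// stubSentry simulates one sentry instance with a fixed peer count.
type stubSentry struct{ peers int }

func (s stubSentry) PeerCount() int { return s.peers }

func main() {
	m := &mux{clients: []sender{stubSentry{2}, stubSentry{3}}}
	fmt.Println(m.PeerCount()) // 5
}
```

Because the mux implements the same interface as a single client, consumers that depend only on the base gRPC interface need no changes to work against many sentries at once.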