[BOUNTY] Community Improvement Bounty - libp2p - Golang gossipsub profiling and optimization #18
Comments
Note - The Community Improvement Bounties are funded by the ETHBerlin fundraising efforts. If your project needs these bounties to work better with libp2p, consider helping out. High incentives make happy hackers!
Protocol Labs is chipping in 250 EUR towards this bounty.
The Ethereum Foundation is chipping in 500 EUR for this bounty!
Looks like I will give this a try since nobody else is picking it up. I'm with the EF working on Go state transition things, not networking, but familiar with libp2p at a surface level. Will dig into it now, wish me luck 🤞
Salutations from Señor Tupac, esq. to @protolambda - fingers crossed.
Hackathon submission: https://github.com/protolambda/go-libp2p-gossip-berlin/
Devpost: https://devpost.com/software/go-libp2p-gossip-berlin
It was fun learning the gossipsub spec in more detail, along with go-libp2p usage. Sadly I couldn't find any opportunity to optimize libp2p code, as it seems to be primarily bottlenecked by crypto: verifying gossipsub messages, and hashing for larger messages.
gossipsub profiling and optimization
Hackers, find and fix bottlenecks and performance hotspots in the Go implementation of gossipsub, and win 1750 EUR! 🤑
See devgrant 7 in the libp2p/devgrants tracker: libp2p/devgrants#7
How to qualify
Do a first pass eyeballing the code to try to spot obvious CPU or allocation bottlenecks.
Find a way to deploy a small cluster of gossipsub nodes, e.g. 20 nodes. Start with multiple nodes inside a single process. Note that we're not trying to benchmark gossipsub at scale; we're attempting to stress a limited number of instances to uncover bottlenecks and hotspots.
Make sure you control resource allocation and sandbox them appropriately. Connect those nodes in different patterns and subject them to stress.
Extract CPU profiles, heap dumps, allocation traces, etc. via pprof, and analyse them manually to spot problems.
Extra points if you can correlate your findings with message deliverability and time to delivery in a Cumulative Distribution Function. We love charts!
Open issues under go-libp2p-pubsub for each bottleneck/hotspot you spot, attaching charts and evidence.
Very important: submit fixes for the things you find! If you can measure how your changes impacted the performance of gossipsub, top-notch stuff! 👍
Resources
See https://github.com/ethberlinzwei/KnowledgeBase/blob/master/resources/libp2p.md, and use @raulk as a walking encyclopedia for all things libp2p.
Judging Criteria
Prizes