bank vat updates for hundreds of accounts in a block is slow #3629
Comments
Backpressure is a useful goal, but in this particular case the volume of balance notifications to the JS layer caused by Cosmos bank sends is probably not going to be reduced, especially on a production chain.
@dckc and I thought of an optimisation that involves polling from JS to the Cosmos layer. If we informed the Cosmos layer when the JS layer is waiting on a fresh balance update, and dropped updates whose JS clients are not keeping up, we could reduce the amount of traffic, especially when the JS layer is not waiting at all (i.e. there is no JS client because it hasn't been provisioned, or the JS client is not currently running).
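For illustration only, here is a rough sketch of that "pull, don't push" idea; the names (`BalanceFeed`, `onBalanceChange`, `pollBalances`) are hypothetical and not the actual vbank bridge API. The point is that only the latest balance per account is retained, and a batch is delivered only when a JS-side consumer is actually waiting, so intermediate updates are coalesced or dropped.

```ts
// Hypothetical sketch: keep only the latest balance per account and
// deliver a batch only when the JS side has asked for one.
type Address = string;

class BalanceFeed {
  private latestBalances = new Map<Address, bigint>();
  // At most one outstanding waiter in this sketch; a new poll replaces it.
  private pendingPoll?: (updates: Map<Address, bigint>) => void;

  // Cosmos-side hook: record the newest balance, overwriting older ones.
  onBalanceChange(addr: Address, balance: bigint): void {
    this.latestBalances.set(addr, balance);
    this.flushIfWaiting();
  }

  // JS-side hook: resolves only when there is something new to report.
  pollBalances(): Promise<Map<Address, bigint>> {
    return new Promise(resolve => {
      this.pendingPoll = resolve;
      this.flushIfWaiting();
    });
  }

  private flushIfWaiting(): void {
    const waiter = this.pendingPoll;
    if (waiter && this.latestBalances.size > 0) {
      const batch = this.latestBalances;
      this.latestBalances = new Map();
      this.pendingPoll = undefined;
      waiter(batch);
    }
  }
}
```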
@JimLarson, @warner, @erights, any thoughts on this solution?
Up to 5000 accounts updated per delivery; a bit more data (see also #3459 (comment)) from just the first 100 deliveries in blocks 68817 to 69707.
Decision: change the vBank notifier to have a smaller (than unlimited!) batch size. Aim for a few KB, 10 KB max; a KB is about 200 accounts being updated. We want to do about 10 to 100 pieces of work per message.
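A minimal sketch of what bounded batching could look like, for illustration only; the `AccountUpdate` shape, `batchUpdates`, and the commented `sendToBridge` usage are assumptions, not the actual vbank code.

```ts
// Hypothetical sketch: split a large set of account updates into
// bounded batches so no single message covers thousands of accounts.
interface AccountUpdate {
  address: string;
  denom: string;
  amount: string;
}

function* batchUpdates(
  updates: AccountUpdate[],
  batchSize = 200, // roughly 1 KB per the estimate above (assumption)
): Generator<AccountUpdate[]> {
  for (let i = 0; i < updates.length; i += batchSize) {
    yield updates.slice(i, i + batchSize);
  }
}

// Usage (hypothetical): deliver each bounded batch as its own message
// instead of one unbounded delivery.
// for (const batch of batchUpdates(allUpdates)) {
//   await sendToBridge({ type: 'VBANK_BALANCE_UPDATE', updated: batch });
// }
```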
Test scenario: do a bunch of cosmos-level sends.
Did some testing: I created a single Tx that sent to 5K destination addresses on ollinet. At the time, the typical time between blocks was 5-6 seconds. One or two blocks after one of these big Txs, there was a single block where the time went up to almost 20 seconds, then it went back to normal. The transaction was over 1 MB (in JSON) and racked up 100 million in gas. The command-line tools balked at creating a Tx with 10k sends in it. My thought is that this is a whole lot of effort for a pretty mild disruption, and a nominal Tx size fee might be enough to dissuade such an attack, or at least make it less attractive than other avenues for causing grief.
Thanks for testing, @JimLarson. @Tartuffo, @dtribble, and @arirubinstein concur that this level of performance is OK.
Describe the bug
With a simulated threshold of 8M computrons per block, we project that deliveries from agorictest-16 may have been delayed by up to 72 blocks (#3459 (comment)). Looking into what's going on in those blocks, @michaelfig and I saw traffic with a few hundred account updates per block.
Without a limit on compute per block (#3459), this caused blocks of up to 32 seconds.
To Reproduce
Presumably the agorictest-16 behavior is reproducible.
Expected behavior
???
Should there be back-pressure that causes the transactions to sit longer in the mempool or get dropped?
Or can we optimize this situation?
Platform Environment
agorictest-16
Additional context
#3459
cc @warner