op-batcher: Move decision about data availability type to channel submission time #12002
Conversation
We should definitely add a few unit tests of TxData() and the requeue function (currently called Rebuild).
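A minimal sketch of the kind of unit test being suggested, using toy stand-in types rather than the real channelManager API (toyManager, Requeue, and TxData here are illustrative assumptions, not the batcher's actual signatures):

```go
package batcher

import (
	"io"
	"testing"
)

// toyChannel stands in for a channel holding data pending submission.
type toyChannel struct {
	pendingData [][]byte
	discarded   bool
}

// toyManager stands in for the channel manager under test.
type toyManager struct {
	current *toyChannel
}

// Requeue discards the current (unsubmitted) channel so its blocks can be
// rebuilt under a different channel config.
func (m *toyManager) Requeue() {
	if m.current != nil {
		m.current.discarded = true
		m.current = nil
	}
}

// TxData returns the next piece of data to submit, or io.EOF when the current
// channel has nothing left (or has been discarded by Requeue).
func (m *toyManager) TxData() ([]byte, error) {
	if m.current == nil || m.current.discarded || len(m.current.pendingData) == 0 {
		return nil, io.EOF
	}
	d := m.current.pendingData[0]
	m.current.pendingData = m.current.pendingData[1:]
	return d, nil
}

func TestTxDataAfterRequeue(t *testing.T) {
	m := &toyManager{current: &toyChannel{pendingData: [][]byte{{0x01}}}}

	// Before requeueing, data from the pending channel is returned.
	if _, err := m.TxData(); err != nil {
		t.Fatalf("expected tx data, got %v", err)
	}

	// After Requeue discards the channel, TxData must not serve its data.
	m.current = &toyChannel{pendingData: [][]byte{{0x02}}}
	m.Requeue()
	if _, err := m.TxData(); err != io.EOF {
		t.Fatalf("expected io.EOF after requeue, got %v", err)
	}
}
```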
If the Data Availability type is determined at submission time, how are […]
In dynamic DA mode, the batcher maintains two configs (one for each DA type: blobs and calldata). One of those configs is the default (i.e. initially blobs, but generally whatever was used when the previous channel was submitted). The channel can then be built optimistically with the default config. When it comes time to submit the channel, if the data availability type needs to change, channels which haven't been submitted will be discarded and rebuilt with the appropriate config (including params like […])
Yeah it makes sense, thank you!
Worth adding that we're effectively still choosing the config at the same point in time as when we create the channel. That's because right after a channel is submitted, a new channel is created, and we then choose the config that was selected just 2 seconds ago.
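To make the flow above concrete, here is a minimal, self-contained Go sketch; it is not the op-batcher's actual types (ConfigProvider, ChoiceAtSubmission, and the field names are illustrative assumptions). It shows keeping one config per DA type, building channels optimistically with the cached default, and discarding/rebuilding an unsubmitted channel when the choice flips at submission time:

```go
package main

import "fmt"

// DAType is the data availability type used for a channel's frames.
type DAType string

const (
	DABlobs    DAType = "blobs"
	DACalldata DAType = "calldata"
)

// ChannelConfig is a pared-down stand-in for the batcher's channel config.
type ChannelConfig struct {
	DA              DAType
	MaxFrameSize    uint64
	TargetNumFrames int
}

// ConfigProvider holds one config per DA type plus the cached default,
// i.e. whichever config was used when the previous channel was submitted.
type ConfigProvider struct {
	blobs, calldata ChannelConfig
	defaultDA       DAType
}

// Default returns the config new channels are built with optimistically.
func (p *ConfigProvider) Default() ChannelConfig {
	if p.defaultDA == DACalldata {
		return p.calldata
	}
	return p.blobs
}

// ChoiceAtSubmission picks the DA type at channel submission time (here driven
// by a caller-supplied economics check) and caches it as the new default for
// subsequently created channels.
func (p *ConfigProvider) ChoiceAtSubmission(preferCalldata bool) ChannelConfig {
	if preferCalldata {
		p.defaultDA = DACalldata
	} else {
		p.defaultDA = DABlobs
	}
	return p.Default()
}

func main() {
	p := &ConfigProvider{
		blobs:     ChannelConfig{DA: DABlobs, MaxFrameSize: 130_000, TargetNumFrames: 6},
		calldata:  ChannelConfig{DA: DACalldata, MaxFrameSize: 120_000, TargetNumFrames: 1},
		defaultDA: DABlobs,
	}

	building := p.Default() // channel built optimistically with the default config

	// At submission time the DA choice may flip; if it differs from the config
	// the channel was built with, the unsubmitted channel is discarded and its
	// blocks are requeued and rebuilt with the newly chosen config.
	chosen := p.ChoiceAtSubmission(true /* e.g. blob fees currently exceed calldata cost */)
	if chosen.DA != building.DA {
		fmt.Printf("discarding channel built for %s, rebuilding with %s\n", building.DA, chosen.DA)
	}
}
```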
move data availability config decision to channel submission time instead of channel creation time; also, cache the ChannelConfig whenever we switch DA type so it is used by default for new channels
Force-pushed from 9a46625 to cbf5751.
do not return nextTxData from channel which was discarded by requeue
thanks for adding the additional tests!
Semgrep found 1 finding: Named return arguments to functions must be appended with an underscore (…)
remove ErrInsufficientData; replace with io.EOF as before
Force-pushed from 5da2bce to 510464d.
Semgrep found 2 findings: Inputs to functions must be prepended with an underscore (…)
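For reference, a tiny illustration of the naming convention the two Semgrep findings describe; the function and argument names are hypothetical, and this only shows what "prepended/appended with an underscore" means, not the code that was actually flagged:

```go
package main

import "fmt"

// Inputs are prepended with an underscore; the named return is appended with one.
func scaleFee(_baseFee uint64, _multiplier uint64) (scaledFee_ uint64) {
	scaledFee_ = _baseFee * _multiplier
	return scaledFee_
}

func main() {
	fmt.Println(scaleFee(7, 3)) // prints 21
}
```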
op-batcher: Move decision about data availability type to channel submission time (ethereum-optimism#12002)
* tidy up godoc
* move data availability config decision to channel submission time instead of channel creation time; also, cache the ChannelConfig whenever we switch DA type so it is used by default for new channels
* fix test
* formatting changes
* respond to PR comments
* add unit test for Requeue method
* reduce number of txs in test block
* improve test (more blocks in queue)
* hoist pending tx management up
* wip
* tidy up test
* wip
* fix
* refactor to do requeue before calling nextTxData
* introduce ErrInsufficientData; do not return nextTxData from channel which was discarded by requeue
* run test until nonzero data is returned by TxData
* break up and improve error logic
* fix test to anticipate ErrInsufficientData
* after requeuing, call nextTxData again
* remove unnecessary checks
* move err declaration to top of file
* add some comments and whitespace
* hoist lock back up to TxData
* rename variable to blocksToRequeue
* remove panic
* add comment
* use deterministic rng and nonecompressor in test
* test: increase block size to fill channel more quickly
* remove ErrInsufficientData; replace with io.EOF as before
* tidy up
* typo
Closes #11609
For now, I decided not to make any changes to the decision logic itself (only to when it is triggered and to what happens when a change is necessary).
Changing the trigger for the decision opens the opportunity to make a more accurate decision about which DA type to choose, since some transaction data is now in scope. However, this would not address the larger assumption in the current logic, which concerns how many blobs are typically included in a tx. The actual number can be lower than the target on low-throughput chains, since channels must be closed before they time out (i.e. before they are full). To address this assumption, we could store the previous number of blobs per tx to make a better estimate than the current one (which assumes we always include the target number). This is saved for future work.
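A rough sketch of that future-work idea, assuming a hypothetical estimator that tracks recent blobs-per-tx instead of always assuming the target (the names and the moving-average choice are illustrative, not the batcher's actual logic):

```go
package main

import "fmt"

// blobsPerTxEstimator remembers how many blobs recent txs actually used and
// uses that as the estimate, falling back to the configured target when there
// is no history (today's behaviour).
type blobsPerTxEstimator struct {
	targetNumBlobs int   // configured target (the current assumption)
	history        []int // observed blobs per submitted tx
	maxHistory     int
}

// record notes how many blobs a submitted tx actually contained.
func (e *blobsPerTxEstimator) record(numBlobs int) {
	e.history = append(e.history, numBlobs)
	if len(e.history) > e.maxHistory {
		e.history = e.history[1:]
	}
}

// estimate returns the expected blobs per tx: the recent average if we have
// observations, otherwise the configured target.
func (e *blobsPerTxEstimator) estimate() float64 {
	if len(e.history) == 0 {
		return float64(e.targetNumBlobs)
	}
	sum := 0
	for _, n := range e.history {
		sum += n
	}
	return float64(sum) / float64(len(e.history))
}

func main() {
	e := &blobsPerTxEstimator{targetNumBlobs: 6, maxHistory: 10}
	fmt.Println(e.estimate()) // 6: falls back to the target with no history
	e.record(2)               // low-throughput chain: channels close early
	e.record(3)
	fmt.Println(e.estimate()) // 2.5: a better input to the DA cost comparison
}
```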
Tests: the modified implementation passes existing tests, including an end-to-end test for switching DA type.