Block Delay Parameter #171
Conversation
LGTM, what does the upgrade process for this look like via the x/upgrade module? Will chains need to add the setting of this parameter into their scripts?
```go
// DefaultTimePerBlock is the default value for maximum time
// expected per block.
const DefaultTimePerBlock = time.Hour
```
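As a rough sketch of how a default like this would typically feed into module parameters (the `Params` type and `MaxExpectedTimePerBlock` field name below are assumptions for illustration, stored in nanoseconds, not an excerpt from the PR):

```go
package main

import (
	"fmt"
	"time"
)

// DefaultTimePerBlock is the default value for the maximum time expected per block.
const DefaultTimePerBlock = time.Hour

// Params is a simplified stand-in for the connection parameters discussed in
// this PR; the real type and field name may differ.
type Params struct {
	// MaxExpectedTimePerBlock is stored in nanoseconds.
	MaxExpectedTimePerBlock uint64
}

// DefaultParams returns parameters populated with the default above.
func DefaultParams() Params {
	return Params{MaxExpectedTimePerBlock: uint64(DefaultTimePerBlock)}
}

func main() {
	fmt.Println(DefaultParams().MaxExpectedTimePerBlock) // 3600000000000 ns = 1 hour
}
```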
Is there reasoning behind this default? I'd expect `time.Second * 5` or maybe `time.Second * 10`. 1 hour doesn't seem like a sensible default.
This is the maximum time expected per block so we want it to be much larger than normal but not too much larger than normal. IDK what a sensible default here is.
This is the default, but it can be set by governance, correct?
I think a minute is a sane default, an hour is certainly safer but probably unnecessary.
> This is the default, but it can be set by governance, correct?
Correct, although I don't expect all chains to deviate from the default.
The higher the max expected block time, the smaller the block delay will be, correct? One minute sounds good to me. For comparison, with a 30-minute packet delay, a one-minute max expected block time results in a 30-block delay, while one hour results in a 1-block delay. Is a longer time safer? I think a longer time per block just guards against bad UX in the case where the average block time is greater than the expected block delay.
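For concreteness, a minimal sketch of the conversion being described here, dividing the connection's time delay by the expected time per block and rounding up; the function name and the rounding behaviour are illustrative assumptions, not the actual implementation:

```go
package main

import (
	"fmt"
	"time"
)

// blockDelay derives a block delay from a time delay by dividing by the
// expected time per block, rounding up so that any non-zero time delay
// yields at least a one-block delay.
func blockDelay(timeDelay, expectedTimePerBlock time.Duration) uint64 {
	if expectedTimePerBlock <= 0 {
		return 0
	}
	return uint64((timeDelay + expectedTimePerBlock - 1) / expectedTimePerBlock)
}

func main() {
	delay := 30 * time.Minute
	fmt.Println(blockDelay(delay, time.Minute)) // 30 blocks
	fmt.Println(blockDelay(delay, time.Hour))   // 1 block
}
```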
Hmm, now I'm a little confused. The goal is to ensure that we don't allow packets to be confirmed before enough blocks have passed for evidence to be included. If we want the block delay and the time delay to "match", we should calculate the block delay from the time delay in a way which ensures that at least `X` blocks pass with an expected time of some `t` per block, for a time delay `X * t` - so maybe what we want is minimum time per block, not maximum?
Maybe I misunderstand though, @AdityaSripal can you clarify the calculation intended here?
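To make the min-vs-max question concrete, a small illustrative comparison (the 5-second and 1-minute figures below are assumptions, not values from the PR) of how different the resulting block delay is depending on which bound is divided into the same 30-minute time delay:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	timeDelay := 30 * time.Minute

	// Dividing by a large "maximum expected" time per block gives a small,
	// conservative block delay; dividing by a small "minimum" time per block
	// gives a much larger one.
	maxPerBlock := time.Minute     // illustrative maximum expected block time
	minPerBlock := 5 * time.Second // illustrative minimum block time

	fmt.Println(uint64(timeDelay / maxPerBlock)) // 30 blocks
	fmt.Println(uint64(timeDelay / minPerBlock)) // 360 blocks
}
```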
The above reasoning makes sense to me, but it also assumes non-malicious chain halts. In the presence of a > 1/3 malicious validator set on the receiving chain which intentionally halts the chain for a set amount of time, it appears they could always engineer a solution which gets the packet processed before misbehaviour submission unless there exists mempool prioritization. Is this correct reasoning? The block delay above is only protecting against non-malicious chain halts?
We should ensure that the name of the variable and value of it match up. `MaxTimePerBlock` on the hub seems like it would be 1 minute to me. If the `MaxTimePerBlock` is really `MaxTimePerBlock` * 50, this needs to be well documented; we cannot assume chains fully understand what this parameter is doing.
> The above reasoning makes sense to me, but it also assumes non-malicious chain halts. In the presence of a > 1/3 malicious validator set on the receiving chain which intentionally halts the chain for a set amount of time, it appears they could always engineer a solution which gets the packet processed before misbehaviour submission unless there exists mempool prioritization. Is this correct reasoning? The block delay above is only protecting against non-malicious chain halts?
This is if the executing chain becomes malicious. If the chain you are on becomes malicious all bets are off. They can do whatever they want, there's no point trying to mitigate that.
> This is if the executing chain becomes malicious. If the chain you are on becomes malicious all bets are off. They can do whatever they want, there's no point trying to mitigate that.
There's an important distinction here though. 1/3 malicious on the executing chain doesn't result in "all bets are off". It certainly can cause problems via halting, but I believe 2/3 malicious is the "all bets are off" magic number. If 1/3 could perform a profitable attack by halting the chain and colluding with a counterparty, I still see this as an attack that should be documented. Halting the chain is not misbehaviour, right? So we could have 1/3 malicious halt the chain without consequences and the rest of the validators be trustworthy. Maybe after the attack folks undelegate from the 1/3 and they are no longer in the validator set, but they still performed a successful attack without any slashing.
From my understanding, ordering the transactions could still occur with a 1/3 attack. If the 1/3 have a good setup, they could halt the chain when one of their validators is proposing the block. They could wait the trusting period out and then order the transactions such that no misbehavior submission is allowed in the block that is being processed after the halt. There doesn't seem to be anything atm that we can do about this, but it does seem like a possible 1/3 attack that could be performed by a validator set colluding with a counterparty validator set (or potentially itself in the case of cross chain validation). A block delay > 1 seems like the best defense against this attack atm (and perhaps this should be recommended in our user facing documents when deciding upon a connection delay time)
Am I missing something? Obviously this sort of attack requires a very advanced setup and a solid understanding of IBC/TM, but I wouldn't say it's impossible. It's even potentially incentivized (if the channel has more funds than the amount the counterparty is slashed) since there is no penalty for halting.
1/3 malicious results in arbitrary censorship of messages.
2/3 malicious results in arbitrary inclusion of messages.
1/3+ malicious can stop any misbehaviour from being submitted to a block by simply refusing to commit to any block that includes the misbehaviour. They don't have to halt the chain. The chain can keep going indefinitely with a commit only happening for a block that doesn't include the messages they don't want. Nothing we write into the state machine (time/block delay) saves us from this. The message can be censored indefinitely.
This is true of any message; you can't even slash a malicious validator since slashing msgs can be censored.
Once you have >1/3 malicious colluding validators the chain is broken. There is no tiered security model that we can safely reason about.
As an application on top of Tendermint, we shouldn't worry about what happens once we break the Tendermint security model. It is the executing chain's responsibility to make sure that never happens.
The processed height and iteration keys are missing from the proposal handler (processed time is already set). I believe these need to be set for upgrade as well? Might be useful to make a
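For illustration only, a rough sketch of what setting those values in the proposal handler might look like; the helper names (`SetProcessedTime`, `SetProcessedHeight`, `SetIterationKey`), import paths, and signatures are assumptions based on the 07-tendermint client store helpers and may not match the repository exactly:

```go
// Sketch assumed to live in the 07-tendermint types package, where the
// SetProcessedTime / SetProcessedHeight / SetIterationKey helpers are defined.
package types

import (
	sdk "github.com/cosmos/cosmos-sdk/types"

	clienttypes "github.com/cosmos/ibc-go/modules/core/02-client/types"
	"github.com/cosmos/ibc-go/modules/core/exported"
)

// setProcessedMetadata records the metadata later needed by the connection
// delay checks: when (processed time), at what self-height (processed height),
// and under which iteration key a header at `height` was stored.
func setProcessedMetadata(ctx sdk.Context, clientStore sdk.KVStore, height exported.Height) {
	SetProcessedTime(clientStore, height, uint64(ctx.BlockTime().UnixNano())) // already set by the existing handler
	SetProcessedHeight(clientStore, height, clienttypes.GetSelfHeight(ctx))   // reportedly missing
	SetIterationKey(clientStore, height)                                      // reportedly missing
}
```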
Co-authored-by: colin axnér <[email protected]>
Codecov Report
```diff
@@            Coverage Diff             @@
##             main     #171      +/-   ##
==========================================
+ Coverage   79.91%   80.01%   +0.09%
==========================================
  Files         107      109       +2
  Lines        6409     6474      +65
==========================================
+ Hits         5122     5180      +58
- Misses        929      934       +5
- Partials      358      360       +2
```
ACK
This PR needs a changelog entry and updated migration docs (chains should be aware they need to set a value here and how to set the value). A nice-to-have is connection delay docs explaining the time delay and block delay in more detail, but this can be done in a followup.
We also should test this feature manually. We should test setting it and not setting it during upgrade testing.
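As a sketch of the migration step being asked for here (the upgrade-handler signature, keeper wiring, and the `MaxExpectedTimePerBlock` field are assumptions that vary by SDK and ibc-go version, so treat this only as an outline of the idea):

```go
package app

import (
	"time"

	sdk "github.com/cosmos/cosmos-sdk/types"
	upgradekeeper "github.com/cosmos/cosmos-sdk/x/upgrade/keeper"
	upgradetypes "github.com/cosmos/cosmos-sdk/x/upgrade/types"

	connectionkeeper "github.com/cosmos/ibc-go/modules/core/03-connection/keeper"
	connectiontypes "github.com/cosmos/ibc-go/modules/core/03-connection/types"
)

// registerBlockDelayUpgrade shows one way a chain could explicitly set the new
// parameter during an upgrade instead of silently relying on the default.
func registerBlockDelayUpgrade(uk upgradekeeper.Keeper, ck connectionkeeper.Keeper) {
	uk.SetUpgradeHandler("ibc-block-delay", func(ctx sdk.Context, _ upgradetypes.Plan) {
		params := connectiontypes.DefaultParams()
		// A chain expecting ~5s blocks might choose a tighter bound, e.g. 30s (in ns).
		params.MaxExpectedTimePerBlock = uint64(30 * time.Second)
		ck.SetParams(ctx, params)
	})
}
```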
Better cache optimizing Dockerfile
Description
closes: #100 💯
Before we can merge this PR, please make sure that all the following items have been checked off. If any of the checklist items are not applicable, please leave them but write a little note why.
- Updated relevant documentation (`docs/`) or specification (`x/<module>/spec/`)
- Added relevant `godoc` comments.
- Added a changelog entry to the `Unreleased` section in `CHANGELOG.md`
- Re-reviewed `Files changed` in the Github PR explorer
- Reviewed the `Codecov Report` in the comment section below once CI passes