[Runtime] Bound XCMP queue #2302
Conversation
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
cumulus/parachains/runtimes/assets/asset-hub-polkadot/src/lib.rs
Co-authored-by: Francisco Aguirre <[email protected]>
Implementation looks good and the intentions of this PR are clear; just a few questions about errors and tests.
cumulus/parachains/runtimes/bridge-hubs/bridge-hub-polkadot/src/lib.rs
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
The CI pipeline was cancelled due to the failure of one of the required jobs.
Signed-off-by: Oliver Tale-Yazdi <[email protected]>
```rust
// @@ -890,7 +938,14 @@ impl<T: Config> XcmpMessageSource for Pallet<T> {
let pruned = old_statuses_len - statuses.len();
// Removing an item from status implies a message being sent, so the result
// messages must be no less than the pruned channels.
statuses.rotate_left(result.len().saturating_sub(pruned));

// TODO <https://github.com/paritytech/parity-common/pull/800>
```
can be done now
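As a side note on the `rotate_left` in the hunk above: it moves the channels that were just serviced to the back of the status list, so the next round starts with the ones that were skipped. A tiny standalone illustration with made-up channel names (a sketch, not the pallet code):

```rust
fn main() {
	// Stand-in for the channel status list kept by the pallet.
	let mut statuses = vec!["chan_a", "chan_b", "chan_c", "chan_d"];
	let result_len = 3_usize; // channels that produced messages this round
	let pruned = 1_usize; // of those, one was removed from `statuses`

	// The remaining serviced channels rotate to the back, giving the
	// unserviced tail priority in the next round (round-robin fairness).
	statuses.rotate_left(result_len.saturating_sub(pruned));
	assert_eq!(statuses, ["chan_c", "chan_d", "chan_a", "chan_b"]);
}
```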
Audit completed, going to merge now.
Reverts #2302. 🤦♂️ Should have checked the migration CI first. We either need to reduce the `max_message_size` for the open HRMP channels on the failing chains or increase the `PageSize` of the XCMP queue. Both would be fine on a test-net, but I assume this will also fail before the next SP runtime upgrade, so we first need to think about what is best to do. AFAIK it is currently not possible to change the `max_message_size` of an open HRMP channel.
Remove `without_storage_info` from the XCMP queue pallet. Part of #323

Changes:
- Limit the number of channels that can be suspended at the same time.
- Limit the number of channels that can have messages or signals pending at the same time.

A No-OP migration is put in place to ensure that all `BoundedVec`s still decode and do not truncate after the upgrade. The storage version is thereby bumped to 4 so that our tooling reminds us to deploy that migration.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
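A minimal sketch of what such a No-OP, version-bump-only migration can look like (the `MigrateV3ToV4` name and its exact shape are hypothetical here; the real migration ships with `cumulus_pallet_xcmp_queue`):

```rust
use core::marker::PhantomData;
use frame_support::{
	traits::{Get, GetStorageVersion, OnRuntimeUpgrade, StorageVersion},
	weights::Weight,
};

/// Bumps the storage version from 3 to 4 without touching any data:
/// the new `BoundedVec` types share the SCALE encoding of the old
/// unbounded `Vec`s, so all existing values keep decoding as-is.
pub struct MigrateV3ToV4<T>(PhantomData<T>);

impl<T: crate::Config> OnRuntimeUpgrade for MigrateV3ToV4<T> {
	fn on_runtime_upgrade() -> Weight {
		if crate::Pallet::<T>::on_chain_storage_version() == 3 {
			StorageVersion::new(4).put::<crate::Pallet<T>>();
			return T::DbWeight::get().reads_writes(1, 1)
		}
		// Unexpected on-chain version: nothing to do.
		T::DbWeight::get().reads(1)
	}
}
```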
Re-applying #2302 after increasing the `MaxPageSize`.

Remove `without_storage_info` from the XCMP queue pallet. Part of #323

Changes:
- Limit the number of messages and signals an HRMP channel can have at most.
- Limit the number of HRMP channels.

A No-OP migration is put in place to ensure that all `BoundedVec`s still decode and do not truncate after the upgrade. The storage version is thereby bumped to 5 so that our tooling reminds us to deploy that migration.

## Integration

If you see this error in your try-runtime-cli:

```pre
Max message size for channel is too large. This means that the V5 migration can be front-run and an attacker could place a large message just right before the migration to make other messages un-decodable.
Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max: 102400, MaxPageSize: 65535
```

Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` to something like this:

```rust
type MaxPageSize = ConstU32<{ 103 * 1024 }>;
```

There is currently no easy way for on-chain governance to adjust the HRMP max message size of all channels, but it could be done: #3145.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
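For intuition, the new `MaxPageSize` only has to dominate the largest `max_message_size` of any open channel. A quick sanity check with the numbers from the error above (a standalone sketch, not pallet code):

```rust
fn main() {
	let channel_max_message_size: u32 = 102_400; // 100 KiB, from the error output
	let max_page_size: u32 = 103 * 1024; // the suggested new bound

	// Every single message must fit into one queue page, otherwise the
	// V5 migration could be front-run as described above.
	assert!(max_page_size >= channel_max_message_size);
	println!("headroom: {} bytes", max_page_size - channel_max_message_size);
}
```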