
[Runtime] Bound XCMP queue #3952

Merged · 29 commits merged into master from oty-reapply-2302 on May 16, 2024

Conversation

@ggwpez (Member) commented on Apr 2, 2024

Re-applying #2302 after increasing the `MaxPageSize`.

Remove `without_storage_info` from the XCMP queue pallet. Part of #323.

Changes:

  • Limit the number of messages and signals an HRMP channel can have at most.
  • Limit the number of HRMP channels.

A no-op migration is put in place to ensure that all `BoundedVec`s still decode and do not truncate after the upgrade. The storage version is thereby bumped to 5 so that our tooling reminds us to deploy that migration.
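For illustration, the property this migration guards can be checked in isolation: a value written as a plain `Vec<u8>` must still decode through the new bounded type without being truncated. A minimal, self-contained sketch (not the pallet's actual migration code; the `MaxPageSize` alias is only an illustrative stand-in for the pallet's `Config` bound):

```rust
use frame_support::{traits::ConstU32, WeakBoundedVec};
use parity_scale_codec::{Decode, Encode};

// Illustrative stand-in for the pallet's `T::MaxPageSize` bound.
type MaxPageSize = ConstU32<{ 103 * 1024 }>;

fn main() {
	// Pretend this page was stored pre-upgrade as an unbounded `Vec<u8>`.
	let old_page: Vec<u8> = vec![0u8; 65_000];
	let encoded = old_page.encode();

	// Post-upgrade, the same bytes are read back through the bounded type.
	let new_page = WeakBoundedVec::<u8, MaxPageSize>::decode(&mut &encoded[..])
		.expect("page within MaxPageSize decodes");
	assert_eq!(new_page.len(), old_page.len(), "nothing was truncated");
}
```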

Integration

If you see this error in your try-runtime-cli:

Max message size for channel is too large. This means that the V5 migration can be front-run and an
attacker could place a large message just right before the migration to make other messages un-decodable.
Please either increase `MaxPageSize` or decrease the `max_message_size` for this channel. Channel max:
102400, MaxPageSize: 65535

Then increase the `MaxPageSize` of the `cumulus_pallet_xcmp_queue` pallet to something like this:

type MaxPageSize = ConstU32<{ 103 * 1024 }>;
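`MaxPageSize` is an associated type on the pallet's `Config`, so the line above belongs in the runtime's `impl cumulus_pallet_xcmp_queue::Config for Runtime` block. As a quick, standalone sanity check of the numbers from the example error above (values taken from that error; nothing here is runtime code):

```rust
fn main() {
	// Values taken from the example error message above.
	let channel_max: u32 = 102_400;   // largest `max_message_size` of any channel
	let old_bound: u32 = 65_535;      // `MaxPageSize` reported as too small
	let new_bound: u32 = 103 * 1024;  // suggested override (105_472 bytes)

	assert!(old_bound < channel_max, "the old bound cannot hold a full message");
	assert!(new_bound >= channel_max, "the new bound can hold the largest message");
	println!("MaxPageSize = {new_bound} bytes >= channel max {channel_max} bytes");
}
```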

There is currently no easy way for on-chain governance to adjust the HRMP max message size of all channels, but it could be done: #3145.

ggwpez and others added 14 commits April 2, 2024 19:29
Remove `without_storage_info` from the XCMP queue pallet. Part of
#323

Changes:
- Limit the number of channels that can be suspended at the same time.
- Limit the number of channels that can have messages or signals pending
at the same time.

A no-op migration is put in place to ensure that all `BoundedVec`s still
decode and do not truncate after the upgrade. The storage version is thereby
bumped to 4 so that our tooling reminds us to deploy that migration.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
ggwpez added 3 commits April 10, 2024 17:18
ggwpez added 3 commits April 10, 2024 17:46
@ggwpez ggwpez added the T14-system_parachains This PR/Issue is related to system parachains. label Apr 10, 2024
ggwpez added 2 commits April 10, 2024 19:13
@ggwpez ggwpez marked this pull request as ready for review April 10, 2024 19:46
Resolved (outdated) review threads: cumulus/pallets/xcmp-queue/src/lib.rs (4), cumulus/pallets/xcmp-queue/src/migration/v5.rs (1)
@ggwpez ggwpez requested a review from franciscoaguirre April 22, 2024 10:09
@acatangiu acatangiu requested a review from bkontur April 22, 2024 11:48
@muharem muharem self-requested a review April 23, 2024 13:47

/// Any signal messages waiting to be sent.
#[pallet::storage]
pub(super) type SignalMessages<T: Config> =
-	StorageMap<_, Blake2_128Concat, ParaId, Vec<u8>, ValueQuery>;
+	StorageMap<_, Blake2_128Concat, ParaId, WeakBoundedVec<u8, T::MaxPageSize>, ValueQuery>;
A contributor commented:
I wonder if `MaxPageSize` affects the weights a lot. From what I can read below, there is only one signal per para id. The size could be a constant, if it really makes any difference.
I assume we could even use a `Signal` type here, but that would require a migration.

@ggwpez (Member, Author) replied on May 15, 2024:

Yeah, true. In the past it was possible for multiple signals to be enqueued, but I removed that, since later signals completely overwrite any previous ones.
Now it is indeed wasting weight here... it could probably be changed to something very small, like 16, without an issue. But that could also require a migration, so with the weak vector and the large size we are at least at no risk.
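
For context, a standalone sketch of the overwrite semantics discussed here, with a plain `BTreeMap` standing in for the `SignalMessages` storage map and an illustrative bound (identifiers are illustrative, not the pallet's actual helpers):

```rust
use std::collections::BTreeMap;

use frame_support::{traits::ConstU32, WeakBoundedVec};

// Illustrative stand-ins: the real pallet keys `SignalMessages` by `ParaId`
// and bounds the page by `T::MaxPageSize`.
type MaxPageSize = ConstU32<{ 103 * 1024 }>;
type Page = WeakBoundedVec<u8, MaxPageSize>;

fn main() {
	let para_id: u32 = 2_000;
	let mut signal_messages: BTreeMap<u32, Page> = BTreeMap::new();

	// First signal for the channel...
	signal_messages.insert(para_id, Page::force_from(vec![0x01], Some("signal")));
	// ...a later signal replaces it rather than being appended, so at most one
	// (tiny) signal page exists per para id and the large bound is never hit.
	signal_messages.insert(para_id, Page::force_from(vec![0x02], Some("signal")));

	assert_eq!(signal_messages.get(&para_id).map(|p| p.len()), Some(1));
}
```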

@ggwpez ggwpez added this pull request to the merge queue May 16, 2024
Merged via the queue into master with commit 4adfa37 May 16, 2024
147 of 150 checks passed
@ggwpez ggwpez deleted the oty-reapply-2302 branch May 16, 2024 11:07
hitchhooker pushed a commit to ibp-network/polkadot-sdk that referenced this pull request Jun 5, 2024
TarekkMA pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request Aug 2, 2024
magecnion added a commit to freeverseio/laos that referenced this pull request Sep 12, 2024
asiniscalchi added a commit to freeverseio/laos that referenced this pull request Nov 29, 2024
* uplift to stable2407

* paritytech/polkadot-sdk#4831

* paritytech/polkadot-sdk#3820

* polkadot-evm/frontier#1253

* paritytech/polkadot-sdk#3952

* paritytech/polkadot-sdk#3872

* fix rpc_builder type

* paritytech/polkadot-sdk#4410

* paritytech/polkadot-sdk#4097

* use stable rust toolchain

* paritytech/polkadot-sdk#3964

* cargo fmt

* clippy and lint issues

* e2e-test estimate gas when delegating

* only run staking e2e-test so it's easier for reviewing by external

* only run staking e2e-test so it's easier for reviewing by external

* restore run all e2e tests

* Update Rust Toolchain to 1.77, Suppress Warnings, and Enhance Runtime (#778)

* using rust 1.77

* fix clippy errors

* fixing clippy

* update to runtime 2200

* update project version to 0.22.0

* add missing migration

* removed the upgrade of versioning

* update cargo.lock

* using runtime

* fix  command

* fix compilation

* fix metadata test

* missing Config associated type

* modify MaxPageSize to the original till we need to increase it

* use latest release for zombienet tests

* go back to stable2407-3 release for zombienet tests

* go back to previous MaxPageSize

---------

Co-authored-by: Alessandro Siniscalchi <[email protected]>
Co-authored-by: luispdm <[email protected]>
Labels
T14-system_parachains This PR/Issue is related to system parachains.
Projects
Status: Audited

3 participants