
add NodeFeatures field to HostConfiguration and runtime API #2177

Merged
merged 20 commits into master from alindima/add-client-features-to-runtime on Nov 14, 2023

Conversation

alindima
Contributor

@alindima alindima commented Nov 6, 2023

Adds a NodeFeatures bitfield value to the runtime HostConfiguration, with the purpose of coordinating the enabling of node-side features, such as #628 and #598.
These are features that require all validators to enable them at the same time, assuming all/most nodes have upgraded their node versions.

This PR doesn't add any feature yet. These are coming in future PRs.

Also adds a runtime API for querying the state of the client features and an extrinsic for toggling a feature by its index in the bitfield.

Note: originally part of #1644, but posted standalone to be reused by other PRs until the initial PR is merged.
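For orientation, a minimal standalone sketch of the surface described above: a feature bitfield stored in the host configuration, a runtime-API-style read of the whole bitfield, and an extrinsic-style setter toggling one bit by index. This is an illustration, not the PR's actual FRAME code; the u64 stand-in is for brevity (the thread below settles on a BitVec):

```rust
/// Simplified stand-in for the features bitfield discussed in this PR
/// (the thread below settles on a BitVec; a u64 is used here for brevity).
type NodeFeatures = u64;

struct HostConfiguration {
    node_features: NodeFeatures,
    // ...other host configuration fields elided
}

impl HostConfiguration {
    /// Runtime-API-style read: the node receives the whole bitfield and
    /// decides what each bit means; the runtime assigns them no meaning.
    fn node_features(&self) -> NodeFeatures {
        self.node_features
    }

    /// Extrinsic-style write: toggle one feature bit by its index.
    fn set_node_feature(&mut self, index: u8, value: bool) {
        assert!(index < 64, "only 64 bits in this simplified stand-in");
        if value {
            self.node_features |= 1 << index;
        } else {
            self.node_features &= !(1 << index);
        }
    }
}

fn main() {
    let mut config = HostConfiguration { node_features: 0 };
    config.set_node_feature(3, true);
    assert_eq!((config.node_features() >> 3) & 1, 1);
}
```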

@alindima alindima requested review from alexggh and sandreim November 6, 2023 14:52
@alindima alindima added the T4-runtime_API This PR/Issue is related to runtime APIs. label Nov 6, 2023
@alindima
Contributor Author

alindima commented Nov 6, 2023

bot fmt

@alexggh
Contributor

alexggh commented Nov 6, 2023

Open question: Note that this PR does NOT add a way of enabling any feature. Since this is a bitfield, I believe that an interface that would enable opengov/sudo to supply the entire u64 as input would be error-prone and a leaky abstraction. IMO it's best to add setter extrinsics on a per-feature basis. There are differing views: #1644 (comment)

Thank you @alindima for opening this, let me describe what I was thinking about.

My idea was that this should become a generic way for us to enable different behaviours of the node (features) via a runtime API call. The way I would imagine the flow once we have such an API would be:

  1. I go into the node codebase and pick an unused flag (bit).
  2. Code the behaviour in the node based on that flag.
  3. Release the node.
  4. Once we are ready to enable the feature, someone creates a referendum that sets the flag (bit).
  5. Optional: I would leave the door open for disabling features as well, via a referendum.

I think there are benefits of having such a flow implemented end-to-end, like:

  1. No need for every person to go and add a new API for each feature flag.
  2. No need to coordinate node and runtime releases for things that don't impact each other.

Of course, there are downsides as well. The most dangerous, which I think you already spelled out, is enabling/disabling the wrong bit/flag. But I'm hoping that gets seriously mitigated by the fact that it is a referendum after all, and it should have several pairs of eyes looking at it; or maybe we can find a way to make it user friendly.

@alindima alindima added the T8-polkadot This PR/Issue is related to/affects the Polkadot network. label Nov 6, 2023
@alindima
Contributor Author

alindima commented Nov 6, 2023

My idea was that this should become a generic way for us to enable different behaviours of the node (features) via a runtime API call. The way I would imagine the flow once we have such an API would be:

Just a small clarification: it's not a runtime API call, but an extrinsic. The runtime API for querying the client features will stay the same regardless of how many of them we add.

I see your point and agree that it would be nice to have this end-to-end on the client side, without a runtime upgrade. 👍🏻

It would be great if we could somehow couple the no-runtime-upgrade path with the nice, hard-to-misuse interface. From the looks of it, we have to pick one :(
Let's also see what others think

self.client_features.get(key).copied()
}

pub(crate) fn cache_client_features(&mut self, key: Hash, features: vstaging::ClientFeatures) {
Member

I don't think we need those to be able to change on a per-block basis. We should make ClientFeatures session-buffered.

Contributor Author

Done. Please have a look.
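A minimal sketch of what the session buffering agreed on here could look like, using a plain HashMap as a stand-in for the node's actual request-result cache; the method names mirror the snippet above, the types are simplified:

```rust
use std::collections::HashMap;

type SessionIndex = u32;
type ClientFeatures = u64; // simplified stand-in for the real bitfield type

struct RequestResultCache {
    client_features: HashMap<SessionIndex, ClientFeatures>,
}

impl RequestResultCache {
    /// The features live in the HostConfiguration, which only changes on
    /// session boundaries, so one cache entry per session suffices; there
    /// is no need to key the cache by block hash.
    fn client_features(&self, session: SessionIndex) -> Option<ClientFeatures> {
        self.client_features.get(&session).copied()
    }

    fn cache_client_features(&mut self, session: SessionIndex, features: ClientFeatures) {
        self.client_features.insert(session, features);
    }
}

fn main() {
    let mut cache = RequestResultCache { client_features: HashMap::new() };
    cache.cache_client_features(42, 0b01);
    assert_eq!(cache.client_features(42), Some(0b01));
}
```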

@@ -261,6 +261,8 @@ pub struct HostConfiguration<BlockNumber> {
/// The minimum number of valid backing statements required to consider a parachain candidate
/// backable.
pub minimum_backing_votes: u32,
/// Client features enablement.
pub client_features: ClientFeatures,
Member

In fact it is part of the HostConfiguration, which is already session-buffered. So caching by SessionIndex should work already.

@bkchr
Member

bkchr commented Nov 6, 2023

I see your point and agree that it would be nice to have this end-to-end on the client side, without a runtime upgrade. 👍🏻

I'm also not entirely on board that we need this? This will then require random guessing whether enough validators have upgraded before setting this flag. Why isn't this done by bumping the networking protocol version and then letting the validators discover on their own if they can use the new feature? Like: all the nodes they would need to fetch from are supporting the new networking protocol version, so let's go. This will also make the separation cleaner.

@alexggh
Contributor

alexggh commented Nov 6, 2023

I see your point and agree that it would be nice to have this end-to-end on the client side, without a runtime upgrade. 👍🏻

I'm also not entirely on board that we need this? This will then require random guessing whether enough validators have upgraded before setting this flag. Why isn't this done by bumping the networking protocol version and then letting the validators discover on their own if they can use the new feature? Like: all the nodes they would need to fetch from are supporting the new networking protocol version, so let's go. This will also make the separation cleaner.

Wouldn't this approach work just for situations where you can translate between the newer protocol and the older one? If you have cases where you can't do that, you need to decide somehow that from now on we are going to use the new logic/format. A referendum to enable a feature flag sounded like the correct alternative to randomly guessing whether enough validators have upgraded.

I guess you are suggesting to actually do it dynamically from the node, and have the node decide to use the new logic if it sees that 90% (or any number) of the nodes have the new protocol version.

@bkchr
Member

bkchr commented Nov 6, 2023

if you have cases where you can't do that

I mean, we already had these cases and then we bumped the ParachainHost version, like for enabling async backing et al. And the node will need to support both the old and the new protocol anyway until you pull the switch, so it could maybe also run both in parallel?

@alexggh
Contributor

alexggh commented Nov 7, 2023

if you have cases where you can't do that

I mean, we already had these cases and then we bumped the ParachainHost version, like for enabling async backing et al.

Yes, we can definitely do it by bumping the runtime version. I was hoping that we could build a specific, explicit mechanism for enabling features, so that we don't need to bump the runtime version. It works, but I think it overloads the runtime upgrades with meaning: we don't need any extra logic from the runtime, we just use it to signal that nodes should enable some new behaviour.

And the node will need to support both the old and the new protocol anyway until you pull the switch, so it could maybe also run both in parallel?

At least for this case (#628) they can't run in parallel: once we enable the creation of the new messages, older nodes won't be able to deserialise and check them. Also, newer nodes can't translate the new message to the old message because of the way signatures are generated and checked.

Anyway, it was just an idea for streamlining things even more than what this PR proposes; you guys convinced me it is a bad idea :D.

@alindima
Contributor Author

alindima commented Nov 7, 2023

I mean, we already had these cases and then we bumped the ParachainHost version, like for enabling async backing et al. And the node will need to support both the old and the new protocol anyway until you pull the switch, so it could maybe also run both in parallel?

It's also the case for #598 that it can't run with the feature being disabled on some nodes and enabled on others at the same time. It has to be an atomic switch. The node-side code is able to handle both cases, but not both at the same time.

The systematic availability recovery feature is not necessarily tied to a networking protocol upgrade, and it also does not have configuration params that need to be queried from the runtime (unlike async backing).

Bumping the runtime API version like we did for async backing works, and this PR mimics just that, but in a way that can be (at least partially) reused by other features in the future.

@bkchr
Member

bkchr commented Nov 7, 2023

But do we really need a secondary versioning scheme? Or isn't the runtime api version good enough for this?

@alindima
Contributor Author

alindima commented Nov 7, 2023

But do we really need a secondary versioning scheme? Or isn't the runtime api version good enough for this?

If by versioning scheme you mean the fact that we have an additional bitfield of features stored in the runtime, it'll help with:

  • being able to disable/re-enable the feature after it went live (if a reason/bug comes up). Using only the runtime API version would require a runtime upgrade when disabling a feature. Having a value stored in the HostConfiguration would just require a referendum/sudo call.
  • having a way of using the runtime API to query the status of a feature even if the feature does not need a specific runtime API to be available in order to function. This is not the case for async backing, because it already requires some params to be stored in the runtime. But for features like systematic recovery, there is no such parameter. The alternative would be to have a runtime API like is_feature_X_enabled, which just returns a bool. Conceptually, this is what this PR enables, while also adding some genericity so that we won't have a bunch of is_feature_X_enabled runtime APIs in the future.
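As a rough illustration of the second point: a single generic bitfield query can replace a whole family of is_feature_X_enabled-style APIs. A minimal sketch, with a hypothetical feature index that only the node side assigns meaning to:

```rust
type NodeFeatures = u64; // simplified stand-in for the bitfield

/// Hypothetical node-side bit assignment; the runtime never sees this name.
const FEATURE_SYSTEMATIC_RECOVERY: u8 = 0;

/// The one generic query that replaces a family of
/// `is_feature_X_enabled`-style runtime APIs.
fn is_enabled(features: NodeFeatures, index: u8) -> bool {
    index < 64 && (features >> index) & 1 == 1
}

fn main() {
    let features: NodeFeatures = 0b01; // as returned by the runtime API
    assert!(is_enabled(features, FEATURE_SYSTEMATIC_RECOVERY));
}
```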

@alindima
Contributor Author

alindima commented Nov 8, 2023

But do we really need a secondary versioning scheme? Or isn't the runtime api version good enough for this?

If by versioning scheme you mean the fact that we have an additional bitfield of features stored in the runtime, it'll help with:

  • being able to disable/re-enable the feature after it went live (if a reason/bug comes up). Using only the runtime API version would require a runtime upgrade when disabling a feature. Having a value stored in the HostConfiguration would just require a referendum/sudo call.
  • having a way of using the runtime API to query the status of a feature even if the feature does not need a specific runtime API to be available in order to function. This is not the case for async backing, because it already requires some params to be stored in the runtime. But for features like systematic recovery, there is no such parameter. The alternative would be to have a runtime API like is_feature_X_enabled, which just returns a bool. Conceptually, this is what this PR enables, while also adding some genericity so that we won't have a bunch of is_feature_X_enabled runtime APIs in the future.

@bkchr, taking this into account, does the PR seem reasonable?

@bkchr
Member

bkchr commented Nov 8, 2023

I'm not totally sold, however I don't want to block this.

However, I would request the following changes:

  • Please name it NodeFeatures.
  • Please make NodeFeatures a BitVec. This enables us to add as many features as we like :P

@alindima
Contributor Author

alindima commented Nov 9, 2023

Please name it NodeFeatures.

will do

Please make NodeFeatures a BitVec. This enables us to add as many features as we like :P

It currently uses bitflags, which should do the same thing, with the addition of having named bits. Still, I'll switch to BitVec as you suggest, because it's more widespread in our codebase.

@bkchr
Member

bkchr commented Nov 9, 2023

It currently uses bitflags, which should do the same thing, with the addition of having named bits. Still, I'll switch to BitVec as you suggest, because it's more widespread in our codebase.

I would just make the type returned by the runtime a BitVec. Internally using a bitflag for having named bits makes sense, or we just create some statics for this. Best to check out what is easiest with the BitVec API.
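A sketch of the shape this suggests, assuming the bitvec crate: the runtime-facing type is a plain BitVec, while the node keeps named constants for the bit positions it understands (the index names and values below are hypothetical):

```rust
use bitvec::{order::Lsb0, vec::BitVec};

/// The runtime-facing type: a plain growable bitfield with no named bits.
pub type NodeFeatures = BitVec<u8, Lsb0>;

/// Node-side statics naming the bit positions (hypothetical assignments).
pub mod feature_index {
    pub const FEATURE_A: usize = 0;
    pub const FEATURE_B: usize = 1;
}

fn main() {
    let mut features: NodeFeatures = BitVec::repeat(false, 2);
    features.set(feature_index::FEATURE_B, true);
    assert!(features[feature_index::FEATURE_B]);
    assert!(!features[feature_index::FEATURE_A]);
}
```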

@alindima alindima changed the title add ClientFeatures field to HostConfiguration and runtime API add NodeFeatures field to HostConfiguration and runtime API Nov 9, 2023
Contributor

@sandreim sandreim left a comment

Looks good to me, but I think it is missing the safe interface for flipping individual bits. If we rely on the introduction of per-feature setters, then we'd do one extra runtime upgrade for what could have been just a single configuration set call.

Also, switch to using a u64 in the runtime for storing the features. If we were to use a BitVec here as well, the toggle_node_feature extrinsic could potentially allocate unbounded memory.
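A sketch of the allocation concern, assuming the bitvec crate: a toggle that resizes the vector to fit a caller-supplied index allocates memory proportional to that index, while a u64 store stays fixed-size no matter what the caller passes:

```rust
use bitvec::{order::Lsb0, vec::BitVec};

/// Unbounded variant: the caller controls how far the vector grows.
fn toggle_unbounded(features: &mut BitVec<u8, Lsb0>, index: usize, value: bool) {
    if features.len() <= index {
        features.resize(index + 1, false); // allocation grows with `index`
    }
    features.set(index, value);
}

/// Fixed-size variant: any index maps into one of 64 pre-existing bits.
fn toggle_fixed(features: &mut u64, index: u8, value: bool) {
    assert!(index < 64, "only 64 feature bits available");
    if value {
        *features |= 1 << index;
    } else {
        *features &= !(1 << index);
    }
}

fn main() {
    let mut bits: BitVec<u8, Lsb0> = BitVec::new();
    toggle_unbounded(&mut bits, 7, true); // grows the vector to 8 bits
    let mut word = 0u64;
    toggle_fixed(&mut word, 7, true);
    assert_eq!(bits[7], (word >> 7) & 1 == 1);
}
```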
command-bot added 2 commits November 10, 2023 11:31
@alindima
Contributor Author

I added a set_node_feature(index, bool) extrinsic for enabling/disabling a feature and the benchmark for it.
PR is ready now

Contributor

@sandreim sandreim left a comment

👍🏼

I think it needs one final fix on the extrinsic/API/config naming, as suggested.

@@ -261,6 +261,8 @@ pub struct HostConfiguration<BlockNumber> {
/// The minimum number of valid backing statements required to consider a parachain candidate
/// backable.
pub minimum_backing_votes: u32,
/// Node features enablement.
pub node_features: NodeFeatures,
Contributor

I think a better name is required_client_capabilities.

Contributor Author

I don't think this name is strictly better. I don't have a strong opinion but we've just decided on node_features. WDYT @bkchr @alexggh

T::WeightInfo::set_node_feature(),
DispatchClass::Operational
))]
pub fn set_node_feature(origin: OriginFor<T>, index: u8, value: bool) -> DispatchResult {
Contributor

Suggested change
pub fn set_node_feature(origin: OriginFor<T>, index: u8, value: bool) -> DispatchResult {
pub fn set_required_client_capability(origin: OriginFor<T>, index: u8, value: bool) -> DispatchResult {

.await,
)
.await;

Contributor

On session changes it would be a good idea to emit a warning to upgrade the node if not all bits make sense (i.e. a required feature we don't have).

Contributor Author

I don't think that's doable if we want to be able to add new features without a runtime upgrade.
The runtime does not have any idea of what the feature bits mean. They only make sense on the node side.

If we want to emit warnings for new features that are added, they'd have to come via a runtime upgrade (which would defeat the purpose of what I've done in this PR so far to enable seamless feature addition/enabling). I think it's best to leave it as it is.

Contributor

The runtime does not have any idea of what the feature bits mean. They only make sense on the node side.

Yes exactly, those are bits that only the client understands. This is just to inform the operator that currently a client requirement is not fulfilled and a node upgrade is required.

If we want to emit warnings for new features that are added, they'd have to come via a runtime upgrade (which would defeat the purpose of what I've done in this PR so far to enable seamless feature addition/enabling). I think it's best to leave it as it is.

No runtime upgrade is needed, as the client will see a new bit set to 1 when the feature gets enabled. Even if it doesn't know what it means, it can still print the warning saying to upgrade the node.

Contributor Author

Makes sense, I didn't fully get this at the beginning.
I can make the RuntimeApiSubsystem track the last session and, when seeing a new session, check the highest bit set in the value returned from the node_features runtime API. If that's higher than some LAST_NODE_FEATURE constant on the node side, issue a warning.

As discussed on Element, I'll add this in the next PR that uses this runtime API, as it'd be redundant right now.
Good suggestion 👍🏻
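A minimal sketch of that check, assuming for simplicity that the features fit in a u64 (the real value is a bitfield); the LAST_NODE_FEATURE name comes from the comment above, its value here is hypothetical:

```rust
type NodeFeatures = u64; // simplified; the real type is a bitfield/BitVec

/// Highest feature bit this node version knows about (hypothetical value).
const LAST_NODE_FEATURE: u8 = 1;

/// Run once per new session with the value returned by the
/// `node_features` runtime API.
fn warn_on_unknown_features(features: NodeFeatures) {
    if features == 0 {
        return;
    }
    // index of the highest set bit; safe since `features != 0`
    let highest_set = 63 - features.leading_zeros() as u8;
    if highest_set > LAST_NODE_FEATURE {
        eprintln!(
            "unknown node feature bit {highest_set} is enabled on-chain; \
             please upgrade your node"
        );
    }
}

fn main() {
    warn_on_unknown_features(0b100); // bit 2 > LAST_NODE_FEATURE: warns
}
```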

@alindima
Contributor Author

bot clean

@paritytech-cicd-pr

The CI pipeline was cancelled due to failure of one of the required jobs.
Job name: cargo-clippy
Logs: https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/4330326

@alindima alindima merged commit fc12f43 into master Nov 14, 2023
111 of 113 checks passed
@alindima alindima deleted the alindima/add-client-features-to-runtime branch November 14, 2023 18:48
ordian added a commit that referenced this pull request Nov 22, 2023
* tsv-disabling: (46 commits)
  frame-system: Add `last_runtime_upgrade_spec_version` (#2351)
  [testnet] Remove Wococo stuff from BridgeHubRococo/AssetHubRococo (#2300)
  xcm: SovereignPaidRemoteExporter: remove unused RefundSurplus instruction (#2312)
  Add `collectives-westend` and `glutton-westend` runtimes (#2024)
  Identity Deposits Relay to Parachain Migration (#1814)
  [CI] Prepare CI for Merge Queues (#2308)
  Unify `ChainSync` actions under one enum (follow-up) (#2317)
  pallet-xcm: use XcmTeleportFilter for teleported fees in reserve transfers (#2322)
  Contracts expose pallet-xcm (#1248)
  add NodeFeatures field to HostConfiguration and runtime API (#2177)
  statement-distribution: support inactive local validator in grid (#1571)
  change prepare worker to use fork instead of threads (#1685)
  chainHead/tests: Fix clippy (#2325)
  Contracts: Bump contracts rococo (#2286)
  Add simple collator election mechanism (#1340)
  chainHead: Remove `chainHead_genesis` method (#2296)
  chainHead: Support multiple hashes for `chainHead_unpin` method (#2295)
  Add environment to claim workflow (#2318)
  PVF: fix detection of unshare-and-change-root security capability (#2304)
  xcm-emulator: add Rococo<>Westend bridge and add tests for assets transfers over the bridge (#2251)
  ...
claravanstaden added a commit to Snowfork/polkadot-sdk that referenced this pull request Nov 24, 2023
* fix substrate-node-template generation (#2050)

# Description

This PR updates the node-template-release generation binary as well as
the `node-template-release.sh` file so that we can automatically push
updates to the [substrate-node-template
repository](https://github.com/substrate-developer-hub/substrate-node-template).
I assume this part was not updated after the substrate project has been
moved into the polkadot-sdk mono repo.

# Adjustments
- extend the `node-template-release.sh` to support the substrate
child-folder
- update the `SUBSTRATE_GIT_URL`
- fix the Cargo.toml filter (so that it does not include any
non-relevant .toml files)
- set the workspace-edition to 2021

# Note
In order to auto-generate the artifacts [this
line](https://github.com/paritytech/polkadot-sdk/blob/master/.gitlab/pipeline/build.yml#L320C15-L320C15)
needs to be included in the build.yml script again. Since I do not have
access to the (probably) internal gitlab environment I hope that someone
with actual access can introduce that change.
I also do not know how the auto-publish feature works so that would be
another thing to add later on.

---------

Co-authored-by: Bastian Köcher <[email protected]>

* Build workers for testing on demand (#2018)

* [FRAME] Short-circuit fungible self transfer (#2118)

Changes:
- Change the fungible(s) logic to treat a self-transfer as No-OP (as
long as all pre-checks pass).

Note that the self-transfer case will not emit an event since no state
was changed.

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

* [testnet] Add `AssetHubRococo` <-> `AssetHubWestend` asset bridging support (#1967)

## Summary

Asset bridging support for AssetHub**Rococo** <-> AssetHub**Wococo** was
added [here](https://github.com/paritytech/polkadot-sdk/pull/1215), so
now we aim to bridge AssetHub**Rococo** and AssetHub**Westend**. (And
perhaps retire AssetHubWococo and the Wococo chains).

## Solution

**bridge-hub-westend-runtime**
- added new runtime as a copy of `bridge-hub-rococo-runtime`
- added support for bridging to `BridgeHubRococo`
- added tests and benchmarks

**bridge-hub-rococo-runtime**
- added support for bridging to `BridgeHubWestend`
- added tests and benchmarks
- internal refactoring by splitting bridge configuration per network,
e.g., `bridge_to_whatevernetwork_config.rs`.

**asset-hub-rococo-runtime**
- added support for asset bridging to `AssetHubWestend` (allows to
receive only WNDs)
- added new xcm router for `Westend`
- added tests and benchmarks

**asset-hub-westend-runtime**
- added support for asset bridging to `AssetHubRococo` (allows to
receive only ROCs)
- added new xcm router for `Rococo`
- added tests and benchmarks

## Deployment

All changes will be deployed as a part of
https://github.com/paritytech/polkadot-sdk/issues/1988.

## TODO

- [x] benchmarks for all pallet instances
- [x] integration tests
- [x] local run scripts


Relates to:
https://github.com/paritytech/parity-bridges-common/issues/2602
Relates to: https://github.com/paritytech/polkadot-sdk/issues/1988

---------

Co-authored-by: command-bot <>
Co-authored-by: Adrian Catangiu <[email protected]>
Co-authored-by: joe petrowski <[email protected]>

* Fix for failed pipeline `test-doc` (#2127)

Fix for failed pipeline `test-doc`:
https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/4174859
I just wonder how could have other PR been merged after this was merged:
https://github.com/paritytech/polkadot-sdk/pull/1714/files#diff-1bde7bb2be0165cbe6db391e10a4a0b2f333348681373a86a0f8502d14d20d32R56

* Bandersnatch dependency update (#2114)

Closes https://github.com/paritytech/polkadot-sdk/issues/2013

* Added `bridge-hub-westend-runtime` to the `short-benchmarks` pipeline (#2128)

* Make `ExecResult` encodable (#1809)

# Description
We derive few useful traits on `ErrorOrigin` and `ExecError`, including
`codec::Encode` and `codec::Decode`, so that `ExecResult` is
en/decodable as well. This is required for a contract mocking feature
(already prepared in drink:
https://github.com/Cardinal-Cryptography/drink/pull/61). In more detail:
`ExecResult` must be passed from runtime extension, through runtime
interface, back to the pallet, which requires that it is serializable to
bytes in some form (or implements some rare, auxiliary traits).

**Impact on runtime size**: Since most of these traits is used directly
in the pallet now, compiler should be able to throw it out (and thus we
bring no new overhead). However, they are very useful in secondary tools
like drink or other testing libraries.

# Checklist

- [x] My PR includes a detailed description as outlined in the
"Description" section above
- [ ] My PR follows the [labeling requirements](CONTRIBUTING.md#Process)
of this project (at minimum one label for `T`
  required)
- [x] I have made corresponding changes to the documentation (if
applicable)
- [x] I have added tests that prove my fix is effective or that my
feature works (if applicable)

* impl Clone for `MemoryKeystore` (#2131)

* XCM MultiAssets: sort after reanchoring (#2129)

Fixes https://github.com/paritytech/polkadot-sdk/issues/2123

* Use `Message Queue` as DMP and XCMP dispatch queue (#1246)

(imported from https://github.com/paritytech/cumulus/pull/2157)

## Changes

This MR refactores the XCMP, Parachains System and DMP pallets to use
the [MessageQueue](https://github.com/paritytech/substrate/pull/12485)
for delayed execution of incoming messages. The DMP pallet is entirely
replaced by the MQ and thereby removed. This allows for PoV-bounded
execution and resolves a number of issues that stem from the current
work-around.

All System Parachains adopt this change.  
The most important changes are in `primitives/core/src/lib.rs`,
`parachains/common/src/process_xcm_message.rs`,
`pallets/parachain-system/src/lib.rs`, `pallets/xcmp-queue/src/lib.rs`
and the runtime configs.

### DMP Queue Pallet

The pallet got removed and its logic refactored into parachain-system.
Overweight message management can be done directly through the MQ
pallet.

Final undeployment migrations are provided by
`cumulus_pallet_dmp_queue::UndeployDmpQueue` and `DeleteDmpQueue` that
can be configured with an aux config trait like:

```rust
parameter_types! {
	pub const DmpQueuePalletName: &'static str = \"DmpQueue\" < CHANGE ME;
	pub const RelayOrigin: AggregateMessageOrigin = AggregateMessageOrigin::Parent;
}

impl cumulus_pallet_dmp_queue::MigrationConfig for Runtime {
	type PalletName = DmpQueuePalletName;
	type DmpHandler = frame_support::traits::EnqueueWithOrigin<MessageQueue, RelayOrigin>;
	type DbWeight = <Runtime as frame_system::Config>::DbWeight;
}

// And adding them to your Migrations tuple:
pub type Migrations = (
	...
	cumulus_pallet_dmp_queue::UndeployDmpQueue<Runtime>,
	cumulus_pallet_dmp_queue::DeleteDmpQueue<Runtime>,
);
```

### XCMP Queue pallet

Removed all dispatch queue functionality. Incoming XCMP messages are now
either: Immediately handled if they are Signals, enqueued into the MQ
pallet otherwise.

New config items for the XCMP queue pallet:
```rust
/// The actual queue implementation that retains the messages for later processing.
type XcmpQueue: EnqueueMessage<ParaId>;

/// How a XCM over HRMP from a sibling parachain should be processed.
type XcmpProcessor: ProcessMessage<Origin = ParaId>;

/// The maximal number of suspended XCMP channels at the same time.
#[pallet::constant]
type MaxInboundSuspended: Get<u32>;
```

How to configure those:

```rust
// Use the MessageQueue pallet to store messages for later processing. The `TransformOrigin` is needed since
// the MQ pallet itself operators on `AggregateMessageOrigin` but we want to enqueue `ParaId`s.
type XcmpQueue = TransformOrigin<MessageQueue, AggregateMessageOrigin, ParaId, ParaIdToSibling>;

// Process XCMP messages from siblings. This is type-safe to only accept `ParaId`s. They will be dispatched
// with origin `Junction::Sibling(…)`.
type XcmpProcessor = ProcessFromSibling<
	ProcessXcmMessage<
		AggregateMessageOrigin,
		xcm_executor::XcmExecutor<xcm_config::XcmConfig>,
		RuntimeCall,
	>,
>;

// Not really important what to choose here. Just something larger than the maximal number of channels.
type MaxInboundSuspended = sp_core::ConstU32<1_000>;
```

The `InboundXcmpStatus` storage item was replaced by
`InboundXcmpSuspended` since it now only tracks inbound queue suspension
and no message indices anymore.

Now only sends the most recent channel `Signals`, as all prio ones are
out-dated anyway.

### Parachain System pallet

For `DMP` messages instead of forwarding them to the `DMP` pallet, it
now pushes them to the configured `DmpQueue`. The message processing
which was triggered in `set_validation_data` is now being done by the MQ
pallet `on_initialize`.

XCMP messages are still handed off to the `XcmpMessageHandler`
(XCMP-Queue pallet) - no change here.

New config items for the parachain system pallet:
```rust
/// Queues inbound downward messages for delayed processing. 
///
/// Analogous to the `XcmpQueue` of the XCMP queue pallet.
type DmpQueue: EnqueueMessage<AggregateMessageOrigin>;
``` 

How to configure:
```rust
/// Use the MQ pallet to store DMP messages for delayed processing.
type DmpQueue = MessageQueue;
``` 

## Message Flow

The flow of messages on the parachain side. Messages come in from the
left via the `Validation Data` and finally end up at the `Xcm Executor`
on the right.

![Untitled
(1)](https://github.com/paritytech/cumulus/assets/10380170/6cf8b377-88c9-4aed-96df-baace266e04d)

## Further changes

- Bumped the default suspension, drop and resume thresholds in
`QueueConfigData::default()`.
- `XcmpQueue::{suspend_xcm_execution, resume_xcm_execution}` errors when
they would be a noop.
- Properly validate the `QueueConfigData` before setting it.
- Marked weight files as auto-generated so they wont auto-expand in the
MR files view.
- Move the `hypothetical` asserts to `frame_support` under the name
`experimental_hypothetically`

Questions:
- [ ] What about the ugly `#[cfg(feature = \"runtime-benchmarks\")]` in
the runtimes? Not sure how to best fix. Just having them like this makes
tests fail that rely on the real message processor when the feature is
enabled.
- [ ] Need a good weight for `MessageQueueServiceWeight`. The scheduler
already takes 80% so I put it to 10% but that is quite low.

TODO:
- [x] Remove c&p code after
https://github.com/paritytech/polkadot/pull/6271
- [x] Use `HandleMessage` once it is public in Substrate
- [x] fix `runtime-benchmarks` feature
https://github.com/paritytech/polkadot/pull/6966
- [x] Benchmarks
- [x] Tests
- [ ] Migrate `InboundXcmpStatus` to `InboundXcmpSuspended`
- [x] Possibly cleanup Migrations (DMP+XCMP)
- [x] optional: create `TransformProcessMessageOrigin` in Substrate and
replace `ProcessFromSibling`
- [ ] Rerun weights on ref HW

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Liam Aharon <[email protected]>
Co-authored-by: joe petrowski <[email protected]>
Co-authored-by: Kian Paimani <[email protected]>
Co-authored-by: command-bot <>

* Create new trait for non-dedup storage decode (#1932)

- This adds the new trait `StorageDecodeNonDedupLength` and implements
them for `BTreeSet` and its bounded types.
- New unit test has been added to cover the case.  
- See linked
[issue](https://github.com/paritytech/polkadot-sdk/issues/126) which
outlines the original issue.

Note that the added trait here doesn't add new logic but improves
semantics.

---------

Co-authored-by: joe petrowski <[email protected]>
Co-authored-by: Kian Paimani <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: command-bot <>

* [testnet] Allow governance to control fees for Rococo <> Westend bridge (#2139)

Right now governance could only control byte-fee component of Rococo <>
Westend message fees (paid at Asset Hubs). This PR changes it a bit:
1) governance now allowed to control both fee components - byte fee and
base fee;
2) base fee now includes cost of "default" delivery and confirmation
transactions, in addition to `ExportMessage` instruction cost.

* skip trigger for review bot on draft PRs (#2145)

Added if condition on review-bot's trigger so it does not trigger in
`draft` PRs.

* substrate: sysinfo: Expose failed hardware requirements (#2144)

The check_hardware functions does not give us too much information as to
what is failing, so let's return the list of failed metrics, so that callers can print 
it.

This would make debugging easier, rather than try to guess which
dimension is actually failing.

Signed-off-by: Alexandru Gheorghe <[email protected]>

* Update Kusama Parachains Bootnode  (#2148)

# Description

Update the bootnode of kusama parachains before decommissioning the
nodes. This will avoid connecting to non-existing bootnodes.

* Do not request blocks below the common number when syncing (#2045)

This changes `BlockCollection` logic so we don't download block ranges
from peers with which we have these ranges already in sync.

Improves situation with
https://github.com/paritytech/polkadot-sdk/issues/1915.

* Convert `SyncingEngine::run` to use `tokio::select!` instead of polling (#2132)

* Add deprecation checklist document for Substrate (#1583)

fixes https://github.com/paritytech/polkadot-sdk/issues/182

This PR adds a document with recommendations of how deprecations should
be handled. Initiated within FRAME, this checklist could be extended to
the rest of the repo.

I want to quote here a comment from @kianenigma that summarizes the
spirit of this new document:
> I would see it as a guideline of "what an extensive deprecation
process looks like". As the author of a PR, you should match this
against your "common sense" and see if it is needed or not. Someone else
can nudge you to "hey, this is an important PR, you should go through
the deprecation process".
> 
> For some trivial things, all the steps might be an overkill.

---------

Co-authored-by: Francisco Aguirre <[email protected]>

* Tracking/limiting memory allocator (#1192)

* `sc-block-builder`: Remove `BlockBuilderProvider` (#2099)

The `BlockBuilderProvider` was a trait that was defined in
`sc-block-builder`. The trait was implemented for `Client`. This
basically meant that you needed to import `sc-block-builder` any way to
have access to the block builder. So, this trait was not providing any
real value. This pull request is removing the said trait. Instead of the
trait it introduces a builder for creating a `BlockBuilder`. The builder
currently has the quite fabulous name `BlockBuilderBuilder` (I'm open to
any better name :sweat_smile:). The rest of the pull request is about
replacing the old trait with the new builder.

# Downstream code changes

If you used `new_block` or `new_block_at` before you now need to switch
it over to the new `BlockBuilderBuilder` pattern:

```rust
// `new` requires a type that implements `CallApiAt`. 
let mut block_builder = BlockBuilderBuilder::new(client)
                // Then you need to specify the hash of the parent block the block will be build on top of
		.on_parent_block(at)
                // The block builder also needs the block number of the parent block. 
                // Here it is fetched from the given `client` using the `HeaderBackend`
                // However, there also exists `with_parent_block_number` for directly passing the number
		.fetch_parent_block_number(client)
		.unwrap()
                // Enable proof recording if required. This call is optional.
		.enable_proof_recording()
                // Pass the digests. This call is optional.
                .with_inherent_digests(digests)
		.build()
		.expect("Creates new block builder");
```

---------

Co-authored-by: Sebastian Kunert <[email protected]>
Co-authored-by: command-bot <>

* Identity pallet improvements (#2048)

This PR is a follow up to #1661 

- [x] rename the `simple` module to `legacy`
- [x] fix benchmarks to disregard the number of additional fields
- [x] change the storage deposits to charge per encoded byte of the
identity information instance, removing the need for `fn
additional(&self) -> usize` in `IdentityInformationProvider`
- [x] ~add an extrinsic to rejig deposits to account for the change
above~
- [ ] ~ensure through proper configuration that the new byte-based
deposit is always lower than whatever is reserved now~
- [x] remove `IdentityFields` from the `set_fields` extrinsic signature,
as per [this
discussion](https://github.com/paritytech/polkadot-sdk/pull/1661#discussion_r1371703403)

> ensure through proper configuration that the new byte-based deposit is
always lower than whatever is reserved now

Not sure this is needed anymore. If the new deposits are higher than
what is currently on chain and users don't have enough funds to reserve
what is needed, the extrinisc fails and they're basically grandfathered
and frozen until they add more funds and/or make a change to their
identity. This behavior seems fine to me. Original idea
[here](https://github.com/paritytech/polkadot-sdk/pull/1661#issuecomment-1779606319).

> add an extrinsic to rejig deposits to account for the change above

This was initially implemented but now removed from this PR in favor of
the implementation detailed
[here](https://github.com/paritytech/polkadot-sdk/pull/2088).

---------

Signed-off-by: georgepisaltu <[email protected]>
Co-authored-by: joepetrowski <[email protected]>

* cumulus test runtime: remove `GenesisExt` (#2147)

This PR removes the `GenesisExt` wrapper over the `GenesisRuntimeConfig`
in `cumulus-test-service`. Initialization of values that were performed
by `GenesisExt::BuildStorage` was moved into `test_pallet` genesis.

---------

Co-authored-by: command-bot <>
Co-authored-by: Bastian Köcher <[email protected]>

* Speed up try runtime checks for pallet-bags-list (#2151)

closes https://github.com/paritytech/polkadot-sdk/issues/2020.

This improves running time for pallet-bags-list try runtime checks on
westend from ~90 minutes to 6 seconds on M2 pro.

* Speed up nominator state checks in staking pallet (#2153)

Should help https://github.com/paritytech/polkadot-sdk/issues/234.
Related to https://github.com/paritytech/polkadot-sdk/issues/2020 and
https://github.com/paritytech/polkadot-sdk/issues/2108.

Refactors and improves running time for try runtime checks for staking
pallet.

Tested on westend on my M2 pro: running time drops from 90 seconds to 7
seconds.

* Update bootnode lists (#2150)

# Description

Update the bootnode of kusama parachains before decommissioning the
nodes. This will avoid connecting to non-existing bootnodes.

* Tracking allocator: mark `Spinlock::unlock()` as unsafe and provide a safety contract (#2156)

* `chain-spec`: getting ready for native-runtime-free world (#1256)

This PR prepares chains specs for _native-runtime-free_  world.

This PR has following changes:
- `substrate`:
  - adds support for:
- JSON based `GenesisConfig` to `ChainSpec` allowing interaction with
runtime `GenesisBuilder` API.
- interacting with arbitrary runtime wasm blob to[
`chain-spec-builder`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/bin/utils/chain-spec-builder/src/lib.rs#L46)
command line util,
- removes
[`code`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/frame/system/src/lib.rs#L660)
from `system_pallet`
  - adds `code` to the `ChainSpec`
- deprecates
[`ChainSpec::from_genesis`](https://github.com/paritytech/substrate/blob/3ef576eaeb3f42610e85daecc464961cf1295570/client/chain-spec/src/chain_spec.rs#L263),
but also changes the signature of this method extending it with `code`
argument.
[`ChainSpec::builder()`](https://github.com/paritytech/substrate/blob/20bee680ed098be7239cf7a6b804cd4de267983e/client/chain-spec/src/chain_spec.rs#L507)
should be used instead.
- `polkadot`:
- all references to `RuntimeGenesisConfig` in `node/service` are
removed,
- all
`(kusama|polkadot|versi|rococo|wococo)_(staging|dev)_genesis_config`
functions now return the JSON patch for default runtime `GenesisConfig`,
  - `ChainSpecBuilder` is used, `ChainSpec::from_genesis` is removed,

- `cumulus`:
  - `ChainSpecBuilder` is used, `ChainSpec::from_genesis` is removed,
- _JSON_ patch configuration used instead of `RuntimeGenesisConfig
struct` in all chain specs.
  
---------

Co-authored-by: command-bot <>
Co-authored-by: Javier Viola <[email protected]>
Co-authored-by: Davide Galassi <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Kevin Krone <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* Fix update-ui-tests.sh (#2161)

Related https://github.com/paritytech/polkadot-sdk/issues/2013

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

* [CI] Update deps (#2159)

Otherwise the return code is not correctly propagated (ref
https://github.com/ggwpez/zepter/pull/48).

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>

* Get rid of `NetworkService` in `ChainSync` (#2143)

Move peer banning from `ChainSync` to `SyncingEngine`.

* `serde_json`: bumped to 1.0.108 (#2168)

This PR updates the version of `serde_json` to `1.0.108` throughout the
codebase.

* Add warning when peer_id is not available when building topology (#2140)

... see https://github.com/paritytech/polkadot-sdk/issues/2138 for why
is not good, until we fix it let's add a warning to understand if this
is happening in the wild.

---------

Signed-off-by: Alexandru Gheorghe <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* Add force remove vesting (#1982)

This PR exposes a `force_remove_vesting` through a ROOT call. 
See linked
[issue](https://github.com/paritytech/polkadot-sdk/issues/269)

---------

Co-authored-by: georgepisaltu <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Dónal Murray <[email protected]>

* rename benchmark (#2173)

A quick fix where a benchmark test was wrongly renamed in this PR
https://github.com/paritytech/polkadot-sdk/pull/1868

* approval-voting improvement: include all tranche0 assignments in one certificate  (#1178)

**_PR migrated from https://github.com/paritytech/polkadot/pull/6782_** 

This PR will upgrade the network protocol to version 3 -> VStaging which
will later be renamed to V3. This version introduces a new kind of
assignment certificate that will be used for tranche0 assignments.
Instead of issuing/importing one tranche0 assignment per candidate,
there will be just one certificate per relay chain block per validator.
However, we will not be sending out the new assignment certificates,
yet. So everything should work exactly as before. Once the majority of
the validators have been upgraded to the new protocol version we will
enable the new certificates (starting at a specific relay chain block)
with a new client update.

There are still a few things that need to be done:

- [x] Use bitfield instead of Vec<CandidateIndex>:
https://github.com/paritytech/polkadot/pull/6802
  - [x] Fix existing approval-distribution and approval-voting tests
  - [x] Fix bitfield-distribution and statement-distribution tests
  - [x] Fix network bridge tests
  - [x] Implement todos in the code
  - [x] Add tests to cover new code
  - [x] Update metrics
  - [x] Remove the approval distribution aggression levels: TBD PR
  - [x] Parachains DB migration 
  - [x] Test network protocol upgrade on Versi
  - [x] Versi Load test
  - [x] Add Zombienet test
  - [x] Documentation updates
- [x] Fix for sending DistributeAssignment for each candidate claimed by
a v2 assignment (warning: Importing locally an already known assignment)
 - [x]  Fix AcceptedDuplicate
 - [x] Fix DB migration so that we can still keep old data.
 - [x] Final Versi burn in

---------

Signed-off-by: Andrei Sandu <[email protected]>
Signed-off-by: Alexandru Gheorghe <[email protected]>
Co-authored-by: Alexandru Gheorghe <[email protected]>

* minor: overseer availability-distribution message declaration update (#2179)

availability-distribution subsystem is not sending availability-recovery
messages. Update the overseer declaration to reflect this

* TryDecodeEntireState check for storage types and pallets (#1805)

### This PR is a port of this [PR for
substrate](https://github.com/paritytech/substrate/pull/13013) by
@kianenigma

Add infrastructure needed to have a Pallet::decode_entire_state(), which
makes sure all "typed" storage items defined in the pallet are
decode-able.

This is not enforced in any way at the moment. Teams who wish to
integrate/use this in the try-runtime feature flag should add
frame_support::storage::migration::EnsureStateDecodes as the LAST ITEM
of the runtime's custom migrations, and pass it to frame-executive. This
will make it usable in try-runtime on-runtime-upgrade.

This now catches cases like
https://github.com/paritytech/polkadot-sdk/pull/1969:
```pre
ERROR runtime::executive] failed to decode the value at key: Failed to decode value at key: 0x94eadf0156a8ad5156507773d0471e4ab8ebad86f546c7e0b135a4212aace339. Storage info StorageInfo { pallet_name: Ok("ParaScheduler"), storage_name: Ok("AvailabilityCores"), prefix: Err(Utf8Error { valid_up_to: 0, error_len: Some(1) }), max_values: Some(1), max_size: None }. Raw value: Some("0x0c010101010101")
```

... or:

![image](https://github.com/paritytech/polkadot-sdk/assets/10380170/73052d4f-4da5-4b21-a8dd-b17004e5965e)

Closes #241

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Liam Aharon <[email protected]>

* Initialise on-chain `StorageVersion` for pallets added after genesis (#1297)

Original PR https://github.com/paritytech/substrate/pull/14641

---

Closes https://github.com/paritytech/polkadot-sdk/issues/109

### Problem
Quoting from the above issue:

> When adding a pallet to chain after genesis we currently don't set the
StorageVersion. So, when calling on_chain_storage_version it returns 0
while the pallet is maybe already at storage version 9 when it was added
to the chain. This could lead to issues when running migrations.

### Solution

- Create a new trait `BeforeAllRuntimeMigrations` with a single method
`fn before_all_runtime_migrations() -> Weight` trait with a noop default
implementation
- Modify `Executive` to call
`BeforeAllRuntimeMigrations::before_all_runtime_migrations` for all
pallets before running any other hooks
- Implement `BeforeAllRuntimeMigrations` in the pallet proc macro to
initialize the on-chain version to the current pallet version if the
pallet has no storage set (indicating it has been recently added to the
runtime and needs to have its version initialised).

### Other changes in this PR

- Abstracted repeated boilerplate to access the `pallet_name` in the
pallet expand proc macro.

### FAQ

#### Why create a new hook instead of adding this logic to the pallet
`pre_upgrade`?

`Executive` currently runs `COnRuntimeUpgrade` (custom migrations)
before `AllPalletsWithSystem` migrations. We need versions to be
initialized before the `COnRuntimeUpgrade` migrations are run, because
`COnRuntimeUpgrade` migrations may use the on-chain version for critical
logic. e.g. `VersionedRuntimeUpgrade` uses it to decide whether or not
to execute.

We cannot reorder `COnRuntimeUpgrade` and `AllPalletsWithSystem` so
`AllPalletsWithSystem` runs first, because `AllPalletsWithSystem` have
some logic in their `post_upgrade` hooks to verify that the on-chain
version and current pallet version match. A common use case of
`COnRuntimeUpgrade` migrations is to perform a migration which will
result in the versions matching, so if they were reordered these
`post_upgrade` checks would fail.

#### Why init the on-chain version for pallets without a current storage
version?

We must init the on-chain version for pallets even if they don't have a
defined storage version so if there is a future version bump, the
on-chain version is not automatically set to that new version without a
proper migration.

e.g. bad scenario:

1. A pallet with no 'current version' is added to the runtime
2. Later, the pallet is upgraded with the 'current version' getting set
to 1 and a migration is added to Executive Migrations to migrate the
storage from 0 to 1
    a. Runtime upgrade occurs
    b. `before_all` hook initializes the on-chain version to 1
c. `on_runtime_upgrade` of the migration executes, and sees the on-chain
version is already 1 therefore think storage is already migrated and
does not execute the storage migration
Now, on-chain version is 1 but storage is still at version 0.

By always initializing the on-chain version when the pallet is added to
the runtime we avoid that scenario.

---------

Co-authored-by: Kian Paimani <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* zombienet_tests: Fix genesis error in 0006-parachains-max-tranche0.toml (#2191)

There was a race in merging between
https://github.com/paritytech/polkadot-sdk/pull/1256 and
https://github.com/paritytech/polkadot-sdk/pull/1178, so this newly
added tests wasn't updated with the new path for the configuration, so
fix that.

Signed-off-by: Alexandru Gheorghe <[email protected]>

* mark pallet-asset-rate optional in polkadot-runtime-common (#2187)

Part of #2186

The only usage of pallet-asset-rate is guarded by `runtime-benchmarks`
feature. I don't want ORML to be forced to include this pallet in deps
for no good reason.

* docs: fix typos (#2193)

* Fix "slashaed" typo (#2205)

# Description

This merely fixes a typo in the documentation, replacing the typo
"slashaed" with "slashed". Since external entities use the comments for
explanations of events, this will then be shown externally. I noticed
this when reviewing [this
event](https://polkadot.subscan.io/extrinsic/0xb6bc1e3abde0c2ed9c500c74cfc64cdb8179e5d9af97f4bf53242ce4cdd15a1d?event=18064194-6)
on Subscan.

This is not related to any other issues or PRs.

* Disable incoming light-client connections for minimal relay node (#2202)

When running with `--relay-chain-rpc-url` we received multiple reports
of high traffic that disappears when `--in-peers-light 0` is set. Indeed
it does not make much sense for light clients to connect to the minimal
node since it is not running the block announce protocol and the
request/response protocol for light clients.

This is intended to alleviate the traffic issues for now.

closes #1896
probably related https://github.com/paritytech/cumulus/issues/2563

* XCM builder pattern (#2107)

Added a proc macro to be able to write XCMs using the builder pattern.
This means we go from having to do this:

```rust
let message: Xcm<()> = Xcm(vec![
  WithdrawAsset(assets),
  BuyExecution { fees: asset, weight_limit: Unlimited },
  DepositAsset { assets, beneficiary },
]);
```

to this:

```rust
let message: Xcm<()> = Xcm::builder()
  .withdraw_asset(assets)
  .buy_execution(asset, Unlimited),
  .deposit_asset(assets, beneficiary)
  .build();
```

---------

Co-authored-by: Keith Yeung <[email protected]>
Co-authored-by: command-bot <>

* [testnets][xcm-emulator] add bridge-hub-westend and hook it up to emulator (#2204)

`bridge-hub-westend-runtime` was added to cumulus/parachains, but wasn't
hooked up to xcm-emulator to run tests against it.

This commit addresses that ^.

Signed-off-by: Adrian Catangiu <[email protected]>

* feat(frame-support-procedural): add `automaticaly_derived` attr to `NoBound` derives (#2197)

fixes #2196

* Add `sudo::remove_key` (#2165)

Changes:
- Adds a new call `remove_key` to the sudo pallet to permanently remove
the sudo key.
- Remove some clones and general maintenance

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: command-bot <>

* Adding gitspiegel-trigger workflow (#2135)

GitHub has a setting that requires manual click for executing GHA on the
branch, for the first-time contributors:
https://docs.github.com/en/actions/managing-workflow-runs/approving-workflow-runs-from-public-forks.

After this PR, gitspiegel will respect that setting. So, for PRs from
first-time contributors, gitspiegel won't do mirroring until the button
in PR is clicked. More info:
https://github.com/paritytech/gitspiegel/issues/169

* validate-block: Fix `TrieCache` implementation (#2214)

The trie cache implementation was ignoring the `storage_root` when
setting up the value cache. The problem with this is that the value
cache works using `storage_keys` and these keys are not unique across
different tries. A block can actually have different tries (main trie
and multiple child tries). This pull request fixes the issue by not
ignoring the `storage_root` and returning an unique `value_cache` per
`storage_root`. It also adds a test for the seen bug and improves
documentation that this doesn't happen again.

* Refactor candidate validation messages (#2219)

* [xcm-emulator] Chains generic over Network & Integration tests restructure (#2092)

Closes:
- #1383 
- Declared chains can be now be imported and reused in a different
crate.
- Chain declaration are now generic over a generic type `N` (the
Network)
- #1389
- Solved #1383, chains and networks declarations can be restructure to
avoid having to compile all chains when running integrations tests where
are not needed.
- Chains are now declared on its own crate (removed from
`integration-tests-common`)
- Networks are now declared on its own crate (removed from
`integration-tests-common`)
    - Integration tests will import only the relevant Network crate
- `integration-tests-common` is renamed to
`emulated-integration-tests-common`

All this is necessary to be able to implement what is described here:
https://github.com/paritytech/roadmap/issues/56#issuecomment-1777010553

---------

Co-authored-by: command-bot <>

* `sc-chain-spec`: add support for custom host functions (#2190)

Genesis building in runtime may involve calling some custom host
functions. This PR allows to pass `HostFunctions` into the `ChainSpec`
struct, which in turn are passed to `WasmExecutor`. The `ChainSpec` now
has extended host functions type parameter:
```
pub struct ChainSpec<G, E = NoExtension, EHF = ()>
```
which will be combined with the default set
(`sp_io::SubstrateHostFunctions`) in an instance of `WasmExecutor` used
to build the genesis config.

Fix for #2188

---------

Co-authored-by: Davide Galassi <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* integrations-test: `build_genesis_storage` name fix (#2232)

Some legacy tests were mistakenly merged in #1256 for `emulated-integration-tests-common` crate.
This PR fixes the function name `build_genesis_storage` (no need to use `legacy` suffix, even though the genesis is built from `RuntimeGenesisConfig`).

* Add prospective-parachain subsystem to minimal-relay-node + QoL improvements (#2223)

This PR contains some fixes and cleanups for parachain nodes:

1. When using async backing, node no longer complains about being unable
to reach the prospective-parachain subsystem.
2. Parachain warp sync now informs users that the finalized para block
has been retrieved.
```
2023-11-08 13:24:42 [Parachain] 🎉 Received finalized parachain header #5747719 (0xa0aa…674b) from the relay chain.
```
3. When a user supplied an invalid `--relay-chain-rpc-url`, we were
crashing with a very verbose message. Removed the `expect` and improved
the error message.
```
2023-11-08 13:57:56 [Parachain] No valid RPC url found. Stopping RPC worker.
2023-11-08 13:57:56 [Parachain] Essential task `relay-chain-rpc-worker` failed. Shutting down service.
Error: Service(Application(WorkerCommunicationError("RPC worker channel closed. This can hint and connectivity issues with the supplied RPC endpoints. Message: oneshot canceled")))
```

* Make PalletInfo fields public (#2231)

PalletInfo fields were private, preventing a user from actually using
the QueryPallet instruction in a meaningful way since they couldn't read
the received data.

* BridgeHub Runtimes: Change registration order of `MessageQueue` pallet (#2230)

This PR changes the registration order of the `MessageQueue` pallet so
that it is registered last.

This is necessary so that the
[on_initialize](https://github.com/Snowfork/snowbridge/blob/df8d5da82e517a65fb0858a4f2ead533290336b5/parachain/pallets/outbound-queue/src/lib.rs#L267)
hooks for Snowbridge can run before `MessageQueue` delivers messages
using its own `on_initialize`.

Generally, I think this is preferable regardless of Snowbridge's
particular requirements. Other pallets may want to do housekeeping
before MessageQueue starts delivering messages.
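
Illustratively (pallet names and indices here are hypothetical, not the actual BridgeHub declaration), the ordering concern looks like this in `construct_runtime!`:

```rust
construct_runtime!(
    pub enum Runtime {
        System: frame_system = 0,
        // ... other pallets, including Snowbridge's outbound queue, whose
        // `on_initialize` must run before messages are delivered ...
        EthereumOutboundQueue: snowbridge_outbound_queue = 80,
        // Registered last, so its `on_initialize` (which delivers messages)
        // runs after every other pallet's hooks.
        MessageQueue: pallet_message_queue = 175,
    }
);
```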

I'm hoping this PR, if accepted, can be included in the same release as
https://github.com/paritytech/polkadot-sdk/pull/1246. As otherwise,
changing the order of pallet registration is an ABI-breaking change.

* Rococo: Build two versions of the wasm binary (#2229)

One for local networks with `fast-runtime` feature activated (1 minute
sessions) and one without the feature activated that will be the default
that runs with 1 hour long sessions.

* Add RadiumBlock Bootnodes for parachains (#2224)

# Description

We would like to add our bootnodes to the following parachains:

Westend: Westmint, Bridgehub

Kusama: Statemine, Bridgehub

Polkadot: Statemint, Bridgehub, Collectives

Thank you.

---------

Co-authored-by: Oliver Tale-Yazdi <[email protected]>

* Remove unnecessary map_error (#2239)

This was discovered during a debugging session, and it only served to
mask the underlying error, which was not great.

* Add descriptions to all published crates (#2029)

Missing descriptions (47):  

- [x] `cumulus/client/collator/Cargo.toml`
- [x] `cumulus/client/relay-chain-inprocess-interface/Cargo.toml`
- [x] `cumulus/client/cli/Cargo.toml`
- [x] `cumulus/client/service/Cargo.toml`
- [x] `cumulus/client/relay-chain-rpc-interface/Cargo.toml`
- [x] `cumulus/client/relay-chain-interface/Cargo.toml`
- [x] `cumulus/client/relay-chain-minimal-node/Cargo.toml`
- [x] `cumulus/parachains/pallets/parachain-info/Cargo.toml`
- [x] `cumulus/parachains/pallets/ping/Cargo.toml`
- [x] `cumulus/primitives/utility/Cargo.toml`
- [x] `cumulus/primitives/aura/Cargo.toml`
- [x] `cumulus/primitives/core/Cargo.toml`
- [x] `cumulus/primitives/parachain-inherent/Cargo.toml`
- [x] `cumulus/test/relay-sproof-builder/Cargo.toml`
- [x] `cumulus/pallets/xcmp-queue/Cargo.toml`
- [x] `cumulus/pallets/dmp-queue/Cargo.toml`
- [x] `cumulus/pallets/xcm/Cargo.toml`
- [x] `polkadot/erasure-coding/Cargo.toml`
- [x] `polkadot/statement-table/Cargo.toml`
- [x] `polkadot/primitives/Cargo.toml`
- [x] `polkadot/rpc/Cargo.toml`
- [x] `polkadot/node/service/Cargo.toml`
- [x] `polkadot/node/core/parachains-inherent/Cargo.toml`
- [x] `polkadot/node/core/approval-voting/Cargo.toml`
- [x] `polkadot/node/core/dispute-coordinator/Cargo.toml`
- [x] `polkadot/node/core/av-store/Cargo.toml`
- [x] `polkadot/node/core/chain-api/Cargo.toml`
- [x] `polkadot/node/core/prospective-parachains/Cargo.toml`
- [x] `polkadot/node/core/backing/Cargo.toml`
- [x] `polkadot/node/core/provisioner/Cargo.toml`
- [x] `polkadot/node/core/runtime-api/Cargo.toml`
- [x] `polkadot/node/core/bitfield-signing/Cargo.toml`
- [x] `polkadot/node/network/dispute-distribution/Cargo.toml`
- [x] `polkadot/node/network/bridge/Cargo.toml`
- [x] `polkadot/node/network/collator-protocol/Cargo.toml`
- [x] `polkadot/node/network/approval-distribution/Cargo.toml`
- [x] `polkadot/node/network/availability-distribution/Cargo.toml`
- [x] `polkadot/node/network/bitfield-distribution/Cargo.toml`
- [x] `polkadot/node/network/gossip-support/Cargo.toml`
- [x] `polkadot/node/network/availability-recovery/Cargo.toml`
- [x] `polkadot/node/collation-generation/Cargo.toml`
- [x] `polkadot/node/overseer/Cargo.toml`
- [x] `polkadot/runtime/parachains/Cargo.toml`
- [x] `polkadot/runtime/common/slot_range_helper/Cargo.toml`
- [x] `polkadot/runtime/metrics/Cargo.toml`
- [x] `polkadot/xcm/pallet-xcm-benchmarks/Cargo.toml`
- [x] `polkadot/utils/generate-bags/Cargo.toml`
- [x]  `substrate/bin/minimal/runtime/Cargo.toml`

---------

Signed-off-by: Oliver Tale-Yazdi <[email protected]>
Signed-off-by: alindima <[email protected]>
Co-authored-by: ordian <[email protected]>
Co-authored-by: Tsvetomir Dimitrov <[email protected]>
Co-authored-by: Marcin S <[email protected]>
Co-authored-by: alindima <[email protected]>
Co-authored-by: Sebastian Kunert <[email protected]>
Co-authored-by: Dmitry Markin <[email protected]>
Co-authored-by: joe petrowski <[email protected]>
Co-authored-by: Liam Aharon <[email protected]>

* Add license to tracking-allocator and add staging-prefix (#2259)

* Don't publish frame and deps (#2260)

* sc-state-db: Keep track of `LAST_PRUNED` after warp syncing (#2228)

When warp syncing we import the target block with all its state.
However, we didn't store the `LAST_PRUNED` block, which would then lead
`pruning` to forget about the imported block after a restart of the
node. We simply set `LAST_PRUNED` to the parent of the warp sync target
block to fix this issue.

* Add license to tracking-allocator and add staging-prefix (#2261)

The staging- rename commit was missing from the last PR for some reason.

* Contracts move fixtures to new crate (#2246)

Small PR that introduces a new crate that will host RISC-V & Wasm
fixtures for testing `pallet-contracts`.

* [pallet-message-queue] Implement impl_trait_for_tuples for QueuePausedQuery (#2227)

These changes are required so that the BridgeHub system runtimes can
more easily be configured with multiple message processors.

Example usage:

```rust
use frame_support::traits::QueuePausedQuery;

impl pallet_message_queue::Config for Runtime {
    type QueuePausedQuery = (A, B, C);
}
```
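
Presumably the generated tuple implementation reports the queue as paused if any element does; a simplified, self-contained sketch of that shape (not the actual macro output):

```rust
// Simplified shape of the trait plus a hand-written two-tuple impl; the PR
// generates such impls via `impl_trait_for_tuples` instead.
pub trait QueuePausedQuery<Origin> {
    fn is_paused(origin: &Origin) -> bool;
}

impl<Origin, A, B> QueuePausedQuery<Origin> for (A, B)
where
    A: QueuePausedQuery<Origin>,
    B: QueuePausedQuery<Origin>,
{
    fn is_paused(origin: &Origin) -> bool {
        // A queue is considered paused if any processor pauses it.
        A::is_paused(origin) || B::is_paused(origin)
    }
}
```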

* Improve `VersionedMigration` naming conventions (#2264)

As suggested by @ggwpez
(https://github.com/paritytech/polkadot-sdk/pull/2142#discussion_r1388145872),
remove the `VersionChecked` prefix from version checked migrations (but
leave `VersionUnchecked` prefixes)

---------

Co-authored-by: command-bot <>

* Contracts: Add XCM traits to interface with contracts (#2086)

We are introducing a new set of `XcmController` traits (final name yet
to be determined).
These traits are implemented by `pallet-xcm` and allow other pallets,
such as `pallet_contracts`, to rely on them instead of being tightly
coupled to `pallet-xcm`.

Using only the existing Xcm traits would mean duplicating the logic from
`pallet-xcm` in these other pallets, which we aim to avoid. Our
objective is to ensure that when these APIs are called from
`pallet-contracts`, they produce the exact same outcomes as if called
directly from `pallet-xcm`.

Another benefit is that we can also expose return values to
`pallet-contracts` instead of just calling a `pallet-xcm` dispatchable
and getting a `DispatchResult` back.

See the traits' integration in
https://github.com/paritytech/polkadot-sdk/pull/1248, where they are
used as follows to define and implement the `pallet-contracts` Config.
```rs
// Contracts config:
pub trait Config: frame_system::Config {
  // ...

  /// A type that exposes XCM APIs, allowing contracts to interact with other parachains, and
  /// execute XCM programs.
  type Xcm: xcm_executor::traits::Controller<
	  OriginFor<Self>,
	  <Self as frame_system::Config>::RuntimeCall,
	  BlockNumberFor<Self>,
  >;
}

// implementation
impl pallet_contracts::Config for Runtime {
        // ...

	type Xcm = pallet_xcm::Pallet<Self>;
}
```

---------

Co-authored-by: Alexander Theißen <[email protected]>
Co-authored-by: command-bot <>

* Add `s` utility function to frame support (#2275)

A utility function that I consider quite useful for declaring string
literals backed by an array.

---------

Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Davide Galassi <[email protected]>

* Unify `ChainSync` actions under one enum (#2180)

All `ChainSync` actions that `SyncingEngine` should perform are unified
under one `ChainSyncAction` enum. These actions are now processed in a
single place after the `select!` in `SyncingEngine::run`, instead of in
the multiple places where `ChainSync` methods were called.
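
An illustrative sketch of the shape (variant and type names are assumptions, not the actual definition):

```rust
// One enum for everything `ChainSync` asks `SyncingEngine` to do; the engine
// processes these in one place after its `select!` loop.
enum ChainSyncAction<PeerId, BlockRequest, Block> {
    SendBlockRequest { peer: PeerId, request: BlockRequest },
    CancelBlockRequest { peer: PeerId },
    DropPeer(PeerId),
    ImportBlocks(Vec<Block>),
}
```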

* PVF host: Make unavailable security features print a warning (#2244)

Co-authored-by: Bastian Köcher <[email protected]>

* wasm-builder: Optimize `rerun-if-changed` logic (#2282)

Optimizes the `rerun-if-changed` logic by ignoring `dev-dependencies`
and by not outputting paths, because outputting paths could lead to
including unwanted crates in the rerun checks.

* pallet-grandpa: Remove `GRANDPA_AUTHORITIES_KEY` (#2181)

Remove the `GRANDPA_AUTHORITIES_KEY` key and its usage. Apparently this
was used in the early days to communicate the grandpa authorities to the
node. However, we now have a runtime API that does this for us. So, this
pull request moves from the custom managed storage item to a FRAME
managed storage item.

This PR also includes a migration for doing the switch on a running
chain.
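
A minimal sketch of such a switch (the key literal and the decoding step are assumptions; the real migration lives in `pallet-grandpa`):

```rust
use frame_support::{traits::OnRuntimeUpgrade, weights::Weight};

// Assumed well-known key used in the early days (illustrative).
const GRANDPA_AUTHORITIES_KEY: &[u8] = b":grandpa_authorities";

pub struct MoveGrandpaAuthorities;
impl OnRuntimeUpgrade for MoveGrandpaAuthorities {
    fn on_runtime_upgrade() -> Weight {
        if let Some(raw) = sp_io::storage::get(GRANDPA_AUTHORITIES_KEY) {
            // Decode `raw` into the authority list, write it into the
            // FRAME-managed storage item (decoding elided in this sketch),
            // then remove the legacy key.
            let _ = raw;
            sp_io::storage::clear(GRANDPA_AUTHORITIES_KEY);
        }
        Weight::zero()
    }
}
```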

---------

Co-authored-by: Davide Galassi <[email protected]>

* Adds syntax for marking calls feeless (#1926)

Fixes https://github.com/paritytech/polkadot-sdk/issues/1725

This PR adds the following changes:
1. An attribute `pallet::feeless_if` that can be optionally attached to
a call like so:
```rust
#[pallet::feeless_if(|_origin: &OriginFor<T>, something: &u32| -> bool {
	*something == 0
})]
pub fn do_something(origin: OriginFor<T>, something: u32) -> DispatchResult {
     ....
}
```
The closure passed accepts references to arguments as specified in the
call fn. It returns a boolean that denotes the conditions required for
this call to be "feeless".

2. A signed extension `SkipCheckIfFeeless<T: SignedExtension>` that
wraps a transaction payment processor such as
`pallet_transaction_payment::ChargeTransactionPayment`. It checks for
all calls annotated with `pallet::feeless_if` to see if the conditions
are met. If so, the wrapped signed extension is not called, essentially
making the call feeless.

In order to use this, you can simply replace your existing signed
extension that manages transaction payment like so:
```diff
- pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
+ pallet_skip_feeless_payment::SkipCheckIfFeeless<
+	Runtime,
+	pallet_transaction_payment::ChargeTransactionPayment<Runtime>,
+ >,
```

### Todo
- [x] Tests
- [x] Docs
- [x] Prdoc

---------

Co-authored-by: Nikhil Gupta <>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Liam Aharon <[email protected]>

* Skip zombienet CI job until PolkadotJS includes `SkipCheckIfFeeless` extension (#2294)

* pallet-xcm: enhance `reserve_transfer_assets` to support remote reserves (#1672)

## Motivation

`pallet-xcm` is the main user-facing interface for XCM functionality,
including asset manipulation functions like the `teleport_assets()` and
`reserve_transfer_assets()` calls.

While `teleport_assets()` works both ways, `reserve_transfer_assets()`
works only for sending reserve-based assets to a remote destination and
beneficiary when the reserve is the _local chain_.

## Solution

This PR enhances `pallet_xcm::(limited_)reserve_transfer_assets` to
support transfers when reserves are other chains.
This allows complete, **bi-directional** reserve-based asset transfer
user stories using `pallet-xcm`.

Enables following scenarios:
- transferring assets with local reserve (was previously supported iff
asset used as fee also had local reserve - now it works in all cases),
- transferring assets with reserve on destination,
- transferring assets with reserve on remote/third-party chain (iff
assets and fees have same remote reserve),
- transferring assets with reserve different than the reserve of the
asset to be used as fees - meaning can be used to transfer random asset
with local/dest reserve while using DOT for fees on all involved chains,
even if DOT local/dest reserve doesn't match asset reserve,
- transferring assets with any type of local/dest reserve while using
fees which can be teleported between involved chains.

All of the above is done by the pallet's inner logic, without the user
having to specify the scenario/reserves/teleports/etc. The correct
scenario and corresponding XCM programs are identified and built
automatically, based on the runtime configuration of trusted teleporters
and trusted reserves.
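
A hedged usage sketch (locations, origin and account values are placeholders; the call shape follows `pallet-xcm` as I understand it at the time of writing):

```rust
use xcm::prelude::*;

// Transfer an asset to a sibling parachain; the pallet infers the
// reserve/teleport scenario from the runtime's trusted locations.
let dest: MultiLocation = (Parent, Parachain(2000)).into();
let beneficiary: MultiLocation = AccountId32 { network: None, id: [0u8; 32] }.into();
let assets: MultiAssets = (Here, 1_000_000_000u128).into();

XcmPallet::limited_reserve_transfer_assets(
    RuntimeOrigin::signed(sender),
    Box::new(dest.into()),        // versioned destination
    Box::new(beneficiary.into()), // versioned beneficiary
    Box::new(assets.into()),      // versioned assets
    0,                            // index of the asset used to pay fees
    WeightLimit::Unlimited,
)?;
```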

#### Current limitations:
- while `fees` and "non-fee" `assets` CAN have different reserves (or
fees CAN be teleported), the remaining "non-fee" `assets` CANNOT, among
themselves, have different reserve locations (this is also implicitly
enforced by `MAX_ASSETS_FOR_TRANSFER=2`, but this can be safely
increased in the future).
- `fees` and "non-fee" `assets` CANNOT have **different remote**
reserves (this could also be supported in the future, but adds even more
complexity while possibly not being worth it - we'll see what the future
holds).

Fixes https://github.com/paritytech/polkadot-sdk/issues/1584
Fixes https://github.com/paritytech/polkadot-sdk/issues/2055

---------

Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Branislav Kontur <[email protected]>

* Add CI to claim crates (#2299)

* doc(client/cli/src/arg_enums.rs): fix typo ✍️ (#2298)

* review-bot: trigger only on review approvals (#2289)

Moved the review event of review-bot to only be triggered in approvals.

Because we only update the required reviews when someone approves, this
will stop the bot from immediately requesting a new review when someone
comments or request changes as they should have been already notified in
the first batch.

* cumulus-pov-recovery: check pov_hash instead of reencoding data (#2287)

Collators were previously reencoding the available data and checking the
erasure root.
Replace that with just checking the PoV hash, which consumes much less
CPU and takes less time.

We also don't need to check the `PersistedValidationData` hash, as
collators don't use it.
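
A minimal sketch of the cheaper check (assuming the node-side `PoV` primitive, whose `hash()` is the Blake2-256 of its encoding):

```rust
use polkadot_node_primitives::PoV;
use polkadot_primitives::Hash;

// Instead of re-encoding into chunks and recomputing the erasure root,
// compare the recovered PoV's hash against the candidate descriptor's
// `pov_hash`.
fn recovered_pov_matches(pov: &PoV, expected_pov_hash: Hash) -> bool {
    pov.hash() == expected_pov_hash
}
```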

Reason:
https://github.com/paritytech/polkadot-sdk/issues/575#issuecomment-1806572230

After systematic chunks recovery is merged, collators will no longer do
any reed-solomon encoding/decoding, which has proven to be a great CPU
consumer.

Signed-off-by: alindima <[email protected]>

* Fix `ecdsa_bls` verify in BEEFY primitives (#2066)

BEEFY ECDSA signatures are on the keccak hash of the messages. As such
we cannot simply call

`EcdsaBlsPair::verify(signature.as_inner_ref(), msg,
self.as_inner_ref())`

because that invokes the default ECDSA verification, which performs a
blake2 hash that we don't want.

This brings up a second issue: it makes the `sign` and `verify`
functions in `pair_crypto` useless, at least for the BEEFY use case.
Moreover, there is no obvious clean way to generate the signature, given
that `pair_crypto` does not expose `sign_prehashed`. You could in theory
query the keystore for the pair (could you?), invoke `to_raw` and
re-generate each sub-pair and sign using each. But that sounds extremely
anticlimactic and will be frowned upon by auditors. So I appreciate any
alternative suggestion.

---------

Co-authored-by: Davide Galassi <[email protected]>
Co-authored-by: Robert Hambrock <[email protected]>

* Fix `expect_pallet` benchmarks not relying on hard-coded `frame_system` dependency version (#2288)

## Problem/Motivation
The benchmark for the `ExpectPallet` XCM instruction uses a hard-coded
version `4.0.0` for the `frame_system` pallet. Unfortunately, this
doesn't work for the `polkadot-fellows/runtimes` repository, where we
use dependencies from `crates.io`, e.g.,
[frame-system::23.0.0.0](https://github.com/polkadot-fellows/runtimes/blob/dd7f86f0d50064481ed0b7c0218494a5cfad997e/relay/kusama/Cargo.toml#L83).

Closes: https://github.com/paritytech/polkadot-sdk/issues/2284 

## Solution
This PR fixes the benchmarks that require pallet information and enables
the runtime to provide the correct/custom pallet information. The
default implementation provides `frame_system::Pallet` with index `0`,
where the version is not hard-coded but read from the runtime.


## Local testing

Added log for `T::valid_pallet` to the benchmarks like:
```
let valid_pallet = T::valid_pallet();
log::info!(
	target: "frame::benchmark::pallet",
	"valid_pallet: {}::{}::{}::{}::{}",
	valid_pallet.index,
	valid_pallet.module_name,
	valid_pallet.crate_version.major,
	valid_pallet.crate_version.minor,
	valid_pallet.crate_version.patch,
);
```

Run benchmarks for `westend`:
```
cargo run --bin=polkadot --features=runtime-benchmarks -- benchmark pallet --steps=2 --repeat=1 --extrinsic=* --heap-pages=4096 --json-file=./bench.json --chain=westend-dev --template=./polkadot/xcm/pallet-xcm-benchmarks/template.hbs --pallet=pallet_xcm_benchmarks::generic --output=./polkadot/runtime/westend/src/weights/xcm
```

---

For actual `frame_system` version:
```
[package]
name = "frame-system"
version = "4.0.0-dev"
```

Log dump:
```
2023-11-13 12:56:45 Starting benchmark: pallet_xcm_benchmarks::generic::query_pallet    
2023-11-13 12:56:45 valid_pallet: 0::frame_system::4::0::0    
2023-11-13 12:56:45 valid_pallet: 0::frame_system::4::0::0    
2023-11-13 12:56:45 valid_pallet: 0::frame_system::4::0::0    
2023-11-13 12:56:45 Starting benchmark: pallet_xcm_benchmarks::generic::expect_pallet    
2023-11-13 12:56:45 valid_pallet: 0::frame_system::4::0::0    
2023-11-13 12:56:45 valid_pallet: 0::frame_system::4::0::0    
2023-11-13 12:56:45 valid_pallet: 0::frame_system::4::0::0 
```


For changed `frame_system` version:
```
[package]
name = "frame-system"
version = "5.1.3-dev"
```

Log dump:
```
2023-11-13 12:51:51 Starting benchmark: pallet_xcm_benchmarks::generic::query_pallet    
2023-11-13 12:51:51 valid_pallet: 0::frame_system::5::1::3    
2023-11-13 12:51:51 valid_pallet: 0::frame_system::5::1::3    
2023-11-13 12:51:51 valid_pallet: 0::frame_system::5::1::3    
2023-11-13 12:51:51 Starting benchmark: pallet_xcm_benchmarks::generic::expect_pallet    
2023-11-13 12:51:51 valid_pallet: 0::frame_system::5::1::3    
2023-11-13 12:51:51 valid_pallet: 0::frame_system::5::1::3    
2023-11-13 12:51:51 valid_pallet: 0::frame_system::5::1::3
```

## References

Closes: https://github.com/paritytech/polkadot-sdk/issues/2284

* Delete undecodable Westend Asset Hub `Balances::Hold` and `Nfts::ItemMetadataOf` (#2309)

Closes https://github.com/paritytech/polkadot-sdk/issues/2241

See issue comments for more details about this storage.

* Add details in `--dev` cli flag documentation (#2305)

Add details in the `--dev` flag documentation to state that it also
disables local peer discovery.

### Context

When adding automated end-to-end tests, we replaced `--dev` with the
`--chain=dev`, `--force-authoring`, `--rpc-cors=all`, `--alice`, and
`--tmp` flags, as stated in the command-line documentation. But the
tests started failing due to the nodes connecting to each other.

### Fix

This PR includes additional command-line documentation to explain in
more detail what the `--dev` flag includes.

* xcm-emulator: add Rococo<>Westend bridge and add tests for assets transfers over the bridge (#2251)

- switch from Rococo<>Wococo to Rococo<>Westend bridge
- add bidirectional simple tests
- remove Wococo chains from xcm-emulator
- added tests for assets transfers over Rococo<>Westend bridge 

fixes https://github.com/paritytech/parity-bridges-common/issues/2405

* PVF: fix detection of unshare-and-change-root security capability (#2304)

* Add environment to claim workflow (#2318)

Turns out to access environment secrets the workflow must explicitly opt
in to the environment.

* chainHead: Support multiple hashes for `chainHead_unpin` method (#2295)

This PR adds support for multiple hashes being passed in the
`chainHead_unpin` parameters.

The `hash` parameter is renamed to `hash_or_hashes` per
https://github.com/paritytech/json-rpc-interface-spec/pull/111.

While at it, a new integration test is added to check the unpinning of
multiple hashes. The API is checked against a hash or a vector of
hashes.

cc @paritytech/subxt-team

---------

Signed-off-by: Alexandru Vasile <[email protected]>

* chainHead: Remove `chainHead_genesis` method (#2296)

The method has been removed from the spec
(https://github.com/paritytech/json-rpc-interface-spec/tree/main/src),
this PR keeps the `chainHead` in sync with that change.

@paritytech/subxt-team

---------

Signed-off-by: Alexandru Vasile <[email protected]>

* Add simple collator election mechanism (#1340)

Fixes https://github.com/paritytech/polkadot-sdk/issues/106

Port of cumulus PR https://github.com/paritytech/cumulus/pull/2960

This PR adds the ability to bid for collator slots even after the
maximum number of collators has already registered. This eliminates the
first come, first served mechanism that was in place before.

Key changes:
- added an `update_bond` extrinsic to allow registered candidates to
adjust their bonds in order to dynamically control their bids (see the
sketch after this list)
- added a `take_candidate_slot` extrinsic to try to replace an already
existing candidate by bidding more than them
- candidates are now kept in a sorted list in the pallet storage, where
the top `DesiredCandidates` out of `MaxCandidates` candidates in the
list will be selected by the session pallet as collators
- if the candidacy bond is increased through a `set_candidacy_bond`
call, candidates which don't meet the new bond requirement are kicked
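
A hedged usage sketch of the two new extrinsics (signatures as I understand them from this PR; origin and value names are placeholders):

```rust
// Raise our own bid while already registered:
CollatorSelection::update_bond(RuntimeOrigin::signed(me), new_deposit)?;

// Or, with registration full, try to displace an existing candidate by
// bidding more than their current deposit:
CollatorSelection::take_candidate_slot(
    RuntimeOrigin::signed(me),
    higher_deposit, // must exceed the target's current bond
    target,         // account of the candidate to replace
)?;
```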



---------

Signed-off-by: georgepisaltu <[email protected]>

* Contracts: Bump contracts rococo (#2286)

* chainHead/tests: Fix clippy (#2325)

Remove the genesis hash from tests:
- Clippy was passing on the PR:
https://github.com/paritytech/polkadot-sdk/pull/2296
- Clippy fails on master:
https://gitlab.parity.io/parity/mirrors/polkadot-sdk/-/jobs/4328487

This was a race with merging
https://github.com/paritytech/polkadot-sdk/pull/2295, which introduced
another test that used `CHAIN_GENESIS`.

Signed-off-by: Alexandru Vasile <[email protected]>

* change prepare worker to use fork instead of threads (#1685)

Co-authored-by: Marcin S <[email protected]>

* statement-distribution: support inactive local validator in grid (#1571)

Fixes #1437

Co-authored-by: Sophia Gold <[email protected]>

* add NodeFeatures field to HostConfiguration and runtime API (#2177)

Adds a `NodeFeatures` bitfield value to the runtime `HostConfiguration`,
with the purpose of coordinating the enabling of node-side features,
such as: https://github.com/paritytech/polkadot-sdk/issues/628 and
https://github.com/paritytech/polkadot-sdk/issues/598.
These are features that require all validators to enable them at the
same time, assuming all/most nodes have upgraded their node versions.

This PR doesn't add any feature yet. These are coming in future PRs.

Also adds a runtime API for querying the state of the client features
and an extrinsic for setting/unsetting a feature by its index in the bitfield.

Note: originally part of:
https://github.com/paritytech/polkadot-sdk/pull/1644, but posted as
standalone to be reused by other PRs until the initial PR is merged
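
A hedged sketch of the node-side check (the primitive's exact path may differ; `NodeFeatures` is a bitvec whose indices identify individual features):

```rust
use polkadot_primitives::vstaging::NodeFeatures;

// A feature is enabled iff its bit is set; an out-of-range index means the
// runtime predates the feature, so it is treated as disabled.
fn is_feature_enabled(features: &NodeFeatures, index: usize) -> bool {
    features.get(index).map(|bit| *bit).unwrap_or(false)
}
```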

* Contracts expose pallet-xcm (#1248)

This PR introduces:
- XCM host functions `xcm_send` and `xcm_execute`
- An `Xcm` trait in the config that proxies these functions to
`pallet_xcm`, or disables their usage when set to `()`.
- `mock_network` and `xcm_test` files to test the newly added
xcm-related functions.

---------

Co-authored-by: Keith Yeung <[email protected]>
Co-authored-by: Sasha Gryaznov <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Alexander Theißen <[email protected]>

* pallet-xcm: use XcmTeleportFilter for teleported fees in reserve transfers (#2322)

Disallow reserve transfers that use teleportable fees if `(origin,
fees)` does not pass `XcmTeleportFilter`.

Add regression tests for filtering based on `XcmTeleportFilter` for both
`(limited_)reserve_transfer_assets()` and `(limited_)teleport_assets`
extrinsics.

* Unify `ChainSync` actions under one enum (follow-up) (#2317)

Get rid of public `ChainSync::..._requests()` functions and return all
requests as actions.

---------

Co-authored-by: Sebastian Kunert <[email protected]>

* [CI] Prepare CI for Merge Queues (#2308)

PR prepares CI to the GitHub Merge Queues. All github actions that were
running in PR adjusted so they can run in the merge queues. Zombienet
jobs will do nothing during PRs but they will run during merge queues.

Jobs that will be skipped during PR:
 - all zombienet jobs
 - all publish docker jobs

Jobs that will be skipped during merge queue:
 - check-labels
 - check-prdoc
 - pr-custom-review
 - review trigger

cc https://github.com/paritytech/ci_cd/issues/862

* Identity Deposits Relay to Parachain Migration (#1814)

The goal of this PR is to migrate Identity deposits from the Relay Chain
to a system parachain.

The problem I want to solve is that `IdentityOf` and `SubsOf` both store
an amount that's held in reserve as a storage deposit. When migrating to
a parachain, we can take a snapshot of the actual `IdentityInfo` and
sub-account mappings, but should migrate (off chain) the `deposit`s to
zero, since the chain (and by extension, accounts) won't have any funds
at genesis.

The good news is that we expect deposits to be significantly lower
(possibly 100x lower) on the parachain. That is, a deposit of 21 DOT on
the Relay Chain would need only 0.21 DOT on a parachain. This PR
proposes to migrate the deposits in the following way:

1. Introduces a new pallet with two extrinsics (sketched after this list):
- `reap_identity`: Has a configurable `ReapOrigin`, which would be set
to `EnsureSigned` on the Relay Chain (i.e. callable by anyone) and
`EnsureRoot` on the parachain (we don't want identities reaped from
there).
- `poke_deposit`: Checks what deposit the pallet holds (at genesis,
zero) and attempts to update the amount based on the calculated deposit
for storage data.
2. `reap_identity` clears all storage data for a `target` account and
unreserves their deposit.
3. A `ReapIdentityHandler` teleports the necessary DOT to the parachain
and calls `poke_deposit`. Since the parachain deposit is much lower, and
was just unreserved, we know we have enough.
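
A hedged sketch of that flow (the pallet and origin names are placeholders; the extrinsic names come from the description above):

```rust
// Relay Chain: `ReapOrigin = EnsureSigned`, so anyone can trigger the reap,
// which clears the identity data and unreserves the deposit.
IdentityMigrator::reap_identity(RuntimeOrigin::signed(anyone), target.clone())?;

// The `ReapIdentityHandler` then teleports the needed DOT to the parachain,
// where `poke_deposit` recomputes and takes the (much smaller) deposit for
// the migrated storage data.
IdentityMigrator::poke_deposit(parachain_origin, target)?;
```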

One awkwardness I ran into was that the XCMv3 instruction set does not
provide a way for the system to teleport assets without a fee being
deducted on reception. Users shouldn't have to pay a fee for the system
to migrate their info to a more efficient location. So I wrote my own
program and did…
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 10, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 10, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 10, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 10, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 11, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 12, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 12, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 22, 2024
bkontur added a commit to bkontur/runtimes that referenced this pull request Jan 23, 2024
claravanstaden pushed a commit to Snowfork/runtimes that referenced this pull request Feb 5, 2024
github-merge-queue bot pushed a commit that referenced this pull request May 28, 2024
**Don't look at the commit history, it's confusing, as this branch is
based on another branch that was merged**

Fixes #598 
Also implements [RFC
#47](polkadot-fellows/RFCs#47)

## Description

- Availability-recovery now first attempts to request the systematic
chunks for large POVs (the first ~n/3 chunks, which can recover the full
data without the costly reed-solomon decoding process; see the sketch
after this list). If that fails for some reason, it falls back to
recovering from all chunks. Additionally, backers are used as a backup
for requesting the systematic chunks if the assigned validator is not
offering the chunk (each backer is only used for one systematic chunk,
so as not to overload them).
- Quite obviously, recovering from systematic chunks is much faster than
recovering from regular chunks (4000% faster, as measured on my Apple M2
Pro).
- Introduces a `ValidatorIndex` -> `ChunkIndex` mapping which is
different for every core, in order to avoid only querying the first n/3
validators over and over again in the same session. The mapping is the
one described in RFC 47.
- The mapping is feature-gated by the [NodeFeatures runtime
API](#2177) so that it
can only be enabled via a governance call once a sufficient majority of
validators have upgraded their client. If the feature is not enabled,
the mapping will be the identity mapping and backwards-compatibility
will be preserved.
- Adds a new chunk request protocol version (v2), which adds the
ChunkIndex to the response. This may or may not be checked against the
expected chunk index: for av-distribution and systematic recovery it
will be checked, but not for regular recovery. This is backwards
compatible: first, a v2 request is attempted; if that fails during
protocol negotiation, v1 is used.
- Systematic recovery is only attempted during approval-voting, where we
have easy access to the core_index. For disputes and collator
pov_recovery, regular chunk requests are used, just as before.
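
A small sketch of the arithmetic behind "the first ~n/3 chunks": the systematic chunk count mirrors the erasure-coding recovery threshold (f + 1 chunks suffice to reconstruct the data):

```rust
// Minimum number of chunks needed to reconstruct the full data; the first
// `systematic_chunk_count` chunks can do it without reed-solomon decoding.
fn systematic_chunk_count(n_validators: usize) -> usize {
    let faulty = n_validators.saturating_sub(1) / 3;
    faulty + 1
}

#[test]
fn threshold_example() {
    // e.g. with 300 validators, the first 100 chunks are systematic.
    assert_eq!(systematic_chunk_count(300), 100);
}
```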

## Performance results

Some results from subsystem-bench:

with regular chunk recovery: CPU usage per block 39.82s
with recovery from backers: CPU usage per block 16.03s
with systematic recovery: CPU usage per block 19.07s

End-to-end results here:
#598 (comment)

#### TODO:

- [x] [RFC #47](polkadot-fellows/RFCs#47)
- [x] merge #2177 and
rebase on top of those changes
- [x] merge #2771 and
rebase
- [x] add tests
- [x] preliminary performance measure on Versi: see
#598 (comment)
- [x] Rewrite the implementer's guide documentation
- [x] #3065 
- [x] paritytech/zombienet#1705 and fix
zombienet tests
- [x] security audit
- [x] final versi test and performance measure

---------

Signed-off-by: alindima <[email protected]>
Co-authored-by: Javier Viola <[email protected]>
hitchhooker pushed a commit to ibp-network/polkadot-sdk that referenced this pull request Jun 5, 2024
TarekkMA pushed a commit to moonbeam-foundation/polkadot-sdk that referenced this pull request Aug 2, 2024