
[FRAME] Prepare pallets for dynamic block durations #3268

Closed
ggwpez opened this issue Feb 8, 2024 · 23 comments
@ggwpez
Member

ggwpez commented Feb 8, 2024

Currently we just use System::block_number() in many pallets and derive a timestamp from it. This will not work anymore once parachains have changing block times, whether from async backing or coretime.

Possible Solution

(the names here are just placeholders)

  • Create a new system config item: BlockNumberProvider, which can then be configured to RelaychainDataProvider when it's a parachain runtime, or () for relay runtimes.
  • Add a function System::provided_block_number() -> Number
  • Add a function System::local_block_number() -> Number (for migrations and to avoid ambiguity)
  • Deprecate System::block_number()

We then need to adapt a ton of pallets and check whether their storage needs to be migrated.
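The bullet points above can be sketched in plain Rust. This is only an illustration of the shape of the proposal: `BlockNumberProvider`, `RelaychainDataProvider` and `provided_block_number` are the placeholder names from the list, and the actual storage reads are mocked out with hardcoded values:

```rust
/// Placeholder for the proposed `BlockNumberProvider` system config item.
trait BlockNumberProvider {
    fn current_block_number() -> u32;
}

/// On a relay runtime the provider is just the local block number.
struct LocalProvider;
impl BlockNumberProvider for LocalProvider {
    fn current_block_number() -> u32 {
        // In FRAME this would read `System::Number`; hardcoded for the sketch.
        42
    }
}

/// On a parachain runtime the provider reads the relay parent number
/// (mocked here; in practice this would come from `parachain-system`).
struct RelaychainDataProvider;
impl BlockNumberProvider for RelaychainDataProvider {
    fn current_block_number() -> u32 {
        84
    }
}

/// Stand-in for the proposed `System::provided_block_number()`.
fn provided_block_number<P: BlockNumberProvider>() -> u32 {
    P::current_block_number()
}

fn main() {
    println!("{}", provided_block_number::<LocalProvider>());
    println!("{}", provided_block_number::<RelaychainDataProvider>());
}
```

The point of the indirection is that pallet code calls one function and the runtime decides which clock backs it.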

@ggwpez ggwpez changed the title [FRAME] Prepare pallets for dynamic block times [FRAME] Prepare pallets for dynamic block durations Feb 8, 2024
@kianenigma
Contributor

kianenigma commented Feb 8, 2024

This is more or less the proposed solution that I extracted from the call, mostly my attempt to capture an idea that @gupnik expressed:

diff --git a/substrate/frame/scheduler/src/lib.rs b/substrate/frame/scheduler/src/lib.rs
index e94f154eee..bcb5aff728 100644
--- a/substrate/frame/scheduler/src/lib.rs
+++ b/substrate/frame/scheduler/src/lib.rs
@@ -334,7 +334,9 @@ pub mod pallet {
 		#[pallet::weight(<T as Config>::WeightInfo::schedule(T::MaxScheduledPerBlock::get()))]
 		pub fn schedule(
 			origin: OriginFor<T>,
-			when: BlockNumberFor<T>,
+			// We provide this type to all dispatchables wishing to reference the future. Later on,
+			// `fn passed` can be used to check if this time has already passed.
+			when: T::RuntimeTime,
 			maybe_periodic: Option<schedule::Period<BlockNumberFor<T>>>,
 			priority: schedule::Priority,
 			call: Box<<T as Config>::RuntimeCall>,
diff --git a/substrate/frame/system/src/lib.rs b/substrate/frame/system/src/lib.rs
index 069217bcee..ab725a7dd5 100644
--- a/substrate/frame/system/src/lib.rs
+++ b/substrate/frame/system/src/lib.rs
@@ -570,6 +570,64 @@ pub mod pallet {
 
 		/// The maximum number of consumers allowed on a single account.
 		type MaxConsumers: ConsumerLimits;
+
+		/// Something that can represent a notion of time within this runtime.
+		///
+		/// It can be the chain's own block number, a timestamp or similar, depending on whether this is being
+		/// used in a parachain or relay chain context, and with or without async backing.
+		///
+		/// Be aware that changing this type mid-flight probably has a lot of consequences.
+		type RuntimeTime: RuntimeTime;
+	}
+
+	/// An operation that can happen far in the future.
+	trait RuntimeTime:
+		// This type should be storage-friendly..
+		codec::Codec
+		// dispatchable-friendly..
+		+ Parameter
+		+ Member
+		// and compare-able ..
+		+ core::cmp::PartialOrd
+		+ core::cmp::Eq
+		// and subtract-able.
+		+ sp_runtime::traits::CheckedSub
+	{
+		/// Return the notion of time "now".
+		fn now() -> Self;
+
+		/// Just a shorthand for `self >= other`.
+		fn passed(&self, other: &Self) -> bool {
+			*self >= *other
+		}
+
+		/// Just a shorthand for `self - other`, `None` on underflow.
+		fn remaining(&self, other: &Self) -> Option<Self> {
+			self.checked_sub(other)
+		}
+	}
+
+	/// Use my own block number.
+	pub struct SelfBlockNumber<T>(BlockNumberFor<T>);
+	impl<T: Config> RuntimeTime for SelfBlockNumber<T> {
+		fn now() -> Self {
+			Self(Pallet::<T>::deprecated_dont_use_block_number())
+		}
+	}
+
+	/// TODO: should be provided by parachain-system, not here.
+	pub struct RelayBlockNumber<T>(BlockNumberFor<T>);
+	impl<T: Config> RuntimeTime for RelayBlockNumber<T> {
+		fn now() -> Self {
+			unimplemented!("read from some hardcoded key?")
+		}
+	}
+
+	pub struct Timestamp<T>(u64, core::marker::PhantomData<T>);
+	impl<T: Config> RuntimeTime for Timestamp<T> {
+		fn now() -> Self {
+			unimplemented!("call into pallet-timestamp")
+		}
 	}
 
 	#[pallet::pallet]
@@ -869,7 +927,7 @@ pub mod pallet {
 	/// The current block number being processed. Set by `execute_block`.
 	#[pallet::storage]
 	#[pallet::whitelist_storage]
-	#[pallet::getter(fn block_number)]
+	#[pallet::getter(fn deprecated_dont_use_block_number)]
 	pub(super) type Number<T: Config> = StorageValue<_, BlockNumberFor<T>, ValueQuery>;
 
 	/// Hash of the previous block.

If it works, it looks elegant and future-proof to me, but I would be open to a simpler solution as well. @ggwpez's proposed solution seems more aligned with the goal of simplicity.

@ggwpez
Member Author

ggwpez commented Feb 8, 2024

Yeah, the RuntimeTime (or maybe Instant) still makes sense to me. It's not precluded by the BlockNumberProvider proposal.

I don't know how much effort it is to refactor the pallets to use the new Instant type. Otherwise we could use BlockNumberProvider for legacy pallets to port them quickly, and then Instant for new pallets?
But if it's easy to port them, then just having Instant would be better, I think.

@xlc
Contributor

xlc commented Feb 9, 2024

I am not sure we want a central config. In fact, a well-written pallet should not be reading System::block_number in the first place; it should have its own config type to allow the runtime to specify the block number / timestamp. So this doesn't apply to the well-written pallets.

For other pallets, yeah, they need to migrate. But instead of migrating to System::provided_block_number or System::local_block_number, they should really just switch to a BlockNumberProvider in their config.

@ggwpez
Member Author

ggwpez commented Feb 9, 2024

I am not sure we want a central config. In fact, a well-written pallet should not be reading System::block_number in the first place; it should have its own config type to allow the runtime to specify the block number / timestamp. So this doesn't apply to the well-written pallets.

But this will lead to all pallets having a BlockNumberProvider config item, which is why it could be de-duplicated by just putting it into System.
It also ensures that all pallets use the same BlockNumberProvider; otherwise it could lead to issues.

@ggwpez
Member Author

ggwpez commented Feb 9, 2024

Actually this does not work, since there can be multiple para blocks per relay block...
Gav mentioned that we could use the relay timeslot. So it would basically be Kian's suggestion with the opaque type.

@xlc
Contributor

xlc commented Feb 10, 2024

But this will lead to all pallets having a BlockNumberProvider config item, which is why it could be de-duplicated by just putting it into System.

No. Only pallets that depend on a BlockNumberProvider will need to add it. Many pallets don't need it. And an explicit dependency is a good thing.

It also ensures that all pallets use the same BlockNumberProvider; otherwise it could lead to issues.

I argue exactly the opposite. For some pallets it is perfectly fine to use the local block number, some should use the relay chain's, and others should use the timestamp. It is just not possible to come up with something that works for all pallets.

@ggwpez
Member Author

ggwpez commented Feb 10, 2024

I argue exactly the opposite. For some pallets it is perfectly fine to use the local block number, some should use the relay chain's, and others should use the timestamp. It is just not possible to come up with something that works for all pallets.

Okay, makes sense. Then I assume a generic InstantProvider (or TimestampProvider) on a per-pallet basis could work?
This could then be configured to use the local block number, the relay block number, or some timestamp inherent. If you have something else in mind, then please make a concrete proposal.
I will add a point to the next Fellowship call agenda.
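A minimal sketch of what such a per-pallet `InstantProvider` could look like. The names, trait shape, and hardcoded values are illustrative only, not an actual FRAME API:

```rust
/// Hypothetical per-pallet config item: the pallet only needs a notion of
/// "now" and the ability to compare instants; it never names the clock.
trait InstantProvider {
    type Instant: Ord + Copy;
    fn now() -> Self::Instant;
}

/// Backed by the local block number (would read `System::block_number()`).
struct LocalBlockNumber;
impl InstantProvider for LocalBlockNumber {
    type Instant = u32;
    fn now() -> u32 {
        100 // hardcoded for the sketch
    }
}

/// Backed by a millisecond timestamp (would read `pallet-timestamp`).
struct TimestampMillis;
impl InstantProvider for TimestampMillis {
    type Instant = u64;
    fn now() -> u64 {
        1_700_000_000_000 // hardcoded for the sketch
    }
}

/// A pallet-like consumer: has a stored deadline passed, whatever the clock is?
fn is_expired<P: InstantProvider>(deadline: P::Instant) -> bool {
    P::now() >= deadline
}

fn main() {
    assert!(is_expired::<LocalBlockNumber>(99));
    assert!(!is_expired::<TimestampMillis>(1_800_000_000_000));
}
```

The associated `Instant` type is what lets one runtime configure a pallet with block numbers and another with timestamps, without the pallet caring.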

@xlc
Contributor

xlc commented Feb 11, 2024

Yes. I don’t see many changes needed in frame-system. We just need to migrate each pallet one by one and maybe add some helpers.

@kianenigma
Contributor

It is just not possible to come up with something that works for all pallets.

I see the point in this, yeah. I was hoping that by making this part of frame-system we could make the process of using it easier, but I don't see a better way for now either.

For such cases, there could perhaps be "opinionated" versions of frame-system that have all this stuff configured for you, so you won't need to infiltrate each pallet with new types to use common functionalities such as BlockNumber or even a basic currency.

@xlc
Contributor

xlc commented Feb 11, 2024

Most pallets don't need to access the block number / timestamp, so it should be perfectly fine to require a few more lines for those that do.

@ggwpez ggwpez mentioned this issue Feb 12, 2024
@ggwpez
Member Author

ggwpez commented Feb 12, 2024

I put a draft up here: #3298
You can treat it as a scratch pad. It's just what I came up with for now, so it probably needs improvements.

Let's settle on the basic types first before implementing it in the runtime/pallets.

fellowship-merge-bot bot pushed a commit to polkadot-fellows/runtimes that referenced this issue Jun 20, 2024
This is an excerpt from #266. It aims to enable 6-second block times for the `people` parachain only.

If I'm not missing anything, the `people` parachain is the only
parachain not affected by paritytech/polkadot-sdk#3268, and thus
6-second block times may be enabled without breaking anything.

This PR was tested locally using the `kusama-local` relay chain. The
time of the session within which the runtime upgrade was enacted
expectedly deviated, but other than that, no problems were observed.

---------

Co-authored-by: joe petrowski <[email protected]>
@xlc xlc mentioned this issue Aug 8, 2024
@gupnik
Contributor

gupnik commented Sep 10, 2024

Saw that these are the pallets that use System::block_number.

I believe that these are the ones that need to be migrated to use the relay chain block number.

Already present on AH:

Others:

while these can continue to use the parachain block number and do not need a migration:

  • babe
  • beefy
  • broker (Already supports block number provider)
  • election-provider-multi-phase
  • executive
  • grandpa
  • im-online
  • insecure-randomness-collective-flip
  • merkle-mountain-range
  • migrations
  • mixnet
  • scheduler: Not needed as per @xlc's comment below
  • revive

Please call out if something looks incorrect @ggwpez @kianenigma @xlc @shawntabrizi

@xlc
Contributor

xlc commented Sep 10, 2024

The scheduler doesn’t need a migration, as it has to use the local block number, or it would need some major refactoring.

@ggwpez
Member Author

ggwpez commented Sep 10, 2024

I believe that these are the ones that need to be migrated to use the relay chain block number:

We have to prioritize them by need for Plaza, so please check what is currently on AssetHub and then check with Jan-Jan about the pallets that we want to move from the Relay.

The scheduler doesn’t need a migration, as it has to use the local block number, or it would need some major refactoring.

Yeah, we can probably do it later, since the scheduler relies on every block number being reached eventually.

@xlc
Contributor

xlc commented Sep 17, 2024

We need to figure out a migration strategy first before committing to implementing migrations; otherwise it will just be wasted work. #5656 (comment)

@kianenigma
Contributor

I think we should go with the migration and be over with it. Deploying some trick that assumes the para's block time is a fixed function of the RC block time (para = rc / 2 + offset) is a time bomb that will never work well with elastic scaling.

Pallets that are currently on parachains, such as broker #5656 (comment), should migrate their actual data to use the relay chain block number. For example, a deployed version of multisig on a parachain should do the same.

The pallets that we intend to move from the Relay Chain to Asset Hub won't need any migration, as they already use the relay chain block number, and in the future they will continue to use it.

@xlc
Contributor

xlc commented Sep 17, 2024

I don't think you fully understand my suggestion. It will work regardless of the para's local block time.
A bit more explanation:

Right now, all the code (i.e. runtime code and UI code) assumes parachains have a ~12s block time (invariant 1).
We are planning to break that invariant and therefore need some migration.
One potential solution is to migrate to use the relay block time, which is 6s (invariant 2).
This is fine for the runtime, as it never needs to deal with historical data. It only needs to deal with the current invariant, and that's easy.
However, UIs, block explorers, analytics services etc. need to deal with historical data, i.e. before block X use invariant 1, after block X use invariant 2. That's not going to work. It is also significantly worse than not working, as it will result in corrupted data if the indexer did not update before the migration is enacted. And how can they know at which block the invariant changed? Which runtime release, for all the chains? This is just a VERY bad idea.

So my suggestion is to just ditch the original meaning of the block numbers and strictly follow invariant 1. They are no longer block numbers; they are just numbers expected to be incremented by 1 every ~12s, and that's it. All the block-number-to-time calculation logic will remain compatible. The only broken thing is that it is no longer a parachain block number (as with the previous solution), which may or may not be fine (case by case).

Maybe there are better ways, but whatever it is, please make sure it will never result in corrupted data. For example, changing the storage name would be fine by me. It is a fully breaking change, but it will never result in unexpected values in the db that could be hard to correct.

@joepetrowski
Contributor

Responding to #5656 (comment) here...

I don't think this is a good approach because it puts a lower bound of 12s on the block time. If a chain uses multiple cores or does 500ms blocks, then it's impossible to fit into this paradigm. To adapt, we would have to change the block number to something like a timestamp (which may not be entirely bad, but we still have the same storage migration issue).

However, UIs, block explorers, analytics services etc. need to deal with historical data, i.e. before block X use invariant 1, after block X use invariant 2. That's not going to work. It is also significantly worse than not working, as it will result in corrupted data if the indexer did not update before the migration is enacted. And how can they know at which block the invariant changed? Which runtime release, for all the chains?

I'm not convinced. The context here is mostly about projecting future enactment times. Yes, these services need historical data, but things like vesting schedules, the scheduler, proxy delays, etc. are all about "from this point in time, how far in the future do we expect X to happen". There are not many historical queries of this nature, because most historical queries are about the state of something at the time, not how far away something is from that point in time.

@xlc
Contributor

xlc commented Sep 18, 2024

I think a lower bound of 12s is fine? Because the alternative is using the relay chain block number, where the lower bound is 6s, and in what case is 6s required but 12s not enough?

Maybe we should talk to the actual service builders for their opinions? They are the people who are going to be impacted by this decision, so that would be the teams building the Coretime UI, the gov UI, and wallets.

@joepetrowski
Contributor

I think a lower bound of 12s is fine? Because the alternative is using the relay chain block number, where the lower bound is 6s, and in what case is 6s required but 12s not enough?

It's using 6s as a clock to allow scheduling things in the future. That's fundamentally different from what you are proposing by altering the meaning of the block number. By using the RC block time, you can say, "I want this proxied call to expire in 10 minutes (100 RC blocks)". The parachain prescribing this could theoretically have 1,000 blocks in this timeframe, and blocks 990-1000 would all fall between RC blocks 99 and 100. That is, many parachain blocks could get the same result from block_number() with the Relay Chain as the provider. But saying that the block number type can only increment by 1 every 12 seconds means that is not the case (and prevents faster block production).

Maybe we should talk to the actual service builders for their opinions? They are the people who are going to be impacted by this decision, so that would be the teams building the Coretime UI, the gov UI, and wallets.

Sure :). cc @kianenigma @gupnik @seadanda

fellowship-merge-bot bot pushed a commit to polkadot-fellows/runtimes that referenced this issue Oct 1, 2024
Encointer communities could benefit from 6s block times, because the
network is used for IRL point-of-sale or person-to-person transactions.

Encointer is unaffected by
paritytech/polkadot-sdk#3268, as its pallets
have always based time on block timestamps.

The parameters are copy-paste from people, as introduced by #308

---------

Co-authored-by: joe petrowski <[email protected]>
@muharem
Contributor

muharem commented Oct 2, 2024

I’d like to summarize one option available to us. I’ve chosen the simplest solution that appears to meet our needs. Please share your feedback with a focus on finding a way forward. We need to start implementing a solution as part of the migration to Asset Hub. Our goal can be specified as follows:

  • ensure greater determinism than the parachain's block number clock (at least as good as we currently have), making it more useful for specific use cases like referenda periods (though it doesn’t have to be applicable to every use case);
  • support the migration from the Relay Chain to Asset Hub (pallets like OpenGov, Staking, etc.).

We probably want the solution to be:

  • correct;
  • simple (for both runtime engineers and service/client engineers);
  • minimal in added complexity or entropy from new concepts and logic.

Solution:

Use BlockNumberProvider as a configurable type parameter in pallets that need it. The provider can be set to follow either the Relay Chain block numbers or the Parachain's block numbers, depending on the requirement.

The Relay Chain block number clock is already in use and proven over time. We do not add anything new, except that some pallet instances on a parachain might follow the Relay Chain block numbers instead of the local chain's.

This approach simplifies state migration for pallets moving to Asset Hub, as block numbers will not usually need to be mapped (although in some cases shifting may be required to account for the time the migration took).

Pallets that rely on block number-based logic (generally within hooks) should be adjusted to operate on conditions like greater/less than or equal, instead of only equal, since parachains may not execute a state transition for every Relay Chain block.

Are we missing any use case where this approach is not applicable?

@josepot @Tbaut @ERussel we need your feedback here
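The greater-or-equal adjustment can be illustrated with a toy example. Assuming a para that skips some relay block numbers, an equality-based hook misses its trigger while a greater-or-equal comparison still fires (real code would also remove completed tasks so they don't re-fire):

```rust
// Tasks scheduled at relay block `s` are due once `now >= s`. With a
// relay-chain clock, a parachain may not author a block for every relay
// block, so hooks must not rely on exact equality with a scheduled number.
fn due_with_eq(scheduled: &[u32], now: u32) -> Vec<u32> {
    scheduled.iter().copied().filter(|&s| s == now).collect()
}

fn due_with_geq(scheduled: &[u32], now: u32) -> Vec<u32> {
    scheduled.iter().copied().filter(|&s| now >= s).collect()
}

fn main() {
    // The para observes relay numbers 10, then 13 (11 and 12 had no para block).
    let scheduled = [12];
    assert!(due_with_eq(&scheduled, 13).is_empty()); // equality: the task is missed
    assert_eq!(due_with_geq(&scheduled, 13), vec![12]); // >=: still picked up
}
```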

@kianenigma
Contributor

Thank you for the summary @muharem! In short, I would name the action items as:

Per pallet:

  1. Add type BlockNumberProvider
  2. Explore if == needs to be >=/<= now
  3. Explore if any BlockNumber type is stored in storage. This might entail a migration

Ideas to improve this globally:

  1. Use a newtype to distinguish relay block number and para block number.
  2. Remove BlockNumber parameter from Hooks. It seems like a footgun, see here.

The list of pallets to be migrated is here. A few examples are in place to serve as inspiration.
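Item 1 of the global ideas, the newtype distinction, could look roughly like this (a sketch; these exact types do not exist in the SDK):

```rust
/// Hypothetical newtypes: wrapping the raw number makes mixing the
/// relay-chain clock and the para-chain clock a compile-time error.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct RelayBlockNumber(pub u32);

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct ParaBlockNumber(pub u32);

/// Accepts only the relay clock; passing a `ParaBlockNumber` will not compile.
fn is_in_future(when: RelayBlockNumber, now: RelayBlockNumber) -> bool {
    when > now
}

fn main() {
    let now = RelayBlockNumber(100);
    assert!(is_in_future(RelayBlockNumber(150), now));
    // is_in_future(ParaBlockNumber(150), now); // error[E0308]: mismatched types
    let para = ParaBlockNumber(150);
    assert_eq!(para.0, 150);
}
```

The derived `Ord`/`PartialOrd` keep comparisons working within each clock while the distinct types prevent cross-clock arithmetic from sneaking in.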

@kianenigma
Contributor

I would like to close this since the organization is better tracked in #6297

The content in this issue can remain useful as background knowledge.

@github-project-automation github-project-automation bot moved this from Backlog to Completed in parachains team board Oct 30, 2024
@github-project-automation github-project-automation bot moved this from Backlog to Done in Runtime / FRAME Oct 30, 2024
@github-project-automation github-project-automation bot moved this from In progress to Done in AHM Oct 30, 2024
github-merge-queue bot pushed a commit that referenced this issue Nov 11, 2024
…mber (#5656)

Based on #3331
Related to #3268

Implements migrations with customizable block number to relay height
number translation function.

Adds block to relay height migration code for rococo and westend.

---------

Co-authored-by: DavidK <[email protected]>
Co-authored-by: Kian Paimani <[email protected]>
EgorPopelyaev added a commit to EgorPopelyaev/polkadot-sdk that referenced this issue Nov 21, 2024
* Migrate pallet-transaction-storage and pallet-indices to benchmark v2 (paritytech#6290)

Part of:
paritytech#6202

---------

Co-authored-by: Giuseppe Re <[email protected]>
Co-authored-by: GitHub Action <[email protected]>

* fix prospective-parachains best backable chain reversion bug (paritytech#6417)

Kudos to @EclesioMeloJunior for noticing it 

Also added a regression test for it. The existing unit test was
exercising only the case where the full chain is reverted

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* Remove network starter that is no longer needed (paritytech#6400)

# Description

This seems to be an old artifact of the long closed
paritytech/substrate#6827 that I noticed when
working on related code earlier.

## Integration

`NetworkStarter` was removed, simply remove its usage:
```diff
-let (network, system_rpc_tx, tx_handler_controller, start_network, sync_service) =
+let (network, system_rpc_tx, tx_handler_controller, sync_service) =
    build_network(BuildNetworkParams {
...
-start_network.start_network();
```

## Review Notes

Changes are trivial, the only reason for this to not be accepted is if
it is desired to not start network automatically for whatever reason, in
which case the description of network starter needs to change.

# Checklist

* [x] My PR includes a detailed description as outlined in the
"Description" and its two subsections above.
* [ ] My PR follows the [labeling requirements](

https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
) of this project (at minimum one label for `T` required)
* External contributors: ask maintainers to put the right label on your
PR.

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* `fatxpool`: size limits implemented (paritytech#6262)

This PR adds size-limits to the fork-aware transaction pool.

**Review Notes**
- Existing
[`TrackedMap`](https://github.com/paritytech/polkadot-sdk/blob/58fd5ae4ce883f42c360e3ad4a5df7d2258b42fe/substrate/client/transaction-pool/src/graph/tracked_map.rs#L33-L41)
is used in internal mempool to track the size of extrinsics:

https://github.com/paritytech/polkadot-sdk/blob/58fd5ae4ce883f42c360e3ad4a5df7d2258b42fe/substrate/client/transaction-pool/src/graph/tracked_map.rs#L33-L41

- In this PR, I also removed the logic that kept transactions in the
`tx_mem_pool` if they were immediately dropped by the views. Initially,
I implemented this as an improvement: if there was available space in
the _mempool_ and all views dropped the transaction upon submission, the
transaction would still be retained in the _mempool_.

However, upon further consideration, I decided to remove this
functionality to reduce unnecessary complexity. Now, when all views drop
a transaction during submission, it is automatically rejected, with the
`submit/submit_and_watch` call returning `ImmediatelyDropped`.


Closes: paritytech#5476

---------

Co-authored-by: Sebastian Kunert <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* pallet-membership: Do not verify the `MembershipChanged` in bechmarks (paritytech#6439)

There is no need to verify in the `pallet-membership` benchmark that the
`MemembershipChanged` implementation works as the pallet thinks it
should work. If you for example set it to `()`, `get_prime()` will
always return `None`.

TLDR: Remove the checks of `MembershipChanged` in the benchmarks to
support any kind of implementation.

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>

* add FeeManager to pallet xcm (paritytech#5363)

Closes paritytech#2082

change send xcm to use `xcm::executor::FeeManager` to determine if the
sender should be charged.

I had to change the `FeeManager` of the penpal config to ensure the same
test behaviour as before. For the other tests, I'm using the
`FeeManager` from the `xcm::executor::FeeManager` as this one is used to
check if the fee can be waived on the charge fees method.

---------

Co-authored-by: Adrian Catangiu <[email protected]>
Co-authored-by: GitHub Action <[email protected]>

* Use relay chain block number in the broker pallet instead of block number (paritytech#5656)

Based on paritytech#3331
Related to paritytech#3268

Implements migrations with customizable block number to relay height
number translation function.

Adds block to relay height migration code for rococo and westend.

---------

Co-authored-by: DavidK <[email protected]>
Co-authored-by: Kian Paimani <[email protected]>

* migrate pallet-nft-fractionalization to benchmarking v2 syntax (paritytech#6301)

Migrates pallet-nft-fractionalization to benchmarking v2 syntax.

Part of:
* paritytech#6202

---------

Co-authored-by: Giuseppe Re <[email protected]>
Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>

* [pallet-revive] adjust fee dry-run calculation (paritytech#6393)

- Fix bare_eth_transact so that it estimate more precisely the
transaction fee
- Add some context to the build.rs to make it easier to troubleshoot
errors
- Add TransactionBuilder for the RPC tests.
- Improve error message, proxy rpc error from the node and handle
reverted error message
- Add logs in ReceiptInfo

---------

Co-authored-by: GitHub Action <[email protected]>

* NoOp Impl Polling Trait (paritytech#5311)

Adds NoOp implementation for the `Polling` trait and updates benchmarks
in `pallet-ranked-collective`.

---------

Co-authored-by: Oliver Tale-Yazdi <[email protected]>

* Migrate pallet-child-bounties benchmark to v2 (paritytech#6310)

Part of:

- paritytech#6202.

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Giuseppe Re <[email protected]>

* Introduce `ConstUint` to make dependent types in `DefaultConfig` more adaptable (paritytech#6425)

# Description

Resolves paritytech#6193

This PR introduces `ConstUint` as a replacement for existing constant
getter types like `ConstU8`, `ConstU16`, etc., providing a more flexible
and unified approach.

## Integration

This update is backward compatible, so developers can choose to adopt
`ConstUint` in new implementations or continue using the existing types
as needed.

## Review Notes

`ConstUint` is a convenient alternative to `ConstU8`, `ConstU16`, and
similar types, particularly useful for configuring `DefaultConfig` in
pallets. It enables configuring the underlying integer for a specific
type without the need to update all dependent types, offering enhanced
flexibility in type management.

# Checklist

* [x] My PR includes a detailed description as outlined in the
"Description" and its two subsections above.
* [ ] My PR follows the [labeling requirements](

https://github.com/paritytech/polkadot-sdk/blob/master/docs/contributor/CONTRIBUTING.md#Process
) of this project (at minimum one label for `T` required)
* External contributors: ask maintainers to put the right label on your
PR.
* [ ] I have made corresponding changes to the documentation (if
applicable)
* [ ] I have added tests that prove my fix is effective or that my
feature works (if applicable)

* Use type alias for transactions (paritytech#6431)

Very tiny change that helps with debugging of transactions propagation
by referring to the same type alias not only at receiving side, but also
on the sending size for symmetry

* [Release|CI/CD] Fix audiences changelog template (paritytech#6444)

This PR addresses an issue mentioned
[here](paritytech#6424 (comment)).
The problem was that when the prdoc file has two audiences, but only one
description like in
[prdoc_5660](https://github.com/paritytech/polkadot-sdk/blob/master/prdoc/1.16.0/pr_5660.prdoc)
it was ignored by the template.

* XCMv5: add ExecuteWithOrigin instruction (paritytech#6304)

Added `ExecuteWithOrigin` instruction according to the old XCM RFC 38:
polkadot-fellows/xcm-format#38.

This instruction allows you to descend or clear while going back again.

## TODO
- [x] Implementation
- [x] Unit tests
- [x] Integration tests
- [x] Benchmarks
- [x] PRDoc

## Future work

Modify `WithComputedOrigin` barrier to allow, for example, fees to be
paid with a descendant origin using this instruction.

---------

Signed-off-by: Adrian Catangiu <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>
Co-authored-by: Andrii <[email protected]>
Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: Joseph Zhao <[email protected]>
Co-authored-by: Nazar Mokrynskyi <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Shawn Tabrizi <[email protected]>
Co-authored-by: command-bot <>

* rpc server: fix host filter for localhost on ipv6 (paritytech#6454)

This PR fixes an issue that I discovered using connecting to the RPC via
localhost using cURL, where cURL tries to connect to via ipv6 before
ipv4 when querying `localhost` which messed up the http host filter
whereas it would connect to the address `[::1]::9944 host_header:
localhost:9944` but the ipv6 interface only whitelisted `[::1]:9944`
which this fixes.

So let's whitelist all localhost interfaces to avoid such weird
edge-cases.

### Behavior before this PR

```bash
$ polkadot --chain westend-dev &
$ curl -v \
     -H 'Content-Type: application/json' \
     -d '{"jsonrpc":"2.0","id":"id","method":"system_name"}' \
     http://localhost:9944
* Host localhost:9944 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:9944...
* Connected to localhost (::1) port 9944
> POST / HTTP/1.1
> Host: localhost:9944
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 50
>
< HTTP/1.1 403 Forbidden
< content-type: text/plain
< content-length: 41
< date: Tue, 12 Nov 2024 13:03:49 GMT
<
Provided Host header is not whitelisted.
* Connection #0 to host localhost left intact
```

### Behavior after this PR
```bash
$ polkadot --chain westend-dev &
➜ wasm-tests (update-artifacts-1731284930) ✗ curl -v \
     -H 'Content-Type: application/json' \
     -d '{"jsonrpc":"2.0","id":"id","method":"system_name"}' \
     http://localhost:9944
* Host localhost:9944 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:9944...
* Connected to localhost (::1) port 9944
> POST / HTTP/1.1
> Host: localhost:9944
> User-Agent: curl/8.5.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 50
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=utf-8
< vary: origin, access-control-request-method, access-control-request-headers
< content-length: 54
< date: Tue, 12 Nov 2024 13:02:57 GMT
<
* Connection #0 to host localhost left intact
{"jsonrpc":"2.0","id":"id","result":"Parity Polkadot"}%
```

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: command-bot <>

* [pallet-revive] eth-rpc fixes (paritytech#6453)

- Breaking down the integration-test into multiple tests
- Fix tx hash to use expected keccak-256
- Add option to ethers.js example to connect to westend and use a
private key

---------

Co-authored-by: GitHub Action <[email protected]>

* Remove debug message about pruning active leaves (paritytech#6440)

# Description

The debug message was added to identify a potential memory leak.
However, recent observations show that pruning works as expected.
Therefore, it is best to remove this line, as it generates quite
annoying logs.


## Integration

Doesn't affect downstream projects.

---------

Co-authored-by: GitHub Action <[email protected]>

* [Tx ext stage 2: 1/4] Add `TransactionSource` as argument in `TransactionExtension::validate` (paritytech#6323)

## Meta 

This PR is part of 4 PR:
* paritytech#6323
* paritytech#6324
* paritytech#6325
* paritytech#6326

## Description

One goal of transaction extension is to get rid or unsigned
transactions.
But unsigned transaction validation has access to the
`TransactionSource`.

The source is used for unsigned transactions that the node trusts and
doesn't want to charge upfront.
Instead of using the transaction source we could say: the transaction is
valid if it is signed by the block author. Conceptually that should
work, but it doesn't look so easy.

This PR adds `TransactionSource` to the validate function for
transaction extensions.
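As a toy sketch of the shape of the change (simplified stand-in types; the real `TransactionExtension::validate` carries many more arguments), the source is now threaded through `validate`:

```rust
// Simplified stand-ins for the sp-runtime types; illustrative only.
#[derive(Clone, Copy)]
enum TransactionSource {
    InBlock,
    Local,
    External,
}

trait TransactionExtension {
    // After this PR, `validate` receives the transaction source, so an
    // extension can e.g. waive upfront payment only for transactions
    // the node itself produced.
    fn validate(&self, source: TransactionSource) -> bool;
}

// A hypothetical extension that only accepts trusted sources.
struct TrustOnlyLocal;

impl TransactionExtension for TrustOnlyLocal {
    fn validate(&self, source: TransactionSource) -> bool {
        matches!(source, TransactionSource::Local | TransactionSource::InBlock)
    }
}

fn main() {
    let ext = TrustOnlyLocal;
    assert!(ext.validate(TransactionSource::Local));
    assert!(!ext.validate(TransactionSource::External));
}
```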

* remove pallet::getter from pallet-staking (paritytech#6184)

# Description

Part of paritytech#3326
Removes all `pallet::getter` occurrences from pallet-staking and
replaces them with explicit implementations.
Adds tests to verify that retrieval of the affected entities still works
as expected via the storage getters.

## Review Notes

1. Traits added to the `derive` attribute are used in tests (either
directly or indirectly).
2. The getters had to be placed in a separate impl block since the other
one is annotated with `#[pallet::call]` and that requires
`#[pallet::call_index(0)]` annotation on each function in that block. So
I thought it's better to separate them.
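The layout described in the second note can be sketched in plain Rust (FRAME attributes shown as comments; the item names are illustrative, not pallet-staking's actual storage):

```rust
struct Pallet;

// In FRAME this impl block would carry `#[pallet::call]`, which
// requires a `#[pallet::call_index(..)]` on every function inside it.
impl Pallet {
    fn set_minimum_validator_count(_new: u32) {
        // dispatchable logic would go here
    }
}

// Explicit getters live in a second impl block so they need no call
// indices. `minimum_validator_count` is a hypothetical example.
impl Pallet {
    fn minimum_validator_count() -> u32 {
        // In the real pallet this would read the storage item.
        4
    }
}

fn main() {
    Pallet::set_minimum_validator_count(5);
    assert_eq!(Pallet::minimum_validator_count(), 4);
}
```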

---------

Co-authored-by: Dónal Murray <[email protected]>
Co-authored-by: Guillaume Thiolliere <[email protected]>

* Refactor pallet `society` (paritytech#6367)

- [x] Removing `without_storage_info` and adding bounds on the stored
types for pallet `society` - issue
paritytech#6289
- [x] Migrating to benchmarking V2 -
paritytech#6202

---------

Co-authored-by: Guillaume Thiolliere <[email protected]>
Co-authored-by: Muharem <[email protected]>

* frame-benchmarking: Use correct components for pallet instances (paritytech#6435)

When using multiple instances of the same pallet, each instance was
executed with the components of all instances, while each instance
should only be executed with the components generated for that
particular instance. The problem was that the runtime only used the
pallet name to determine whether a certain pallet should be
benchmarked; with instances, the pallet name is the same for all of
them. The solution is to also take the instance name into account.

The fix requires to change the `Benchmark` runtime api to also take the
`instance`. The node side is written in a backwards compatible way to
also support runtimes which do not yet support the `instance` parameter.

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: clangenb <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>

* Get rid of `libp2p` dependency in `sc-authority-discovery` (paritytech#5842)

## Issue
paritytech#4859 
## Description
This PR removes `libp2p` types in authority-discovery and replaces them
with network backend agnostic types from `sc-network-types`.
The `sc-network` interface is therefore updated accordingly.

---------

Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Dmitry Markin <[email protected]>
Co-authored-by: Alexandru Vasile <[email protected]>

* backing: improve session buffering for runtime information (paritytech#6284)

## Issue
[[paritytech#3421] backing: improve session buffering for runtime
information](paritytech#3421)

## Description
In the current implementation of the backing module, certain pieces of
information, which remain unchanged throughout a session, are fetched
multiple times via runtime API calls. The goal of this task was to
introduce a local cache to store such session-stable information and
perform the runtime API call only once per session.

This PR implements caching specifically for the validators list, node
features, executor parameters, minimum backing votes threshold, and
validator-to-group mapping, which were previously fetched from the
runtime or computed each time `PerRelayParentState` was built. Now, this
information is cached and reused within the session.

## TODO
* [X] Create a separate struct for per-session caches;
* [X] Cache validators list;
* [X] Cache node features;
* [X] Cache executor parameters;
* [X] Cache minimum backing votes threshold;
* [X] Cache validator-to-group mapping;
* [X] Update tests to reflect these changes;
* [X] Add prdoc.

## For the next PR
Cache validator groups and any other session-stable data (if present).

* Add litep2p network protocol benches (paritytech#6455)

# Description

Add support to run networking protocol benchmarks with litep2p backend.

Now we can compare the work of both libp2p and litep2p backends for
notifications and request-response protocols.

Next step: extract worker initialization from the benchmark loop.

### Example run on local machine
<img width="916" alt="image"
src="https://github.com/user-attachments/assets/6bb9f90a-76a4-417e-b9d3-db27aa8a356f">


## Integration

Does not affect downstream projects.

## Review Notes


https://github.com/paritytech/polkadot-sdk/blob/d4d9502538e8a940b809ecc77843af3cea101e19/substrate/client/network/src/litep2p/service.rs#L510-L520

This method should be implemented to run request benchmarks.

---------

Co-authored-by: GitHub Action <[email protected]>

* Fixed bridges zombienet tests because of removed NetworkId::Rococo/Westend from xcm::v5 (paritytech#6465)

Closes: paritytech#6449

* Fix staking benchmark (paritytech#6463)

Found by @ggwpez 

Fix staking benchmark, error was introduced when migrating to v2:
paritytech#6025

---------

Co-authored-by: GitHub Action <[email protected]>

* add pipeline to build runtimes

* Follow up work on `TransactionExtension` - fix weights and clean up `UncheckedExtrinsic` (paritytech#6418)

Follow up to paritytech#3685
Partially fixes paritytech#6403

The main PR introduced bare support for the new extension version byte
as well as extension weights and benchmarking.

This PR:
- Removes the redundant extension version byte from the signed v4
extrinsic, previously unused and defaulted to 0.
- Adds the extension version byte to the inherited implication passed to
`General` transactions.
- Whitelists the `pallet_authorship::Author`, `frame_system::Digest` and
`pallet_transaction_payment::NextFeeMultiplier` storage items as they
are read multiple times by extensions for each transaction, but are hot
in memory and currently overestimate the weight.
- Whitelists the benchmark caller for `CheckEra` and `CheckGenesis` as
the reads are performed for every transaction and overestimate the
weight.
- Updates the umbrella frame weight template to work with the system
extension changes.
- Plans on re-running the benchmarks at least for the `frame_system`
extensions.

---------

Signed-off-by: georgepisaltu <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: gui <[email protected]>

* feat: add workflow to test readme generation (paritytech#6359)

# Description

Created a workflow to search for README.docify.md in the repo, and run
cargo build --features generate-readme in the dir of the file (assuming
it is related to a crate). If the git diff shows some output for the
README.md, then the file update wasn't pushed on the branch, and the
workflow fails.
Closes paritytech#6331

## Integration

Downstream projects that want to adopt this README checking workflow
should:

1. Copy the `.github/workflows/readme-check.yml` file to their
repository
2. Ensure any `README.docify.md` files in their project follow the
expected format
3. Implement the `generate-readme` feature flag in their Cargo.toml if
not already present

## Review Notes

This PR adds a GitHub Actions workflow that automatically verifies
README.md files are up-to-date with their corresponding README.docify.md
sources. Key implementation details:

- The workflow runs on both PRs and pushes to main
- It finds all `README.docify.md` files recursively in the repository
- For each file found:
- Builds the project with `--features generate-readme` in that directory
  - Checks if the README.md has any uncommitted changes
  - Fails if any README.md is out of sync

---------

Co-authored-by: Alexander Samusev <[email protected]>
Co-authored-by: Iulian Barbu <[email protected]>

* [pallet-revive] set logs_bloom (paritytech#6460)

Set the logs_bloom in the transaction receipt

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Cyrill Leutwiler <[email protected]>

* Support more types in TypeWithDefault (paritytech#6411)

# Description

When using `TypeWithDefault<u32, ..>` as the default nonce provider to
overcome the [replay
attack](https://wiki.polkadot.network/docs/transaction-attacks#replay-attack)
issue, it fails to compile because `TypeWithDefault<u32, ..>:
TryFrom<u64>` is not satisfied (which is required by the trait
`BaseArithmetic`).

This is because the blanket implementation `TryFrom<U> for T where U:
Into<T>` only yields `TryFrom<u16>` and `TryFrom<u8>` for `u32`, since
only `u8` and `u16` implement `Into<u32>`, while `u64` does not.

This PR fixes the issue by adding `TryFrom<u16/u32/u64/u128>` and
`From<u8/u16/u32/u64/u128>` impl (using macro) for
`TypeWithDefault<u8/u16/u32/u64/u128, ..>` and removing the blanket impl
(otherwise the compiler will complain about conflicting impl), such that
`TypeWithDefault<u8/u16/u32/u64/u128, ..>: AtLeast8/16/32Bit` is
satisfied.
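A minimal plain-Rust reproduction of the pattern, using a hypothetical `Wrapper` in place of `TypeWithDefault<u32, ..>`:

```rust
// A hypothetical wrapper mirroring the shape of TypeWithDefault<u32, ..>.
#[derive(Debug, PartialEq)]
struct Wrapper(u32);

// With only `From<u16>`, the core blanket impl
// `impl<T, U: Into<T>> TryFrom<U> for T` yields `TryFrom<u16>` for
// `Wrapper`, but nothing for `u64` -- the situation before this PR.
impl From<u16> for Wrapper {
    fn from(v: u16) -> Self {
        Wrapper(v.into())
    }
}

// The fix: an explicit conversion for the wider integer type (the real
// PR generates these with a macro for u8/u16/u32/u64/u128).
impl TryFrom<u64> for Wrapper {
    type Error = core::num::TryFromIntError;
    fn try_from(v: u64) -> Result<Self, Self::Error> {
        u32::try_from(v).map(Wrapper)
    }
}

fn main() {
    // Provided by the blanket impl via `From<u16>`:
    assert_eq!(Wrapper::try_from(3u16), Ok(Wrapper(3)));
    // Provided by the new explicit impl:
    assert_eq!(Wrapper::try_from(7u64), Ok(Wrapper(7)));
    assert!(Wrapper::try_from(u64::MAX).is_err());
}
```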

## Integration

This PR adds support for more types to be used with `TypeWithDefault`.
Existing code that used `u64` with `TypeWithDefault` should not be
affected; a unit test is added to ensure that.

## Review Notes

This PR simply makes `TypeWithDefault<u8/u16/u32/u64/u128, ..>:
AtLeast8/16/32Bit` satisfied

---------

Signed-off-by: linning <[email protected]>

* [pallet-revive] use evm decimals in call host fn (paritytech#6466)

This PR updates the pallet to use the 18-decimal EVM balance in
contract calls and host functions instead of the native balance.

It also updates the js example to add the piggy-bank solidity contract
that exposes the problem

---------

Co-authored-by: GitHub Action <[email protected]>

* network/litep2p: Update litep2p network backend to version 0.8.1 (paritytech#6484)

This PR updates the litep2p backend to version 0.8.1 from 0.8.0.
- Check the [litep2p updates forum
post](https://forum.polkadot.network/t/litep2p-network-backend-updates/9973/3)
for performance dashboards.
- Check [litep2p release
notes](paritytech/litep2p#288)

The v0.8.1 release includes key fixes that enhance the stability and
performance of the litep2p library. The focus is on long-running
stability and improvements to polling mechanisms.

### Long Running Stability Improvements

Addressed a bug in the connection limits functionality that incorrectly
tracked connections due for rejection. This caused long-running nodes to
reject all incoming connections, impacting overall stability, and led to
an artificial increase in inbound peers, which were not being properly
removed from the connection limit count.

This fix ensures more accurate tracking and management of peer
connections [paritytech#286](paritytech/litep2p#286).

### Polling implementation fixes

This release provides multiple fixes to the polling mechanism, improving
how connections and events are processed:
- Resolved an overflow issue in TransportContext’s polling index for
streams, preventing potential crashes
([paritytech#283](paritytech/litep2p#283)).
- Fixed a delay in the manager’s poll_next function that prevented
immediate polling of newly added futures
([paritytech#287](paritytech/litep2p#287)).
- Corrected an issue where the listener did not return Poll::Ready(None)
when it was closed, ensuring proper signal handling
([paritytech#285](paritytech/litep2p#285)).


### Fixed

- manager: Fix connection limits tracking of rejected connections
([paritytech#286](paritytech/litep2p#286))
- transport: Fix waking up on filtered events from `poll_next`
([paritytech#287](paritytech/litep2p#287))
- transports: Fix missing Poll::Ready(None) event from listener
([paritytech#285](paritytech/litep2p#285))
- manager: Avoid overflow on stream implementation for
`TransportContext`
([paritytech#283](paritytech/litep2p#283))
- manager: Log when polling returns Ready(None)
([paritytech#284](paritytech/litep2p#284))


### Testing Done

Started kusama nodes running side by side with a higher number of
inbound and outbound connections (500).
We previously tested with peers bounded at 50; this higher-connection
setup surfaced the issues fixed in the latest release.

With this high connection testing setup, litep2p outperforms libp2p in
almost every domain, from performance to the warnings / errors
encountered while operating the nodes.

TLDR: this is the version we need to test on kusama validators next

- Litep2p

Repo            | Count      | Level      | Triage report
-|-|-|-
polkadot-sdk | 409 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Peer disconnected with inflight after backoffs. Banned,
disconnecting. )
litep2p | 128 | warn | Refusing to add known address that corresponds to
a different peer ID
litep2p | 54 | warn | inbound identify substream opened for peer who
doesn't exist
polkadot-sdk | 7 | error | 💔 Called `on_validated_block_announce` with a
bad peer ID .*
polkadot-sdk    | 1          | warn       | ❌ Error while dialing .*: .*
polkadot-sdk | 1 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Invalid justification. Banned, disconnecting. )

- Libp2p

Repo            | Count      | Level      | Triage report
-|-|-|-
polkadot-sdk | 1023 | warn | 💔 Ignored block \(#.* -- .*\) announcement
from .* because all validation slots are occupied.
polkadot-sdk | 472 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Unsupported protocol. Banned, disconnecting. )
polkadot-sdk | 379 | error | 💔 Called `on_validated_block_announce` with
a bad peer ID .*
polkadot-sdk | 163 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Invalid justification. Banned, disconnecting. )
polkadot-sdk | 116 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Peer disconnected with inflight after backoffs. Banned,
disconnecting. )
polkadot-sdk | 83 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Same block request multiple times. Banned,
disconnecting. )
polkadot-sdk | 4 | warn | Re-finalized block #.* \(.*\) in the canonical
chain, current best finalized is #.*
polkadot-sdk | 2 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Genesis mismatch. Banned, disconnecting. )
polkadot-sdk | 2 | warn | Report .*: .* to .*. Reason: .*. Banned,
disconnecting. ( Not requested block data. Banned, disconnecting. )
polkadot-sdk | 2 | warn | Can't listen on .* because: .*
polkadot-sdk    | 1          | warn       | ❌ Error while dialing .*: .*

---------

Signed-off-by: Alexandru Vasile <[email protected]>

* sp-trie: minor fix to avoid possible panic during node decoding (paritytech#6486)

# Description

This PR is a simple fix consisting of adding a check to the process of
decoding nodes of a storage proof to avoid panicking when receiving
badly-constructed proofs, returning an error instead.

This would close paritytech#6485

## Integration

No changes have to be done downstream, and as such the version bump
should be minor.

---------

Co-authored-by: Bastian Köcher <[email protected]>

* migrate pallet-nomination-pool-benchmarking to benchmarking syntax v2 (paritytech#6302)

Migrates pallet-nomination-pool-benchmarking to benchmarking syntax v2.

Part of:
* paritytech#6202

---------

Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Guillaume Thiolliere <[email protected]>
Co-authored-by: Giuseppe Re <[email protected]>

* Migrate some pallets to benchmark v2 (paritytech#6311)

Part of paritytech#6202

---------

Co-authored-by: Guillaume Thiolliere <[email protected]>
Co-authored-by: Giuseppe Re <[email protected]>

* Mention that account might still be required in doc for feeless if. (paritytech#6490)

Co-authored-by: Bastian Köcher <[email protected]>

* Pure state sync refactoring (part-1) (paritytech#6249)

This pure refactoring of state sync is preparing for
paritytech#4. As the rough plan
in
paritytech#4 (comment),
there will be two PRs for the state sync refactoring.

This first PR focuses on isolating the function
`process_state_key_values()` as the central point for storing received
state data in memory. This function will later be adapted to forward the
state data directly to the DB layer for persistent sync. A follow-up PR
will handle the encapsulation of `StateSyncMetadata` to support this
persistent storage.

Although there are many commits in this PR, each commit is small and
intentionally incremental to facilitate a smoother review, please review
them commit by commit. Each commit should represent an equivalent
rewrite of the existing logic, with one exception
paritytech@bb447b2,
which has a slight deviation from the original but is correct IMHO.
Please give this commit special attention during the review.

* [WIP][ci] Add worfklow stopper (paritytech#4551)

PR that implements the workflow stopper - a custom solution to stop all
workflows if one of the required jobs fails. Previously we had the same
solution in GitLab and it saved a lot of compute. Because GitHub doesn't
have one unified pipeline and instead has multiple workflows, something
like this has to be implemented.

cc paritytech/ci_cd#939

* Remove `ProspectiveParachainsMode` usage in backing subsystem (paritytech#6215)

Since async backing parameters runtime api is released on all networks
the code in backing subsystem can be simplified by removing the usages
of `ProspectiveParachainsMode` and keeping only the branches of the code
under `ProspectiveParachainsMode::Enabled`.

The PR does that and reworks the tests in mod.rs to use async backing.
It's a preparation for
paritytech#5079

---------

Co-authored-by: Alin Dima <[email protected]>
Co-authored-by: command-bot <>

* sp-runtime: Be a little bit more functional :D (paritytech#6526)

Co-authored-by: GitHub Action <[email protected]>

* `TransactionPool` API uses `async_trait` (paritytech#6528)

This PR refactors the `TransactionPool` API to use `async_trait`,
replacing the `Pin<Box<...>>` pattern. This should improve readability
and maintainability.

The change is not altering any functionality.

---------

Co-authored-by: GitHub Action <[email protected]>

* sp-trie: correctly avoid panicking when decoding bad compact proofs (paritytech#6502)

# Description

Opening another PR because I added a test to check for my fix pushed in
paritytech#6486 and realized that for some reason I completely forgot how to code
and did not fix the underlying issue, since out-of-bounds indexing could
still happen even with the check I added. This one should fix that and,
as an added bonus, has a simple test used as an integrity check to make
sure future changes don't accidently revert this fix.

Now `sp-trie` should definitely not panic when faced with bad
`CompactProof`s. Sorry about that 😅

This, like paritytech#6486, is related to issue paritytech#6485

## Integration

No changes have to be done downstream, and as such the version bump
should be minor.

---------

Co-authored-by: Bastian Köcher <[email protected]>

* [pallet-revive] Update delegate_call to accept address and weight (paritytech#6111)

Enhance the `delegate_call` function to accept an `address` target
parameter instead of a `code_hash`. This allows direct identification of
the target contract using the provided address.
Additionally, introduce parameters for specifying a customizable
`ref_time` limit and `proof_size` limit, thereby improving flexibility
and control during contract interactions.

---------

Co-authored-by: Alexander Theißen <[email protected]>

* Fix metrics not shutting down if there are open connections (paritytech#6220)

Fix prometheus metrics not shutting down if there are open connections.
I fixed the same issue in the past but it broke again after a
dependency upgrade.

See also:

paritytech#1637

* Validator Re-Enabling (paritytech#5724)

Aims to implement Stage 3 of Validator Disabling as outlined here:
paritytech#4359

Features:
- [x] New Disabling Strategy (Staking level)
- [x] Re-enabling logic (Session level)
- [x] More generic disabling decision output
- [x] New Disabling Events

Testing & Security:
- [x] Unit tests
- [x] Mock tests
- [x] Try-runtime checks
- [x] Try-runtime tested on westend snap
- [x] Try-runtime CI tests
- [ ] Re-enabling Zombienet Test (?)
- [ ] SRLabs Audit

Closes paritytech#4745 
Closes paritytech#2418

---------

Co-authored-by: ordian <[email protected]>
Co-authored-by: Ankan <[email protected]>
Co-authored-by: Tsvetomir Dimitrov <[email protected]>

* Migrate pallet-democracy benchmarks to benchmark v2 syntax (paritytech#6509)

# Description

Migrates pallet-democracy benchmarks to benchmark v2 syntax
This is Part of paritytech#6202

---------

Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: command-bot <>
Co-authored-by: Dmitry Markin <[email protected]>
Co-authored-by: Alexandru Vasile <[email protected]>

* Forward logging directives to Polkadot workers (paritytech#6534)

This pull request forward all the logging directives given to the node
via `RUST_LOG` or `-l` to the workers, instead of only forwarding
`RUST_LOG`.

---------

Co-authored-by: GitHub Action <[email protected]>

* Support block gap created by fast sync (paritytech#5703)

This is part 2 of
paritytech#5406 (comment),
properly handling the block gap generated during fast sync.

Although paritytech#5406 remains unresolved due to the known issues in paritytech#5663, I
decided to open this PR sooner rather than later to speed up the overall
progress. I've tested the fast sync locally with this PR, and it appears
to be functioning well. (I was doing a fast sync from a discontinued
archive node locally, thus the issue highlighted in
paritytech#5663 (comment)
was bypassed exactly.)

Once the edge cases in paritytech#5663 are addressed, we can move forward by
removing the body attribute from the LightState block request and
complete the work on paritytech#5406. The changes in this PR are incremental, so
reviewing commit by commit should provide the best clarity.

cc @dmitry-markin

---------

Co-authored-by: Bastian Köcher <[email protected]>

* Pure state sync refactoring (part-2) (paritytech#6521)

This PR is the second part of the pure state sync refactoring,
encapsulating `StateSyncMetadata` as a separate entity. Now it's pretty
straightforward what changes are needed for the persistent state sync as
observed in the struct `StateSync`:

- `state`: redirect directly to the DB layer instead of being
accumulated in the memory.
- `metadata`: handle the state sync metadata on disk whenever the state
is forwarded to the DB, resume an ongoing state sync on a restart, etc.

---------

Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Alexandru Vasile <[email protected]>

* Add and test events in `pallet-conviction-voting` (paritytech#6544)

# Description

paritytech#4613 introduced events
for `pallet_conviction_voting::{vote, remove_vote, remove_other_vote}`.
However:
1. it did not include `unlock`
2. the pallet's unit tests were missing an update

## Integration

N/A

## Review Notes

This is as paritytech#6261 was, so
it is a trivial change.

* Increase default trie cache size to 1GiB (paritytech#6546)

The default trie cache size before was set to `64MiB`, which is quite
low to achieve real speed ups. `1GiB` should be a reasonable number as
the requirements for validators/collators/full nodes are much higher
when it comes to minimum memory requirements. Also the cache will not
use `1GiB` from the start and fills over time. The setting can be
changed by setting `--trie-cache-size BYTE_SIZE`.
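For illustration, the flag takes a raw byte count, so reverting to the old default would mean passing `64 * 1024 * 1024` bytes; a quick sketch of the two values:

```rust
fn main() {
    // Old and new default trie cache sizes, as raw byte counts.
    let old_default: u64 = 64 * 1024 * 1024; // 64 MiB
    let new_default: u64 = 1024 * 1024 * 1024; // 1 GiB
    assert_eq!(old_default, 67_108_864);
    assert_eq!(new_default, 1_073_741_824);
    // The value passed to `--trie-cache-size` is this raw byte count:
    println!("--trie-cache-size {new_default}");
}
```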

---------

Co-authored-by: GitHub Action <[email protected]>

* Bridges testing improvements (paritytech#6536)

This PR includes:  
- Refactored integrity tests to support standalone deployment of
`pallet-bridge-messages`.
- Refactored the `open_and_close_bridge_works` test case to support
multiple scenarios, such as:
  1. A local chain opening a bridge.  
  2. Sibling parachains opening a bridge.  
  3. The relay chain opening a bridge.  
- Previously, we added instance support for `pallet-bridge-relayer` but
overlooked updating the `DeliveryConfirmationPaymentsAdapter`.

---------

Co-authored-by: GitHub Action <[email protected]>

* Migrate pallet-scheduler benchmark to v2 (paritytech#6292)

Part of:

- paritytech#6202.

---------

Signed-off-by: Xavier Lau <[email protected]>
Co-authored-by: Giuseppe Re <[email protected]>
Co-authored-by: Guillaume Thiolliere <[email protected]>

* Removes constraint in `BlockNumberProvider` from treasury (paritytech#6522)

paritytech#3970 updated the
treasury pallet to support relay chain block number provider. However,
it added a constraint to the BlockNumberProvider to have the same block
number type as frame_system:

```rust
type BlockNumberProvider: BlockNumberProvider<BlockNumber = BlockNumberFor<Self>>;
```

This PR removes that constraint as suggested by @gui1117
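A plain-Rust sketch of the relaxation, with toy traits standing in for the FRAME ones (`Treasury`, `RelayChainProvider`, and the method names are illustrative):

```rust
use core::marker::PhantomData;

// Toy stand-in for the FRAME `BlockNumberProvider` trait.
trait BlockNumberProvider {
    type BlockNumber;
    fn current_block_number() -> Self::BlockNumber;
}

struct RelayChainProvider;
impl BlockNumberProvider for RelayChainProvider {
    // May differ from the local frame_system block number type.
    type BlockNumber = u32;
    fn current_block_number() -> u32 {
        42
    }
}

// Before: the config required `P: BlockNumberProvider<BlockNumber =
// BlockNumberFor<Self>>`. After this PR the bound is simply
// `P: BlockNumberProvider`, so the two number types may differ.
struct Treasury<P: BlockNumberProvider>(PhantomData<P>);

impl<P: BlockNumberProvider> Treasury<P> {
    fn current_block() -> P::BlockNumber {
        P::current_block_number()
    }
}

fn main() {
    assert_eq!(Treasury::<RelayChainProvider>::current_block(), 42);
}
```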

* add profile

* exclude trigger on push

---------

Signed-off-by: Adrian Catangiu <[email protected]>
Signed-off-by: georgepisaltu <[email protected]>
Signed-off-by: linning <[email protected]>
Signed-off-by: Alexandru Vasile <[email protected]>
Signed-off-by: Xavier Lau <[email protected]>
Co-authored-by: Joseph Zhao <[email protected]>
Co-authored-by: Giuseppe Re <[email protected]>
Co-authored-by: GitHub Action <[email protected]>
Co-authored-by: Alin Dima <[email protected]>
Co-authored-by: Bastian Köcher <[email protected]>
Co-authored-by: Nazar Mokrynskyi <[email protected]>
Co-authored-by: Michal Kucharczyk <[email protected]>
Co-authored-by: Sebastian Kunert <[email protected]>
Co-authored-by: Adrian Catangiu <[email protected]>
Co-authored-by: jpserrat <[email protected]>
Co-authored-by: davidk-pt <[email protected]>
Co-authored-by: DavidK <[email protected]>
Co-authored-by: Kian Paimani <[email protected]>
Co-authored-by: clangenb <[email protected]>
Co-authored-by: PG Herveou <[email protected]>
Co-authored-by: Doordashcon <[email protected]>
Co-authored-by: Oliver Tale-Yazdi <[email protected]>
Co-authored-by: Xavier Lau <[email protected]>
Co-authored-by: Jeeyong Um <[email protected]>
Co-authored-by: Francisco Aguirre <[email protected]>
Co-authored-by: Andrii <[email protected]>
Co-authored-by: Branislav Kontur <[email protected]>
Co-authored-by: Shawn Tabrizi <[email protected]>
Co-authored-by: Niklas Adolfsson <[email protected]>
Co-authored-by: Andrei Eres <[email protected]>
Co-authored-by: Guillaume Thiolliere <[email protected]>
Co-authored-by: Michał Gil <[email protected]>
Co-authored-by: Dónal Murray <[email protected]>
Co-authored-by: Muharem <[email protected]>
Co-authored-by: Kazunobu Ndong <[email protected]>
Co-authored-by: Dmitry Markin <[email protected]>
Co-authored-by: Alexandru Vasile <[email protected]>
Co-authored-by: Stephane Gurgenidze <[email protected]>
Co-authored-by: georgepisaltu <[email protected]>
Co-authored-by: Viraj Bhartiya <[email protected]>
Co-authored-by: Alexander Samusev <[email protected]>
Co-authored-by: Iulian Barbu <[email protected]>
Co-authored-by: Cyrill Leutwiler <[email protected]>
Co-authored-by: NingLin-P <[email protected]>
Co-authored-by: Tobi Demeco <[email protected]>
Co-authored-by: Guillaume Thiolliere <[email protected]>
Co-authored-by: Liu-Cheng Xu <[email protected]>
Co-authored-by: Tsvetomir Dimitrov <[email protected]>
Co-authored-by: Ermal Kaleci <[email protected]>
Co-authored-by: Alexander Theißen <[email protected]>
Co-authored-by: tmpolaczyk <[email protected]>
Co-authored-by: Maciej <[email protected]>
Co-authored-by: ordian <[email protected]>
Co-authored-by: Ankan <[email protected]>
Co-authored-by: Alexandre R. Baldé <[email protected]>
Co-authored-by: Xavier Lau <[email protected]>
Co-authored-by: gupnik <[email protected]>
github-merge-queue bot pushed a commit that referenced this issue Nov 22, 2024
Step in #3268

This PR adds the ability for these pallets to specify their source of
the block number. This is useful when these pallets are migrated from
the relay chain to a parachain and vice versa.

This change is backwards compatible:
1. if the `BlockNumberProvider` continues to use the system pallet's
block number, or
2. when a pallet deployed on the relay chain is moved to a parachain
but still uses the relay chain's block number.

However, we would need migrations if the deployed pallets are upgraded
on an existing parachain and the `BlockNumberProvider` uses the relay
chain block number.

---------

Co-authored-by: Kian Paimani <[email protected]>
Krayt78 pushed a commit to Krayt78/polkadot-sdk that referenced this issue Dec 18, 2024
…tech#5723)

Step in paritytech#3268

This PR adds the ability for these pallets to specify their source of
the block number. This is useful when these pallets are migrated from
the relay chain to a parachain and vice versa.

This change is backwards compatible:
1. if the `BlockNumberProvider` continues to use the system pallet's
block number, or
2. when a pallet deployed on the relay chain is moved to a parachain
but still uses the relay chain's block number.

However, we would need migrations if the deployed pallets are upgraded
on an existing parachain and the `BlockNumberProvider` uses the relay
chain block number.

---------

Co-authored-by: Kian Paimani <[email protected]>