Add storage bounds for pallet staking and clean up deprecated non-paged exposure storages #6445
base: master
Conversation
```diff
-ExposurePage { page_total: Default::default(), others: vec![] }
+ExposurePage {
+	page_total: Default::default(),
+	others: WeakBoundedVec::force_from(vec![], None),
```
If you want, you can create a `fn new` method on `WeakBoundedVec`.
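For illustration, a minimal sketch of what such a constructor could look like. This uses a simplified stand-in type, not the real `sp_runtime` `WeakBoundedVec`; the field names and the `()` bound marker are placeholders:

```rust
use std::marker::PhantomData;

// Simplified stand-in for `WeakBoundedVec<T, S>`: a vec plus a phantom
// type parameter carrying the (weakly enforced) bound.
#[derive(Default, Debug)]
struct WeakBoundedVec<T, S> {
    items: Vec<T>,
    _bound: PhantomData<S>,
}

impl<T, S> WeakBoundedVec<T, S> {
    // The suggested convenience constructor: an empty vec, trivially
    // within any bound.
    fn new() -> Self {
        Self { items: Vec::new(), _bound: PhantomData }
    }
}

fn main() {
    // Both spellings produce the same empty value, which is why the
    // thread settles on simply calling `default()`.
    let a: WeakBoundedVec<u32, ()> = WeakBoundedVec::new();
    let b: WeakBoundedVec<u32, ()> = Default::default();
    assert!(a.items.is_empty());
    assert!(b.items.is_empty());
}
```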
Yes, I just opened this PR
Just call `default()`.
Ok, changed to `default()` in 4b58f0a.
```diff
 {
 	/// Splits an `Exposure` into `PagedExposureMetadata` and multiple chunks of
 	/// `IndividualExposure` with each chunk having a maximum of `page_size` elements.
-	pub fn into_pages(
+	pub fn into_pages<MaxExposurePageSize>(
```
This function is only used once, and `page_size` is equal to `MaxExposurePageSize`. I think we should refactor: remove the `page_size` argument. If needed, we can always introduce a `try_into_pages` that takes an argument. In general I feel it is not good to add many implicit constraints, like here: `page_size` must be less than `MaxExposurePageSize`, otherwise some elements will be ignored (with a `log::error`, but still).
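To make the suggestion concrete, here is a hedged sketch of `into_pages` reading the page size from the bound type itself instead of a runtime argument. The `Get` trait is a simplified stand-in for `frame_support`'s trait of the same name, and plain `u64` entries stand in for the real `IndividualExposure` values:

```rust
// Simplified stand-in for frame_support's `Get` trait.
trait Get<T> {
    fn get() -> T;
}

// Illustrative bound type; in the runtime this would be a configured constant.
struct MaxExposurePageSize;
impl Get<u32> for MaxExposurePageSize {
    fn get() -> u32 {
        2
    }
}

// Splitting backers into pages: the page size now comes from the type
// parameter, so no separate `page_size` argument can silently disagree
// with `MaxExposurePageSize` and drop elements.
fn into_pages<P: Get<u32>>(others: Vec<u64>) -> Vec<Vec<u64>> {
    let page_size = P::get().max(1) as usize;
    others.chunks(page_size).map(|c| c.to_vec()).collect()
}

fn main() {
    let pages = into_pages::<MaxExposurePageSize>(vec![10, 20, 30, 40, 50]);
    assert_eq!(pages.len(), 3);
    assert_eq!(pages[2], vec![50]);
}
```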
Changed in 489fc9d
```diff
@@ -1184,7 +1184,6 @@ impl parachains_slashing::Config for Runtime {
 		ReportLongevity,
 	>;
 	type WeightInfo = parachains_slashing::TestWeightInfo;
-	type BenchmarkingConfig = parachains_slashing::BenchConfig<200>;
```
The overall observation is that we temporarily had to add a lot of bounds to pallets just for benchmarks, but now that staking has proper bounds for all its storage, they are no longer needed ✅
```rust
invulnerables: BoundedVec::try_from(
	initial_authorities.iter().map(|x| x.0.clone()).collect::<Vec<_>>()
)
.expect("Too many invulnerable validators!"),
```
The error message can be a bit more informative, hinting at which config should be tweaked if this `expect` is ever reached.
Error message improved in 0d47dc5.
```diff
@@ -1841,6 +1855,7 @@ pub mod migrations {
 	parachains_shared::migration::MigrateToV1<Runtime>,
 	parachains_scheduler::migration::MigrateV2ToV3<Runtime>,
 	pallet_staking::migrations::v16::MigrateV15ToV16<Runtime>,
+	pallet_staking::migrations::v17::MigrateV16ToV17<Runtime>,
```
You can check the logs for this migration in this CI job:
https://github.com/paritytech/polkadot-sdk/actions/runs/12412281493/job/34651846644?pr=6445
There seem to be some error-ish logs in there:

```
💸 Migration failed for ClaimedRewards from v16 to v17.
```
Solved with `force_from(...)` for `WeakBoundedVec`. I noticed that old individual validator exposures have up to 205 pages, while we would expect them to be at most 20 in the future. @Ank4n, what do you think is the best approach: keep 20 as the limit for `MaxRewardPagesPerValidator` for future items and force old items into a `WeakBoundedVec`, or increase the `MaxRewardPagesPerValidator` limit to something like 250 to accommodate old items?

EDIT: I can also see a third option: merge pages for old items until there are fewer than 20.
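The third option amounts to re-chunking during the migration: flatten the old pages and cut them into larger pages that fit the new bound. A minimal sketch, with plain `Vec`s standing in for the real exposure-page types (in the actual migration the per-page totals and any claimed-rewards bookkeeping would also have to be recomputed, which this deliberately ignores):

```rust
// Merge old exposure pages into new pages of at most `max_page_len`
// entries each, preserving entry order.
fn merge_pages<T: Clone>(old_pages: Vec<Vec<T>>, max_page_len: usize) -> Vec<Vec<T>> {
    let all: Vec<T> = old_pages.into_iter().flatten().collect();
    all.chunks(max_page_len.max(1)).map(|c| c.to_vec()).collect()
}

fn main() {
    // 205 old single-entry pages collapse into far fewer, larger pages:
    // with 11 entries per page, 205 entries need ceil(205 / 11) = 19 pages,
    // comfortably under a 20-page limit.
    let old: Vec<Vec<u32>> = (0..205).map(|i| vec![i]).collect();
    let merged = merge_pages(old, 11);
    assert_eq!(merged.len(), 19);
}
```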
```diff
@@ -745,6 +746,10 @@ impl pallet_staking::Config for Runtime {
 	type WeightInfo = pallet_staking::weights::SubstrateWeight<Runtime>;
 	type BenchmarkingConfig = StakingBenchmarkingConfig;
 	type DisablingStrategy = pallet_staking::UpToLimitWithReEnablingDisablingStrategy;
+	type MaxInvulnerables = ConstU32<20>;
+	type MaxRewardPagesPerValidator = ConstU32<20>;
+	type MaxValidatorsCount = ConstU32<300>;
```
Other pallets like session, babe, grandpa and such also have a notion of `MaxValidators` (expressed as `MaxAuthorities`) that should be equal to the max validators of this pallet. Within this file, you can use one `pub const MaxValidators: u32 = 300;` everywhere to unify it.

Taking it a step further, you can expose this as part of `trait SessionManager`: the trait can declare `trait SessionManager { type MaxAuthorities = <set-by-staking-pallet>; }`. Then, within `pallet-session`, which consumes `SessionManager`, you can do:

```rust
fn integrity_check() {
	// A way to express within pallet-session that whoever implements
	// `SessionManager` should have a compatible `MaxAuthorities`.
	assert!(T::SessionManager::MaxAuthorities::get() >= T::MaxAuthorities::get());
}
```

This might be too much for this PR, but it is good for you to be familiar with the pattern: whenever there are multiple parameters within two pallets that have a logical dependency (they have to be equal, or one has to be larger than the other), you can remove the implicitness like this.
All GitHub workflows were cancelled due to the failure of one of the required jobs.
This is part of #6289 and necessary for the Asset Hub migration.
Building on the observations and suggestions from #255.
Changes

- Add `MaxInvulnerables` to bound `Invulnerables`: `Vec` -> `BoundedVec` (`westend`).
- Add `MaxDisabledValidators` to bound `DisabledValidators`: `Vec` -> `BoundedVec` (set to `MaxValidatorsCount` according to the current disabling strategy).
- Remove deprecated `ErasStakers` and `ErasStakersClipped` (see "Tracker issue for cleaning up old non-paged exposure logic in staking pallet" #433).
- Use `MaxExposurePageSize` to bound the `ErasStakersPaged` mapping to exposure pages: each `ExposurePage.others` `Vec` is turned into a `WeakBoundedVec` to allow easy and quick changes to this bound.
- Add `MaxBondedEras` to bound `BondedEras`: `Vec` -> `BoundedVec`. Set to `BondingDuration::get() + 1` everywhere to include both endpoints of the time interval [`current_era - BondingDuration::get()`, `current_era`]. Notice that this was done manually in every test and runtime, so I wonder if there is a better way to ensure that `MaxBondedEras::get() == BondingDuration::get() + 1` everywhere.
- Add `MaxRewardPagesPerValidator` to bound `ClaimedRewards`: `Vec` of pages -> `WeakBoundedVec` to allow easy and quick changes to this parameter.
- Remove the `MaxValidatorsCount` optional storage item and add a `MaxValidatorsCount` mandatory config parameter.
- Bound `EraRewardPoints.individual`: `BTreeMap` -> `BoundedBTreeMap`.

TO DO

Slashing storage items will be bounded in another PR:
- `UnappliedSlashes`
- `SlashingSpans`
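On keeping `MaxBondedEras::get() == BondingDuration::get() + 1` in sync everywhere: one pattern is to derive one parameter from the other through a `Get` adapter instead of re-stating the `+ 1` in each runtime and test. A hedged sketch, where `Get` is a simplified stand-in for `frame_support::traits::Get` and the value 28 is an arbitrary example:

```rust
// Simplified stand-in for frame_support's `Get` trait.
trait Get<T> {
    fn get() -> T;
}

// Example bonding duration; in a runtime this is a configured constant.
struct BondingDuration;
impl Get<u32> for BondingDuration {
    fn get() -> u32 {
        28
    }
}

// Adapter: always one more than the bonding duration, so both endpoints
// of [current_era - BondingDuration, current_era] fit. Defining
// `MaxBondedEras` this way makes the invariant hold by construction.
struct MaxBondedEras;
impl Get<u32> for MaxBondedEras {
    fn get() -> u32 {
        BondingDuration::get() + 1
    }
}

fn main() {
    assert_eq!(MaxBondedEras::get(), 29);
    assert_eq!(MaxBondedEras::get(), BondingDuration::get() + 1);
}
```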