PoV Benchmarking Tracking Issue #398
IIUC, instead, I think the runtime can provide a storage description, and benchmarks will make use of it to make a proper estimation of the PoV size for calls:

```rust
struct NodeDescription {
    /// The maximum size of the value of the node.
    max_value_size: usize,
    /// The depth of the node in the trie.
    max_node_depth: usize,
}

struct StorageDescription {
    /// Associate a node description with all keys starting with a specific prefix.
    // E.g. vec![
    //     (
    //         twox128(System) ++ twox128(Account),
    //         NodeDescription {
    //             max_value_size: BoundedEncodedLen::of(AccountId),
    //             max_node_depth: log16(number_of_pallets_in_runtime)
    //                 + log16(number_of_storages_in_pallet)
    //                 + log16(number_of_keys_in_account_storage),
    //         },
    //     )
    // ]
    prefix_description: Vec<(Prefix, NodeDescription)>,
    /// Associate a node description with a specific key.
    // E.g. for the ":code" key.
    key_description: Vec<(Key, NodeDescription)>,
}
```

So we need a way to give the number of keys in a storage, probably helped by the pallet macro with a new attribute.
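To make the idea concrete, here is a minimal standalone sketch (all names and constants are assumptions, not Substrate APIs) of how a `NodeDescription` could be turned into an upper bound on the proof bytes contributed by a single storage read: every trie node on the path from the root is counted at its maximum size, plus the value itself.

```rust
/// Assumed maximum size of a single trie node: 16 child hashes of
/// 32 bytes each, plus 32 bytes of headers/encoding overhead.
/// (The real figure depends on the trie codec; this is illustrative.)
const MAX_TRIE_NODE_SIZE: usize = 16 * 32 + 32;

struct NodeDescription {
    /// The maximum size of the value of the node.
    max_value_size: usize,
    /// The depth of the node in the trie.
    max_node_depth: usize,
}

impl NodeDescription {
    /// Upper bound on proof bytes for one read of this node: every
    /// node on the path from the root, plus the value itself.
    fn max_pov_size(&self) -> usize {
        self.max_node_depth * MAX_TRIE_NODE_SIZE + self.max_value_size
    }
}

fn main() {
    // E.g. an account entry: an 80-byte value at trie depth 6.
    let account = NodeDescription { max_value_size: 80, max_node_depth: 6 };
    println!("{}", account.max_pov_size()); // 6 * 544 + 80 = 3344
}
```

A benchmark could sum such per-read bounds over all keys a call touches; shared path prefixes make this an overestimate, which is the point the later comments address.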
EDIT: probably not needed, as we can overestimate a bit and adjust once the transaction is processed. And we can improve this in the future.

Or maybe we want something more precise, so that if the storage is queried multiple times, the size related to the depth_before_prefix is not added again and the size related to the depth_after_prefix is amortised. Maybe, to allow an even more precise description, we should have something nested.
I believe we only need that for
I think it's probably not worth getting too elaborate with the estimate. Given that it's cheap and easy to know the actual PoV size and refund the weight once a tx has been processed, we can probably stick with a simple upper bound for PoV size estimates.
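The charge-then-refund scheme described above can be sketched in a few lines (names are hypothetical, not the actual FRAME weight-refund API): charge the benchmarked upper bound up front, then give back the unused portion once the actual proof size is known.

```rust
/// Refund the unused portion of a pre-charged PoV weight.
/// `estimated` is the benchmarked upper bound charged before dispatch;
/// `actual` is the proof size measured after the tx has been processed.
fn refund_pov_weight(estimated: u64, actual: u64) -> u64 {
    // Never refund more than was charged: if the actual size somehow
    // exceeds the estimate, saturate to zero instead of underflowing.
    estimated.saturating_sub(actual)
}

fn main() {
    let estimated = 3344; // benchmarked upper bound for the call
    let actual = 1210;    // measured proof bytes after execution
    println!("{}", refund_pov_weight(estimated, actual)); // 2134
}
```

This is why a simple overestimate is acceptable: the only cost of a loose bound is temporarily reserved block space, not permanently wasted weight.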
This is a meta issue to track the things needed to add benchmarking support for PoV size, critical for launching Parachains on Polkadot.
- Add `MaxEncodedLen` requirement to storage (meta issue: `MaxEncodedLen` tracking issue substrate#8719)
- Allow to specify some max number of values for storages in pallet macro (substrate#8735)
- Add `BoundedVec` to Storage Primitives (`BoundedVec` + Shims for Append/DecodeLength substrate#8556)
- Migrate `Vec` to `BoundedVec` (meta issue: [FRAME Core] Remove `without_storage_info` on pallets #323)
- `(computation_weight, pov_weight)` and abstractions (The rest of the way to Weights V2 (Tracking Issue) #256)
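The first two items above revolve around one idea: every storage value must have a statically known maximum encoded size, so the trie-value component of the PoV bound can be computed without running the chain. A minimal standalone sketch of that idea (the real trait lives in `parity-scale-codec`/`frame_support`; this simplified version is illustrative only, and the 4-byte length prefix is an assumption):

```rust
/// Simplified stand-in for the real `MaxEncodedLen` trait.
trait MaxEncodedLen {
    fn max_encoded_len() -> usize;
}

impl MaxEncodedLen for u32 {
    fn max_encoded_len() -> usize { 4 }
}

/// A Vec capped at N elements, so its encoded size is bounded.
/// (Stand-in for Substrate's `BoundedVec`.)
#[allow(dead_code)]
struct BoundedVec<T, const N: usize>(Vec<T>);

impl<T: MaxEncodedLen, const N: usize> MaxEncodedLen for BoundedVec<T, N> {
    fn max_encoded_len() -> usize {
        // Assumed worst-case length prefix of 4 bytes, plus N items.
        4 + N * T::max_encoded_len()
    }
}

fn main() {
    println!("{}", <BoundedVec<u32, 8>>::max_encoded_len()); // 4 + 8 * 4 = 36
}
```

An unbounded `Vec` has no such bound, which is why the migration items above exist: any storage item without a `MaxEncodedLen` implementation makes the whole PoV estimate unsound.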