PoV Benchmarking Tracking Issue #398

Open
6 of 9 tasks
shawntabrizi opened this issue Apr 21, 2021 · 3 comments
Assignees
Labels
I6-meta A specific issue for grouping tasks or bugs of a specific category.

Comments

@shawntabrizi
Member

shawntabrizi commented Apr 21, 2021

This is a meta issue to track the work needed to add benchmarking support for PoV size, which is critical for launching parachains on Polkadot.

@gui1117
Contributor

gui1117 commented May 4, 2021

IIUC, instead of generating a "full storage" before running benchmarks, we now prefer the runtime to provide, for each storage, the depth of the node holding the values and the maximum encoded size of the value (related issue: paritytech/substrate#8719).

I think the runtime can provide a storage description, and the benchmarks can use it to make a proper estimate of the PoV size for calls.
The storage description could look like this:

/// Description of the trie node holding a storage value.
struct NodeDescription {
	/// The maximum encoded size of the value in the node.
	max_value_size: usize,
	/// The maximum depth of the node in the trie.
	max_node_depth: usize,
}

/// Raw key prefix, e.g. twox128(pallet_name) ++ twox128(storage_name).
type Prefix = Vec<u8>;
/// Full raw storage key, e.g. the well-known ":code" key.
type Key = Vec<u8>;

struct StorageDescription {
	/// Associate a node description with all keys starting with a specific prefix.
	// E.g. vec![
	//   (
	//     twox128(System) ++ twox128(Account),
	//     NodeDescription {
	//       max_value_size: BoundedEncodedLen::of(AccountInfo),
	//       max_node_depth: log16(number_of_pallets_in_runtime)
	//         + log16(number_of_storages_in_pallet)
	//         + log16(number_of_keys_in_account_storage),
	//     },
	//   )
	// ]
	prefix_description: Vec<(Prefix, NodeDescription)>,

	/// Associate a node description with one specific key.
	// E.g. for the well-known ":code" key.
	key_description: Vec<(Key, NodeDescription)>,
}
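
For illustration, a minimal sketch of how the benchmarks could turn such a description into an estimate. The function name, the linear scan, and the per-node overhead constant NODE_SIZE are assumptions for the sketch, not part of the proposal:

// Assumed per-trie-node proof overhead in bytes; purely illustrative.
const NODE_SIZE: usize = 512;

/// Upper-bound PoV estimate for a set of accessed keys: exact-key
/// descriptions take precedence over prefix descriptions.
fn estimate_pov_size(desc: &StorageDescription, accessed: &[Key]) -> usize {
	accessed
		.iter()
		.filter_map(|key| {
			desc.key_description
				.iter()
				.find(|(k, _)| k == key)
				.or_else(|| desc.prefix_description.iter().find(|(p, _)| key.starts_with(p)))
		})
		.map(|(_, node)| node.max_value_size + node.max_node_depth * NODE_SIZE)
		.sum()
}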

So we need a way to give the number of keys in a storage, probably helped by the pallet macro with a new attribute like #[pallet::max_size(N)].
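
For illustration, a hypothetical use of such an attribute on a map (the attribute does not exist in the pallet macro today; the storage item and the bound here are made up):

#[pallet::storage]
// Hypothetical attribute: upper bound on the number of keys in the map.
#[pallet::max_size(1_000_000)]
pub type Balances<T: Config> = StorageMap<_, Blake2_128Concat, T::AccountId, u128>;

With such a bound, the per-key depth below the map's prefix could be derived as log16(1_000_000) ≈ 5 additional trie levels.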

@gui1117
Contributor

gui1117 commented May 4, 2021

EDIT: probably not needed, as we can overestimate a bit and adjust once the transaction is processed. And we can improve this in the future.

Or maybe we want something more precise than max_node_depth: log16(number_of_pallets_in_runtime) + log16(number_of_storages_in_pallet) + log16(number_of_keys_in_account_storage), like:

depth_after_prefix: log16(number_of_keys_in_account_storage)
depth_before_prefix: log16(number_of_pallets_in_runtime) + log16(number_of_storages_in_pallet)

So that if the storage is queried multiple times, the size related to depth_before_prefix is not added again, and the size related to depth_after_prefix is amortised.
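
A minimal sketch of what the split buys (names assumed): the path above the prefix is proved once, while the per-key part is charged per query.

/// Split depth description, as proposed above.
struct SplitDepth {
	/// Trie depth from the root down to the storage prefix.
	depth_before_prefix: usize,
	/// Trie depth from the prefix down to an individual key.
	depth_after_prefix: usize,
}

/// Upper bound on the trie nodes proved for `n` queries under one prefix:
/// the path to the prefix is counted once, the per-key paths once each.
fn proved_nodes_upper_bound(d: &SplitDepth, n: usize) -> usize {
	d.depth_before_prefix + n * d.depth_after_prefix
}

In practice the depth_after_prefix part can be amortised further, since queries under the same prefix also share the upper nodes of that subtree.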

Maybe, to allow an even more precise description, we should have something nested:
so that the twox128(System) prefix has a depth and contains multiple possible suffixes (one for each storage), one of them being twox128(Account).
That way, if you query 2 values from the System pallet but from 2 different storages, you don't count the nodes leading to the twox128(System) prefix twice.
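
A sketch of that nested shape (names assumed), where each level owns only the depth needed to reach it from its parent, so shared path segments are counted once:

/// Nested storage description.
struct PrefixNode {
	/// Trie depth from the parent prefix down to this prefix.
	depth: usize,
	/// Value description, if values are stored directly under this prefix.
	value: Option<NodeDescription>,
	/// Child prefixes, e.g. one twox128(storage_name) entry per storage.
	children: Vec<(Prefix, PrefixNode)>,
}

Querying System::Account and System::BlockHash in the same block would then count the depth of the twox128(System) level only once.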

@gui1117 gui1117 closed this as completed May 4, 2021
@gui1117 gui1117 reopened this May 4, 2021
@coriolinus

we need a way to give the number of keys in a storage, probably helped by the pallet macro with a new attribute like #[pallet::max_size(N)].

I believe we only need that for StorageMap, StorageDoubleMap, StorageNMap, etc. Spitballing earlier, we had the idea that, for backwards compatibility, if a max_size attribute is not included, we could use a default value of 300_000. This should ease the migration.

maybe we want something more precise than max_node_depth: log16(number_of_pallets_in_runtime) + log16(number_of_storages_in_pallet) + log16(number_of_keys_in_account_storage)

I think it's probably not worth getting too elaborate with the estimate. Given that it's cheap and easy to know the actual PoV size and refund the weight once a tx has been processed, we can probably stick with a simple upper bound for PoV size estimates.
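
A sketch of that charge-then-refund pattern (names assumed; the surrounding weight machinery is omitted). It is similar in spirit to the post-dispatch actual_weight correction FRAME already applies:

/// Weight to hand back after execution: charge the upper-bound estimate up
/// front, then refund down to the actually recorded proof size.
fn pov_refund(estimated_pov_bytes: u64, actual_pov_bytes: u64) -> u64 {
	// saturating_sub: never refund more than was charged.
	estimated_pov_bytes.saturating_sub(actual_pov_bytes)
}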

@juangirini juangirini transferred this issue from paritytech/substrate Aug 24, 2023
@the-right-joyce the-right-joyce added I6-meta A specific issue for grouping tasks or bugs of a specific category. and removed J1-meta labels Aug 25, 2023