add encointer runtime #80
Conversation
* Switch to custom `BlockAnnounceData` Instead of sending a `SignedFullStatement`, this switches to a new struct `BlockAnnounceData` that is sent alongside the block announcement. The signed full statement contains the candidate commitments, meaning we could end up sending a full runtime upgrade alongside a block announcement... To prevent this, we now only send the candidate receipt and the compact statement. * Update to latest polkadot
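For illustration only, a minimal sketch of what such a `BlockAnnounceData` could look like; the field names and exact types here are assumptions based on the description above, not the actual cumulus definition:

use polkadot_primitives::v1::{CandidateReceipt, CompactStatement, ValidatorSignature};

/// Sketch: data sent alongside a block announcement instead of a full
/// `SignedFullStatement`, so the candidate commitments (which may contain
/// a full runtime upgrade) are not shipped with every announcement.
struct BlockAnnounceData {
    /// The candidate receipt; it carries only the hash of the commitments.
    receipt: CandidateReceipt,
    /// The compact statement, essentially `Seconded(candidate_hash)`.
    statement: CompactStatement,
    /// The validator's signature on the compact statement.
    signature: ValidatorSignature,
}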
…ws#313) This changes the collator to print an error if the block that we cannot find is the genesis block, instead of only logging this as a `debug` message. This should help people when they have registered the wrong genesis state on the relay chain.
* Complete telemetry for parachain & relaychain * Update Substrate & Polkadot
…olkadot-fellows#315) * Block announce validation should use the correct `Validation` result The error variant is just for internal errors; we always need to return `Failure` when the other node sends us an invalid statement. * Update network/src/lib.rs Co-authored-by: Sergei Shulepov <[email protected]>
* MQC auth Update polkadot WIP * Update polkadot * Silly syntax errors * Fix typo * Leave some comments and docs * Apply suggestions from code review Co-authored-by: Bastian Köcher <[email protected]> * Introduce the MessageQueueChain structure * Move the HRMP channel relevance check below * Fix the `receive_hrmp_after_pause` test * ValidationData is passed by reference * Replace "to cumulus" with "to the collator" * Update the tests so that they are the same as in polkadot Co-authored-by: Bastian Köcher <[email protected]>
* Fix HRMP Location * Add call to xcm handler
…adot-fellows#317) * Update polkadot * Migrate all uses of MQC heads to merkle proofs * Mass rename `relay_parent_storage_root` * Restore parachain-system tests * Update polkadot and libp2p swarm for testing * Collapse match into an if let Co-authored-by: Bastian Köcher <[email protected]> * Fix compilation error in test-service Co-authored-by: Bastian Köcher <[email protected]>
* Move pallets to pallets folder and rename them * Move genesis file to service * Rename primitives to primitives-core * Delete cumulus-runtime * Move stuff to client folder and rename
This renames and moves the `SystemInherentData` into its own crate. The struct is now called `ParachainInherentData`. Besides moving the struct, this also moves the code for creating this struct into this crate.
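As a rough sketch, the renamed inherent bundles everything a parachain block needs to know about its relay-chain context; the field set below is an assumption for illustration, not the authoritative definition from the new crate:

use cumulus_primitives_core::{InboundDownwardMessage, InboundHrmpMessage, ParaId, PersistedValidationData};
use sp_std::collections::btree_map::BTreeMap;
use sp_std::vec::Vec;
use sp_trie::StorageProof;

/// Sketch of the renamed `ParachainInherentData`.
pub struct ParachainInherentData {
    /// The validation data persisted by the relay chain for this candidate.
    pub validation_data: PersistedValidationData,
    /// A storage proof of the relevant relay-chain state.
    pub relay_chain_state: StorageProof,
    /// Downward messages (relay chain -> parachain).
    pub downward_messages: Vec<InboundDownwardMessage>,
    /// Horizontal messages from sibling parachains, keyed by sender.
    pub horizontal_messages: BTreeMap<ParaId, Vec<InboundHrmpMessage>>,
}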
* Update Substrate & Polkadot * Update chainspecs * Update again to fix test
* Move consensus to consensus-common crate * Move the parachain consensus out of the collator * Add first relay chain consensus stuff * Remove some warnings * Fix more stuff * Fix collator test * Change `ParachainConsensus` to take a mutable self * Make everything compile * Feedback
…kadot-fellows#342) Currently validation data is shared via a well-known key between the parachain system pallet and the validate block implementation. This PR changes this by passing the parachain system directly to the validate block implementation to make use of it. Besides that, we also store the validation params in a thread-local variable to make them inspectable by the parachain system. This moves the validation of validation data and validation params into the parachain system pallet directly, instead of having it hidden inside the validate block implementation. Fixes: paritytech/cumulus#217
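A minimal sketch of the thread-local pattern this describes; the struct, fields, and function names are illustrative placeholders, not the actual cumulus identifiers:

use std::cell::RefCell;

/// Placeholder for the validation params handed to the validate-block logic.
pub struct ValidationParams {
    pub relay_parent_number: u32,
}

thread_local! {
    /// Written by the validate-block implementation, inspected by the
    /// parachain system pallet.
    static VALIDATION_PARAMS: RefCell<Option<ValidationParams>> = RefCell::new(None);
}

/// Store the params for later inspection.
pub fn set_validation_params(params: ValidationParams) {
    VALIDATION_PARAMS.with(|p| *p.borrow_mut() = Some(params));
}

/// Run a closure over the stored params, if any.
pub fn with_validation_params<R>(f: impl FnOnce(&ValidationParams) -> R) -> Option<R> {
    VALIDATION_PARAMS.with(|p| p.borrow().as_ref().map(f))
}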
* Update to latest Substrate and Polkadot * log::debug! * Add log dependencies to runtime * Comma * Fix tests
* Add a command to purge the relay chain only * WIP * Update rococo-parachains/src/cli.rs Co-authored-by: Bastian Köcher <[email protected]> * Move cli stuff to its own crate * Copyright dates * Test not working for some reason... * WIP * Revert "WIP" This reverts commit f97cd63742c7df822e4a6e52a29db5e0f56b7bfa. * Fix test to use provided relay chain * Apply suggestions from code review Co-authored-by: Bastian Köcher <[email protected]> * Add hint about which database could not be purged Co-authored-by: Bastian Köcher <[email protected]>
* Update Substrate & Polkadot * Remove unused code
* Support xcm local execution in xcm handler. * Add docs.
* Create builder for test nodes * Fix syncing issue * Adds missing file
Some of the CI migration errors imply that there are pallet migrations missing, especially for the … I think we should not ignore this.
/// Migrations to apply on runtime upgrade.
pub type Migrations = (
    // Fixing the scheduler with a local migration is necessary because we have missed
    // intermediate migrations. The safest migration is therefore to clear all storage and
    // bump the StorageVersion.
    migrations_fix::scheduler::v4::MigrateToV4<Runtime>,
    // Here, too, we are actually too late with applying the migration; however, it does
    // work as-is.
    pallet_xcm::migration::v1::VersionUncheckedMigrateToV1<Runtime>,
    // Balances are more tricky. We missed the migration to V1 and now have inconsistent
    // state which can't be decoded as V0, yet the StorageVersion is V0.
    // The strategy is to: just pretend we're on V1,
    migrations_fix::balances::v1::BruteForceToV1<Runtime>,
    // then reset to V0,
    pallet_balances::migration::ResetInactive<Runtime>,
    // then apply the proper migration as we should have done earlier.
    pallet_balances::migration::MigrateToTrackInactive<Runtime, xcm_config::CheckingAccount>,
);
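For context, a tuple like this is typically plugged into `frame_executive::Executive` as the `OnRuntimeUpgrade` parameter, so each element runs once on the first block built with the new runtime (standard FRAME wiring, shown here as a sketch rather than code from this diff):

pub type Executive = frame_executive::Executive<
    Runtime,
    Block,
    frame_system::ChainContext<Runtime>,
    Runtime,
    AllPalletsWithSystem,
    Migrations,
>;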
The CI in this repo has brought to light that we missed performing storage migrations in the past. My guess is that this went unnoticed previously because of bogus feature propagation for try-runtime, which has now been fixed with zepter (we did use try-runtime before and had migrations in our own pallets, but those really went unnoticed!).
With these local extra migrations, we should be able to recover without damage.
Okay, thanks. It is rather simple to run the try-runtime check locally in case you need to debug it (GitHub is a bit slow currently). There is an example at https://github.com/paritytech/try-runtime-cli:
--runtime ./target/production/wbuild/$PACKAGE_NAME/$RUNTIME_BLOB_NAME \
Looks good now.
Yes and no. It runs through and succeeds, but it complains about corrupt state for one key:
error: [2024-01-09T14:08:47Z ERROR frame_support::storage] old key failed to decode at [227, 143, 24, 82, 7, 73, 138, 187, 92, 33, 61, 15, 176, 89, 179, 216, 216, 55, 28, 116, 117, 23, 21, 29, 134, 63, 146, 111, 129, 151, 24, 223, 191, 178, 127, 30, 174, 240, 107, 185, 3, 0, 0, 0, 96, 15, 180, 143, 173, 7, 149, 151, 21, 112, 71, 148, 112, 212, 184, 115, 3, 1, 0]
Converting the key to hex: 0xE38F185207498ABB5C213D0FB059B3D8D8371C747517151D863F926F819718DFBFB27F1EAEF06BB903000000600FB48FAD0795971570479470D4B873030100
Looking up the on-chain raw storage gives:
value hex: 0x600a000000000000000003000000
It is hard to guess what key and what value that is, but I'll investigate.
Looks like something in PolkadotXcm::VersionNotifyTargets.
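One way to confirm a guess like this is to recompute the storage-map prefix and compare it against the failing key; a small sketch using the usual Substrate hashing helper from `sp_core` (crate path assumed here):

use sp_core::hashing::twox_128;

fn main() {
    // A storage map key starts with twox128(pallet prefix) ++ twox128(storage name).
    let mut prefix = Vec::new();
    prefix.extend_from_slice(&twox_128(b"PolkadotXcm"));
    prefix.extend_from_slice(&twox_128(b"VersionNotifyTargets"));
    let hex: String = prefix.iter().map(|b| format!("{:02X}", b)).collect();
    // Compare against the first 32 bytes of the failing key above.
    println!("0x{}", hex);
}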
I will ask in the XCM chat what this means since I have no clue.
Thank you. If we need to take action and modify the migrations, I suggest making that a separate PR.
The cleanest way is to remove this key altogether; however, this would mean that the relay chain would not be notified of any XCM version changes on Encointer. To remedy this, the relay chain would have to resend a SubscribeVersion instruction to Encointer.
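Illustratively, the instruction the relay chain would have to resend looks roughly like this (XCM v3 names; the query id and weight values below are placeholders, not values from this PR):

use sp_weights::Weight;
use xcm::v3::prelude::*;

/// Sketch of a message re-establishing version notifications.
fn resubscribe() -> Xcm<()> {
    Xcm(vec![SubscribeVersion {
        query_id: 42, // placeholder query id
        max_response_weight: Weight::from_parts(1_000_000_000, 65_536),
    }])
}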
@joepetrowski your remarks have been addressed. May I ask you to check and consider approving?
This should have a CHANGELOG entry too (and, being probably the first post-release commit, under a new …
/merge
Merged 6334588 into polkadot-fellows:main
In fulfillment of RFC22, this PR adds the Encointer runtime.
Replacing #17.
Notes