diff --git a/text/0031-corejam.md b/text/0031-corejam.md new file mode 100644 index 000000000..b30c5d4c0 --- /dev/null +++ b/text/0031-corejam.md @@ -0,0 +1,681 @@ +# RFC-0031: CoreJam + +| | | +| --------------- | ------------------------------------------------------------------------------------------- | +| **Start Date** | 11 September 2023 | +| **Description** | Parallelised, decentralised, permissionless state-machine based on a multistage Collect-Refine-Join-Accumulate model. | +| **Authors** | Gavin Wood, Robert Habermeier, Bastian Köcher, Alistair Stewart | + + +## Summary + +This is a proposal to fundamentally alter the workload done on the Polkadot Relay-chain, both in terms of that which is done "on-chain", i.e. by all Relay Chain Validators (*Validators*) as well as that which is done "in-core", i.e. distributed among subsets of the Validators (*Backing Groups*). The target is to create a model which closely matches the underlying technical architecture and is both generic and permissionlessly extensible. + +In the proposed model, code is stored on-chain with two entry-points. Workloads are collated and processed in-core (and thus parallelized) using one entry-point, whereas the refined outputs of this processing are gathered together and an on-chain state-machine progressed according to the other. + +While somewhat reminiscent of the Map-Reduce paradigm, a comprehensive analogy cannot be taken: the in-core processing code does not transform a set of inputs, but is rather used to refine entirely arbitrary input data collected by some third-party. Instead, and in accordance, we term it *Collect-Refine-Join-Accumulate*. + +## Motivation + +Polkadot was originally designed as a means of validating state transitions of Webassembly-defined state machines known as *Parachain Validation Functions*. These state machines were envisioned to be long-lived (of the order of years) and transitioning continuously, at the "full capacity" of modern single-threaded hardware held in consensus over the internet, and in isolation to any other such state machines. + +Having actually built Polkadot, it became clear that the flexibility of the machinery implementing it allowed for a more diverse set of usage patterns and models. Parathreads, which came to be known as *On-Demand Parachains* (ODP) is one such model. This was underlined by other proposals to allow for a more decentralised administration of how the underlying Polkadot Core resource is procured, in particular *Agile Coretime*. + +More recently, the idea of having small to medium size programs executing without its own surrounding blockchain using only Relay-chain resources has been discussed in detail primarily around the *Coreplay* proposal. It therefore seems short-sighted to assume other models could not exist for utilizing the Relay-chain's "Core" resource. Therefore in much the same way that Agile Coretime originally strived to provide the most general model of *procuring* Relay-chain's Core resource, it seems sensible to strive to find a similarly general model for *utilizing* this resource, one minimizing the difference between the valuable function of the Validators and the service offered by Polkadot. + +Beyond delivering additional value through the increased potential for use-cases that this flexibility allows, our motivation extends to gaining stability: a future-proof platform allowing teams to build on it without fear of high maintenance burden, continuous bitrot or a technological rug-pull at some later date. 
Secondly, we are motivated by reducing barriers for new teams, allowing the Polkadot platform to harness the power of the crowd which permissionless systems uniquely enable.
+
+Being extensible, the Relay-chain becomes far more open to experimentation within this paradigm than the classical Parachain Proof-of-Validity and Validation Function as is the case at present. Being permissionless opens Polkadot experimentation to individuals and teams beyond the core developers.
+
+## Requirements
+
+In order of importance:
+
+1. The proposal must be compatible, in principle, with the preexisting parachain model.
+2. The proposal must facilitate the implementation of Coreplay.
+3. The proposal must be compatible with Agile Coretime, as detailed in RFC#0001.
+4. Implementation of the proposal should need minimal changes to all production logic.
+5. Utilization of Coretime must be accessible.
+6. Utilization of Coretime must be permissionless.
+7. The nature of Coretime should closely match the nature of resources generated by Polkadot.
+8. Minimal opinionation should be introduced over the format, nature and usage of Coretime.
+
+## Stakeholders
+
+1. Anyone with exposure to the DOT token economy.
+2. Anyone wanting to create decentralised/unstoppable/resilient applications.
+3. Teams already building on Polkadot.
+
+## Explanation
+
+**CoreJam is a general model for utilization of Polkadot Cores. It is a mechanism by which Work Packages are communicated, authorized, computed and verified, and their results gathered, combined and accumulated into particular parts of the Relay-chain's state.**
+
+### Terminology
+
+Short forms of several common terms are used here for brevity:
+
+- *RcBG*: Relay-chain Backing Group, a grouping of Relay-chain validators who act as the initial guarantors over the result of some computation done off-chain.
+- *RcBA*: Relay-chain Block Author, the author of some particular block on the Relay-chain.
+
+### From Old to New
+
+The current specification of the Polkadot protocol, and in particular of the Relay-chain's operation, is designed in line with the overall requirements of and terminology in the Polkadot (1.0) whitepaper. It incorporates first-class concepts including *Proof-of-Validity* and *Parachain Validation Function*. This will no longer be considered canonical for the Polkadot protocol. To avoid confusion, this design will be known as the *Fixed-function Parachains Model*, or the *Parachains Model* for short.
+
+Existing functionality relied upon by parachains will continue to be provided as a special case under a more general and permissionless model which is detailed presently and known as *CoreJam*. Transition of Polkadot to be in line with the present proposal will necessarily imply some minor alterations of formats utilized by Cumulus, Smoldot and other light-client APIs (see the section on Compatibility). However, much of the underlying logic (in particular, consensus, disputes and availability) is retained, though its application is generalised. This proposal will only make note of the expectations regarding the changes, and presumes continuation of all other logic.
+
+As part of this model, we introduce a number of new and interrelated concepts: *Work Package*, *Service*, *Work Item*, *Work Output*, *Work Result*, *Work Report*, *Guarantee* and *Service Trie*.
+
+Focussing on continuity and reuse of existing logic, it is unsurprising that many of these concepts already have analogues in the Parachains model, albeit ones with a less general definition.
While this mapping can be helpful to quickly create an approximate understanding of the new concepts for those already familiar with Polkadot, care must be taken not to inadvertently make incorrect presumptions over the exact details of their relationships, constraints, timing, provisions and APIs. Nonetheless, they are provided here for whatever help they may be.
+
+| CoreJam model | Legacy model | Context |
+| --- | --- | --- |
+| *Core Chain* | Relay-chain | Primary block-chain |
+| *Work Package* | Proof-of-Validity | Untrusted data provided to RcBG |
+| *Work Item* | Proof-of-Validity | State-transition inputs and witness |
+| *Work Output* | Candidate | State-transition consequence |
+| *Work Report* | Candidate | Target of attestation |
+| *(Work Package) Attestation* | Attestation | Output signed in attestation |
+| *Reporting* | Attestation | Placement of Attestation on-chain |
+| *Integration* | Inclusion | Irreversible transition of state |
+| *Builder* | Collator | Creator of data worthy of Attestation |
+
+
+Additionally, the *Service Trie* has no immediate analogue, but may be considered as the Relay-chain state used to track the code and head data of the parachains.
+
+### Overview
+
+```rust
+mod v0 {
+    const PROGRESS_WEIGHT_PER_PACKAGE: Weight = MAX_BLOCK_WEIGHT * 3 / 4;
+    type Service = u32;
+    type WorkPayload = Vec<u8>;
+    struct WorkItem {
+        service: Service,
+        payload: WorkPayload,
+    }
+    type MaxWorkItemsInPackage = ConstU32<4>;
+    type Authorization = Vec<u8>;
+    type HeaderHash = [u8; 32];
+    /// Just a Blake2-256 hash of an EncodedWorkPackage.
+    type WorkPackageHash = [u8; 32];
+    struct Context {
+        header_hash: HeaderHash,
+        state_root: Hash, // must be state root of block `header_hash`
+        beefy_root: Hash, // must be Beefy root of block `header_hash`
+        prerequisite: Option<WorkPackageHash>,
+    }
+    struct WorkPackage {
+        authorization: Authorization,
+        context: Context,
+        items: BoundedVec<WorkItem, MaxWorkItemsInPackage>,
+    }
+}
+type MaxWorkPackageSize = ConstU32<{ 5 * 1024 * 1024 }>;
+struct EncodedWorkPackage {
+    version: u32,
+    encoded: BoundedVec<u8, MaxWorkPackageSize>,
+}
+impl TryFrom<EncodedWorkPackage> for v0::WorkPackage {
+    type Error = ();
+    fn try_from(e: EncodedWorkPackage) -> Result<Self, Self::Error> {
+        match e.version {
+            0 => Self::decode(&mut &e.encoded[..]).map_err(|_| ()),
+            _ => Err(()),
+        }
+    }
+}
+```
+
+A *Work Package* is an *Authorization* together with a series of *Work Items* and a context, limited in plurality, versioned and with a maximum encoded size. The Context includes an optional reference to a prerequisite Work Package (`WorkPackageHash`) which limits the relative order of the Work Package (see **Work Package Ordering**, later).
+
+(The number of prerequisites of a Work Package is limited to at most one. However, we cannot trivially control the number of dependents in the same way, nor would we necessarily wish to since it would open up a griefing vector for misbehaving Work Package Builders who interrupt a sequence by introducing their own Work Packages with a prerequisite which is within another's sequence.)
+
+Work Items are a pair where the first item, `service`, itself identifies a pairing of code and state known as a *Service*; and the second item, `payload`, is a block of data which, through the aforementioned code, mutates said state in some presumably useful way.
+
+A Service has certain similarities to an object in a decentralized object-oriented execution environment (or, indeed, a smart contract), with the main difference being a more exotic computation architecture available to it.
Similar to smart contracts, a Service's state is stored on-chain and transitioned only using on-chain logic. Also similar to a smart contract, resources used by a Service are strictly and deterministically constrained (using dynamic metering). Finally, Services, like smart contracts, are able to hold funds and call into each other synchronously. + +However, unlike for smart contracts, the on-chain transition logic of a Service (known as the *Accumulate* function) cannot directly be interacted with by actors external to the consensus environment. Concretely, they cannot be transacted with. Aside from the aforementioned inter-service calling, all input data (and state progression) must come as the result of a Work Item. A Work Item is a blob of data meant for a particular Service and crafted by some source external to consensus. It may be thought of as akin to a transaction. The Work Item is first processed *in-core*, which is to say on one of many secure and isolated virtual decentralized processors, yielding a distillate known as a *Work Result*. It is this Work Result which is collated together with others of the same service and Accumulated into the Service on-chain. + +In short, a Service may be considered as a kind of smart contract albeit one whose transaction data is first preprocessed with cheap decentralized compute power. + +Though this process happens entirely in consensus, there are two main consensus environments at play, _in-core_ and _on-chain_. We therefore partition the progress into two pairs of stages: Collect & Refine and Join & Accumulate. + +### Processing stages of a Work Package + +A Work Package has several stages of consensus computation associated with its processing, which happen as the system becomes more certain that it represents a correct and useful transition of its Service. + +While a Work Package is being built, the *Builder* must have access to the Relay-chain state in order to supply a specific *Context*. The Context dictates a certain *Scope* for the Work Package which is used by the Initial Validation to limit which Relay-chain blocks it may be processed on to a small sequence of a specific fork (which is yet to be built, presumably). We define the Relay-chain height at this point to be `T`. + +The first consensus computation to be done is the Work Package having its Authorization checked in-core, hosted by the Relay-chain Backing Group. If it is determined to be authorized, then the same environment hosts the Refinement of the Work Package into a series of Work Results. This concludes the bulk of the computation that the Work Package represents. We would assume that the Relay-chain's height at this point is shortly after the authoring time, `T+r` where `r` could be as low as zero. + +The second consensus computation happens on-chain at the behest of the Relay-chain Block Author of the time `T+r+i`, where `i` is generally zero or one, the time taken for the Work Results to be transported from within the Core to get to the gateway of being on-chain. The computation done essentially just ensures that the Work Package is still in scope and that the prerequisite it relies upon (if any) has been submitted ahead of it. This is called the on-chain *Reporting* (in the fixed-function parachains model, this is known as "attestation") and initiates the *Availability Protocol* for this Work Package once Relay-chain Validators synchronize to the block. 
This protocol guarantees that the Work Package will be made available for as long as we allow disputes over its validity to be made. + +At some point later `T+r+i+a` (where `a` is the time to distribute the fragments of the Work Package and report their archival to the next Relay-chain Block Author) the Availability Protocol has concluded and the Relay-chain Block Author of the time brings this information on-chain in the form of a bitfield in which an entry flips from zero to one. At this point we can say that the Work Report's Package is *Available*. + +Finally, at some point later still `T+r+i+a+o`, the Results of the Work Package are aggregated into groups of Services, and then *Pruned* and *Accumulated* into the common state of the Relay-chain. This process is known as *Integration* (in the fixed-function parachains model, this is known as "inclusion") and is irreversible within any given fork. Additional latency from being made *Available* to being *Integrated* (i.e. the `o` component) may be incurred due to ordering requirements, though it is expected to be zero in the variant of this proposal to be implemented initially (see **[Work Package Ordering](#work-package-ordering)**, later). + +### Collect-Refine + +The first two stages of the CoreJam process are *Collect* and *Refine*. *Collect* refers to the collection and authorization of Work Packages (collections of items together with an authorization) to utilize a Polkadot Core. *Refine* refers to the performance of computation according to the Work Packages in order to yield *Work Results*. Finally, each Backing Group member attests to a Work Package yielding a series of Work Results and these Attestations form the basis for bringing the Results on-chain and integrating them into the Polkadot (and in particular the Service's) state which happens in the following stages. + +#### Collection and `is_authorized` + +Collection is the means of a Backing Group member attaining a Work Package which is authorized to be performed on their assigned Core at the current time. Authorization is a prerequisite for a Work Package to be included on-chain. Computation of Work Packages which are not Authorized is not rewarded. Incorrectly attesting that a Work Package is authorized is a disputable offence and can result in substantial punishment. + +On arrival of a Work Package, after the initial decoding, a first check is that the `context` field is valid. This must reference a header hash of a known block which may yet be finalized and the additional fields must correspond to the data of that block. + +Agile Coretime (see [RFC#0001](https://github.com/polkadot-fellows/RFCs/blob/main/text/0001-agile-coretime.md)) prescribes two forms of Coretime sales: Instantaneous and Bulk. Sales of Instantaneous Coretime are no longer provided, leaving only Bulk Coretime. + +We introduce the concept of an *Authorizer* procedure, which is a piece of logic stored on-chain to which Bulk Coretime may be assigned. Assigning some Bulk Coretime to an Authorizer implies allowing any Work Package which passes that authorization process to utilize that Bulk Coretime in order to be submitted on-chain. It controls the circumstances under which RcBGs may be rewarded for evaluation and submission of Work Packages (and thus what Work Packages become valid to submit onto Polkadot). Authorization logic is entirely arbitrary and need not be restricted to authorizing a single collator, Work Package builder, parachain or even a single Service. 
+
+An *Authorizer* is a parameterized procedure:
+
+```rust
+type CodeHash = [u8; 32];
+type AuthParamSize = ConstU32<1024>;
+type AuthParam = BoundedVec<u8, AuthParamSize>;
+struct Authorizer {
+    code_hash: CodeHash,
+    param: AuthParam,
+}
+```
+
+The `code_hash` of the Authorizer is assumed to be the hash of some code accessible in the Relay-chain's Storage pallet. The procedure itself is called the *Authorization Procedure* (`AuthProcedure`) and is expressed in this code (which must be capable of in-core VM execution). Its entry-point prototype is:
+
+```rust
+fn is_authorized(param: &AuthParam, package: &WorkPackage, core_index: CoreIndex) -> bool;
+```
+
+This function is executed in a metered VM and subject to a modest system-wide limitation on execution time. If it overruns this limit or panics on some input, it is considered equivalent to returning `false`. While it is mostly stateless (i.e. isolated from any Relay-chain state) it is provided with the package's `context` field in order to give information about a recent Relay-chain block. This allows it to be provided with a concise proof over some recent Relay-chain state.
+
+A single `Authorizer` value is associated with the index of the Core at a particular Relay-chain block and limits in some way what Work Packages may be legally processed by that Core.
+
+Since encoded `Authorizer` values may be fairly large (up to 1,038 bytes here), they may not be a drop-in replacement for the `ParaId`/`TaskId` used at present in the Agile Coretime interface. Because of this, we provide a lookup mechanism allowing a much shorter `AuthId` to be used within the Coretime scheduling messaging. Conveniently, this is precisely the same datatype size (32-bit) as a `ParaId`/`TaskId`.
+
+There is an Authorizations Pallet which stores the association. Adding a new piece of code is permissionless but requires a deposit commensurate with its size.
+
+```rust
+type AuthId = u32;
+type Authorizers = StorageMap<AuthId, Authorizer>;
+```
+
+An *Authorization* is simply a blob which helps the Authorizer recognize a properly authorized Work Package. No constraints are placed on Authorizers over how they may interpret this blob. Expected authorization content includes signatures, Merkle proofs and more exotic succinct zero-knowledge proofs.
+
+_(Note: depending on future Relay-chain Coretime scheduling implementation concerns, a window of Relay-chain blocks)._
+
+The need for validators to be rewarded for doing work they might reasonably expect to be useful competes with that of the Coretime procurers to be certain to get work done which is useful to them. In Polkadot 1.0, validators only get rewarded for PoVs ("work packages") which do not panic or overrun. This ensures that validators are well-incentivized to ensure that their computation is useful for the assigned parachain. This incentive model works adequately where all PVF code is of high quality and collators are few and static.
+
+However, with this proposal (and even the advent of on-demand parachains), validators have little ability to identify a high-quality Work Package builder and the permissionless design means a greater expectation of flawed code executing in-core. Because of this, we take a slightly modified approach: Work Packages must have a valid Authorization, i.e. the Coretime-assigned `is_authorized` returns `true` when provided with the Work Package. However, Validators get rewarded for *any* such authorized Work Package, even one which ultimately panics or overruns on its evaluation.
+
+This ensures that Validators do a strictly limited amount of work before knowing whether they will be rewarded and are able to discontinue and attempt other candidates earlier than would otherwise be the case. There is the possibility of wasting Coretime by processing Work Packages which result in error, but well-written authorization procedures can mitigate this risk by making a prior validation of the Work Items.
+
+### Refine
+
+The `refine` function is implemented as an entry-point inside a code blob which is stored on-chain and whose hash is associated with the Service.
+
+```rust
+type ClassCodeHash = StorageMap<Service, CodeHash>;
+```
+
+```rust
+struct PackageInfo {
+    package_hash: WorkPackageHash,
+    context: Context,
+    authorization: Authorization,
+    auth_id: Option<AuthId>,
+}
+type WorkOutputLen = ConstU32<4_096>;
+type WorkOutput = BoundedVec<u8, WorkOutputLen>;
+fn refine(
+    payload: WorkPayload,
+    package_info: PackageInfo,
+) -> WorkOutput;
+```
+
+Both `refine` and `is_authorized` are only ever executed in-core. Within this environment, we need to ensure that we can interrupt computation not long after some well-specified limit and deterministically determine when an invocation of the VM exhausts this limit. Since the exact point at which computation is interrupted need not be deterministic, it is expected to be executed by a streaming JIT transpiler with a means of approximate and overshooting interruption coupled with deterministic metering.
+
+Several host functions (largely in line with the host functions available to Parachain Validation Function code) are supplied. One addition is:
+
+```rust
+/// Determine the preimage of `hash` utilizing the Relay-chain Storage pallet. This must
+/// always do the same thing for the same `context` regardless of the current state of the
+/// chain. This is achieved through the usage (by the host) of the specialized Storage
+/// pallet.
+///
+/// It returns `u32::max_value()` in the case that the preimage is unavailable. Otherwise
+/// it returns the length of the preimage and places the first bytes of the preimage into
+/// `buffer`, up to a maximum of `buffer_len`.
+fn lookup(hash: [u8; 32], buffer: *mut u8, buffer_len: u32) -> u32;
+```
+
+Other host functions will allow for the possibility of executing a WebAssembly payload (for example, a Parachain Validation Function) or instantiating and entering a subordinate RISC-V VM (for example, for Actor Progressions).
+
+When applying `refine` from the client code, we must allow for the possibility that the VM exits unexpectedly or does not end. Validators are always rewarded for computing properly authorized Work Packages, including those which include such broken Work Items. But they must be able to report their broken state into the Relay-chain in order to collect their reward. Thus we define a type `WorkResult`:
+
+```rust
+enum WorkError {
+    Timeout,
+    Panic,
+}
+struct WorkResult {
+    service: Service,
+    item_hash: [u8; 32],
+    result: Result<WorkOutput, WorkError>,
+    weight: Weight,
+}
+fn apply_refine(item: WorkItem) -> WorkResult;
+```
+
+The amount of weight used in executing the `refine` function is noted in the `WorkResult` value, and this is used later in order to help apportion on-chain weight (for the Join-Accumulate process) to the Services whose items appear in the Work Packages.
+
+```rust
+/// Secure reference to a Work Package.
+struct WorkPackageSpec {
+    /// The hash of the SCALE encoded `EncodedWorkPackage`.
+    hash: WorkPackageHash,
+    /// The erasure root of the SCALE encoded `EncodedWorkPackage`.
+    root: ErasureRoot,
+    /// The length in bytes of the SCALE encoded `EncodedWorkPackage`.
+    len: u32,
+}
+/// Execution report of a Work Package, mainly comprising the Results from the Refinement
+/// of its Work Items.
+struct WorkReport {
+    /// The specification of the underlying Work Package.
+    package_spec: WorkPackageSpec,
+    /// The context of the underlying Work Package.
+    context: Context,
+    /// The Core index under which the Work Package was Refined to generate the Report.
+    core_index: CoreIndex,
+    /// The results of the evaluation of the Items in the underlying Work Package.
+    results: BoundedVec<WorkResult, MaxWorkItemsInPackage>,
+}
+/// Multiple signatures are consolidated into a single Attestation in a space-efficient
+/// manner using a `BitVec` to succinctly express which validators have attested.
+struct Attestation {
+    /// The Work Report which is being attested.
+    report: WorkReport,
+    /// Which validators from the group have a signature in `attestations`.
+    validators: BitVec,
+    /// The signatures of the RcBG members set out in `validators` whose message is the
+    /// hash of the `report`. The signatures appear in the same order as the validators do
+    /// in `validators`.
+    attestations: Vec<Signature>,
+}
+```
+
+In each Relay-chain block, every Backing Group representing a Core which is assigned work provides a series of Work Results coherent with an authorized Work Package. Validators are rewarded when they take part in their Group and process such a Work Package. Thus, together with some information concerning their execution context, they sign a *Report* concerning the work done and the results of it. This is also known as a *Candidate*. This signed Report is called an *Attestation*, and is provided to the Relay-chain block author. If no such Attestation is provided (or if the Relay-chain block author refuses to introduce it for Reporting), then that Backing Group is not rewarded for that block.
+
+The process continues once the Attestations arrive at the Relay-chain Block Author.
+
+### Join-Accumulate
+
+Join-Accumulate is the second major stage of computation and is independent from Collect-Refine. Unlike the computation in Collect-Refine, which happens contemporaneously within one of many isolated cores, the consensus computation of Join-Accumulate is entirely synchronous with all other computation of its stage and operates within (and has access to) the same shared state-machine.
+
+Being *on-chain* (rather than *in-core* as with Collect-Refine), information and computation done in the Join-Accumulate stage is carried out (initially) by the Block Author and the resultant block evaluated by all Validators and full-nodes. Because of this, and unlike in-core computation, it has full access to the Relay-chain's state.
+
+The Join-Accumulate stage may be seen as a synchronized counterpart to the parallelised Collect-Refine stage. It may be used to integrate the work done from the context of an isolated VM into a self-consistent singleton world model. In concrete terms this means ensuring that the independent work components, which cannot have been aware of each other during the Collect-Refine stage, do not conflict in some way. Less dramatically, this stage may be used to enforce ordering or provide a synchronisation point (e.g. for combining entropy in a sharded RNG). Finally, this stage may be a sensible place to manage asynchronous interactions between subcomponents of a Service or even different Services and oversee message queue transitions.
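+
+To make the "Join" step concrete, the sketch below shows one plausible shape for this grouping of Work Results prior to Accumulation, using the types defined above. The name `join_results`, and the assumption that each package's `Authorization` was retained at Reporting time so that it can be passed along with the Results (as the `accumulate` prototype given later expects), are illustrative only and not part of the proposal.
+
+```rust
+// Illustrative sketch only: group the Work Results of newly Available Work Reports by
+// Service, preserving Reporting order, ready to be fed to each Service's `prune` and
+// `accumulate` logic.
+use std::collections::BTreeMap;
+
+fn join_results(
+    available: Vec<(Authorization, WorkReport)>,
+) -> BTreeMap<Service, Vec<(Authorization, Vec<([u8; 32], WorkResult)>)>> {
+    let mut joined: BTreeMap<Service, Vec<(Authorization, Vec<([u8; 32], WorkResult)>)>> =
+        BTreeMap::new();
+    for (authorization, report) in available {
+        // Bucket this Report's Results by the Service of each Work Item.
+        let mut by_service: BTreeMap<Service, Vec<([u8; 32], WorkResult)>> = BTreeMap::new();
+        for result in report.results {
+            by_service.entry(result.service).or_default().push((result.item_hash, result));
+        }
+        // Each Service receives one entry per Work Package, pairing the package's
+        // Authorization with the Results of that package's Items for this Service.
+        for (service, items) in by_service {
+            joined.entry(service).or_default().push((authorization.clone(), items));
+        }
+    }
+    joined
+}
+```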
+ +#### Reporting and Integration + +There are two main phases of on-chain logic before a Work Package's ramifications are irreversibly assimilated into the state of the (current fork of the) Relay-chain. The first is where the Work Package is *Reported* on-chain. This is proposed through an extrinsic introduced by the RcBA and implies the successful outcome of some *Initial Validation* (described next). This kicks-off an off-chain process of *Availability* which, if successful, culminates in a second extrinsic being introduced on-chain shortly afterwards specifying that the Availability requirements of the Work Report are met. + +Since this is an asynchronous process, there are no ordering guarantees on Work Reports' Availability requirements being fulfilled. There may or may not be provision for adding further delays at this point to ensure that Work Reports are processed according to strict ordering. See *Work Package Ordering*, later, for more discussion here. + +Once both Availability and any additional requirements are met (including ordering and dependencies, but possibly also including reevaluation of some of the Initial Validation checks), then the second phase is executed which is known as *Integration*. This is the irreversible application of Work Report consequences into the Service's State Trie and (via certain permissionless host functions) the wider state of the Relay-chain. Work Results are segregated into groups based on their Service, joined into a `Vec` and passed through the immutable Prune function and into the mutable Accumulate function. + +#### Initial Validation + +There are a number of Initial Validation requirements which the RcBA must do in order to ensure no time is wasted on further, possibly costly, computation. Since the same tests are done on-chain, then for a Block Author to expect to make a valid block these tests must be done prior to actually placing the Attestations in the Relay-chain Block Body. + +Firstly, any given Work Report must have enough signatures in the Attestation to be considered for Reporting on-chain. Only one Work Report may be considered for Reporting from each RcBG per block. + +Secondly, any Work Reports introduced by the RcBA must be *Recent*, defined as having a `context.header_hash` which is an ancestor of the RcBA head and whose height is less than `RECENT_BLOCKS` from the block which the RcBA is now authoring. + +```rust +const RECENT_BLOCKS: u32 = 16; +``` + +Thirdly, dependent elements of the Context (`context.state_root` and `context.beefy_root`) must correctly correspond to those on-chain for the block corresponding to the provided `context.header_hash`. For this to be possible, the Relay-chain is expected to track Recent state roots and beefy roots in a queue. + +Fourthly, the RcBA may not attempt to report multiple Work Reports for the same Work Package. Since Work Reports become inherently invalid once they are no longer *Recent*, then this check may be simplified to ensuring that there are no Work Reports of the same Work Package within any *Recent* blocks. + +Finally, the RcBA may not register Work Reports whose prerequisite is not itself Reported in *Recent* blocks. + +In order to ensure all of the above tests are honoured by the RcBA, a block which contains Work Reports which fail any of these tests shall panic on import. The Relay-chain's on-chain logic will thus include these checks in order to ensure that they are honoured by the RcBA. 
We therefore introduce the *Recent Reports* storage item, which retains all Work Package hashes which were Reported in the *Recent* blocks:
+
+```rust
+const MAX_CORES: u32 = 512;
+/// Must be ordered.
+type ReportSet = BoundedVec<WorkPackageHash, ConstU32<MAX_CORES>>;
+type RecentReports = StorageValue<BoundedVec<ReportSet, ConstU32<RECENT_BLOCKS>>>;
+```
+
+The RcBA must keep an up-to-date set of which Work Packages have already been Reported in order to avoid accidentally attempting to introduce a duplicate Work Package or one whose prerequisite has not been fulfilled. Since the currently authored block is considered *Recent*, Work Reports introduced earlier in the same block do satisfy the prerequisite of Work Packages introduced later.
+
+While it will generally be the case that RcBGs know precisely which Work Reports will have been introduced at the point that their Attestation arrives with the RcBA by keeping the head of the Relay-chain in sync, it will not always be possible. Therefore, RcBGs will never be punished for providing an Attestation which fails any of these tests; the Attestation will simply be kept until either:
+
+1. it stops being *Recent*;
+2. it becomes Reported on-chain; or
+3. some other Attestation of the same Work Package becomes Reported on-chain.
+
+#### Availability
+
+Once the Work Report of a Work Package is Reported on-chain, the Work Package itself must be made *Available* through the off-chain Availability Protocol, which ensures that any dispute over the correctness of the Work Report can be easily and objectively judged by all validators. Being off-chain, this is not block-synchronized and any given Work Package may take one or more blocks to be made Available or may even fail.
+
+Only once a Work Report's Work Package is made Available can the processing continue with the next steps of Joining and Accumulation. Ordering requirements of Work Packages may also affect this variable latency and this is discussed later in the section **Work Package Ordering**.
+
+#### Weight Provisioning
+
+Join-Accumulate is, as the name suggests, comprised of two subordinate stages. Both stages involve executing code inside a VM on-chain. Thus code must be executed in a *metered* format, meaning it must be able to be executed in a sandboxed and deterministic fashion but also with a means of providing an upper limit on the amount of weight it may consume and a guarantee that this limit will never be breached.
+
+Practically speaking, we may allow a VM execution metering system similar to that for the `refine` execution, whereby we do not require a strictly deterministic means of interrupting, but do require deterministic metering and only approximate interruption. This would mean that full-nodes and Relay-chain validators could be made to execute some additional margin's worth of computation without payment, though any attack could easily be mitigated by attaching a fixed cost (either economically or in weight terms) to a VM invocation.
+
+Each Service defines some requirements it has regarding the provision of on-chain weight. Since all on-chain weight requirements must be respected for all processed Work Packages, it is important that each Work Report does not imply using more weight than its fair portion of the total available, and in doing so provides enough weight to its constituent items to meet their requirements.
+
+```rust
+struct WorkItemWeightRequirements {
+    prune: Weight,
+    accumulate: Weight,
+}
+type WeightRequirements = StorageMap<Service, WorkItemWeightRequirements>;
+```
+
+Each Service has two weight requirements associated with it, corresponding to the two pieces of permissionless on-chain Service logic; they represent the amount of weight allotted for each Work Item of this Service within a Work Package assigned to a Core.
+
+The total amount of weight utilizable by each Work Package (`weight_per_package`) is specified as:
+
+```rust
+weight_per_package = relay_block_weight * safety_margin / max_cores
+```
+
+`safety_margin` ensures that other Relay-chain system processes can happen and important transactions can be processed and is likely to be around 75%.
+
+A Work Report is only valid if the weight liabilities of all Work Items to be Accumulated fit within this limit:
+
+```rust
+let total_weight_requirement = work_report
+    .results
+    .iter()
+    .map(|result| weight_requirements[result.service])
+    .map(|requirements| requirements.prune + requirements.accumulate)
+    .sum::<Weight>();
+total_weight_requirement <= weight_per_package
+```
+
+Because of this, Work Report builders must be aware of any upcoming alterations to `max_cores` and build Work Reports which are in accordance with it not only at present but also in the near future when it may have changed.
+
+### Accumulate
+
+The next phase, which happens on-chain, is Accumulate. This governs the amalgamation of the Work Package Outputs calculated during the Refinement stage into the Relay-chain's overall state and in particular into the various Child Tries of the Services whose Items were refined. Crucially, since the Refinement happened in-core, and since all in-core logic must be disputable and therefore its inputs made *Available* for all future disputers, Accumulation of a Work Package may only take place *after* the Availability process for it has completed.
+
+The function signature of the `accumulate` entry-point in the Service's code blob is:
+
+```rust
+type ItemHash = [u8; 32];
+fn accumulate(results: Vec<(Authorization, Vec<(ItemHash, WorkResult)>)>);
+```
+
+The logic in `accumulate` may need to know how the various Work Items arrived into a processed Work Package. Since a Work Package could have multiple Work Items of the same Service, it makes sense to have a separate inner `Vec` for Work Items sharing the Authorization (by virtue of being in the same Work Package).
+
+Work Items are identified by their Blake2-256 hash, known as the *Item Hash* (`ItemHash`). We provide both the Authorization of the Package and the constituent Work Item Hashes and their Results in order to allow the `accumulate` logic to take appropriate action in the case that an invalid Work Item was submitted (i.e. one which caused its Refine operation to panic or time out).
+
+_(Note for later: We may wish to provide a more light-client friendly Work Item identifier than a simple hash; perhaps a Merkle root of equal-size segments.)_
+
+There is an amount of weight which `accumulate` is allowed to use before being forcibly terminated, with any non-committed state changes lost. The lowest amount of weight provided to `accumulate` is defined as the number of `WorkResult` values passed in `results` to `accumulate` multiplied by the `accumulate` field of the Service's weight requirements.
+
+However, the actual amount of weight may be substantially more.
Each Work Package is allotted a specific amount of weight for all on-chain activity (`weight_per_package` above) and has a weight liability defined by the weight requirements of all Work Items it contains (`total_weight_requirement` above). Any weight remaining after the liability (i.e. `weight_per_package - total_weight_requirement`) may be apportioned to the Services of Items within the Report on a pro-rata basis according to the amount of weight they utilized during `refine`. Any weight unutilized by Classes within one Package may be carried over to the next Package and utilized there. + +```rust +/// Simple 32 bit "result" type. `u32::max_value()` down to `u32::max_value() - 255` is used to indicate an error. `0` up to `u32::max_value() - 256` are used to indicate success and a scalar value. +pub type SimpleResult = u32; +/// Returns `u32::max_value()` in case that `key` does not exist. Otherwise, the size of +/// the storage entry is returned and a minimum of this value and `buffer_len` bytes are +/// written into `buffer` from the storage value of `key`. If `buffer_len` is zero, this +/// operation may be additionally optimized to avoid reading any value data. +fn get_work_storage(key: &[u8], buffer: *mut [u8], buffer_len: u32) -> SimpleResult; +fn checkpoint() -> Weight; +fn weight_remaining() -> Weight; +/// Returns `u32::max_value()` on failure, `0` on success. +fn set_work_storage(key: &[u8], value: &[u8]) -> SimpleResult; +fn remove_work_storage(key: &[u8]); +/// Returns `u32::max_value()` on failure, `0` on success. +fn set_validators(validator_keys: &[ValidatorKey]) -> SimpleResult; +/// Returns `u32::max_value()` on failure, `0` on success. +fn set_code(code: &[u8]) -> SimpleResult; +/// Returns `u32::max_value()` on failure, `0` on success. +fn assign_core( + core: CoreIndex, + begin: BlockNumber, + assignment: Vec<(CoreAssignment, PartsOf57600)>, + end_hint: Option, +) -> SimpleResult; +/// Returns `u32::max_value()` in case that the operation failed or the transfer returned +/// an error. +/// Otherwise, the size of `Vec` returned by the `on_transfer` entry-point entry is +/// returned and as much of the data from the `Vec` as possible is copied into the provided +/// `buffer`. +fn transfer( + destination: Service, + amount: u128, + memo: &[u8], + weight: Weight, + buffer_len: u32, + buffer: *mut [u8], +) -> SimpleResult; +``` + +Read-access to the entire Relay-chain state is allowed. No direct write access may be provided since `accumulate` is untrusted code. `set_work_storage` may fail if an insufficient deposit is held under the Service's account. + +`set_validator`, `set_code` and `assign_core` are all privileged operations and may only be called by pre-authorized Services. + +Full access to a child trie specific to the Service is provided through the `work_storage` host functions. Since `accumulate` is permissionless and untrusted code, we must ensure that its child trie does not grow to degrade the Relay-chain's overall performance or place untenable requirements on the storage of full-nodes. To this goal, we require an account sovereign to the Service to be holding an amount of funds proportional to the overall storage footprint of its Child Trie. `set_work_storage` may return an error should the balance requirement not be met. + +Host functions are provided allowing any state changes to be committed at fail-safe checkpoints to provide resilience in case of weight overrun (or even buggy code which panics). 
The amount of weight remaining may also be queried without setting a checkpoint. `Weight` is expressed in a regular fashion for a solo-chain (i.e. one-dimensional).
+
+Simple transfers of data and balance between Services are possible via the `transfer` function. This is an entirely synchronous function which transfers execution over to the `destination` Service as well as the provided `amount` into its account.
+
+A new VM is set up with code according to the Service's `accumulate` code blob, but with a secondary entry point whose prototype is:
+
+```rust
+fn on_transfer(source: Service, amount: u128, memo: Vec<u8>, buffer_len: u32) -> Result<Vec<u8>, ()>;
+```
+
+During this execution, all host functions above may be used except `checkpoint()`. The operation may result in an error, in which case all changes to state are reverted, including the balance transfer. (Weight is still used.)
+
+Other host functions, including some to access Relay-chain hosted services such as the Balances and Storage Pallets, may also be provided commensurate with this executing on-chain.
+
+_(Note for discussion: Should we be considering light-client proof size at all here?)_
+
+We can already imagine three kinds of Service: *Parachain Validation* (as per Polkadot 1.0), *Actor Progression* (as per Coreplay in a yet-to-be-proposed RFC) and *Simple Ordering* (placement of elements into a namespaced Merkle trie). Given how abstract the model is, one might reasonably expect many more.
+
+### Work Package Ordering
+
+At the point of Reporting of a Work Package (specifically, its Work Report) on-chain, it is trivial to ensure that the ordering respects the optional `prerequisite` field specified in the Work Package, since the RcBA need only avoid Registering any which do not have their prerequisite fulfilled recently.
+
+However, there is a variable delay between a Work Report first being introduced on-chain in the Reporting and its eventual Integration into the Service's State, due to the asynchronous Availability Protocol. This means that requiring the order at the point of Reporting is insufficient for guaranteeing that order at the time of Accumulation. Furthermore, the Availability Protocol may or may not actually complete for any given Work Package.
+
+Two alternatives present themselves. The first is to provide ordering only on a *best-effort* basis, whereby Work Reports respect the ordering requested in their Work Packages as much as possible, but it is not guaranteed: Work Reports may be Accumulated before, or even entirely without, their prerequisites. We refer to this as *Soft-Ordering*. The alternative is to provide a guarantee that the Results of Work Packages will always be Accumulated no earlier than the Result of any prerequisite Work Package. As we are unable to alter the Availability Protocol, this is achieved through on-chain queuing and deferred Accumulation.
+
+Both are presented as reasonable approaches for this proposal, though the Soft-Ordering variant is expected to be part of any initial implementation since its implementation is trivial.
+
+#### Soft-Ordering Variant
+
+In this alternative, actual ordering is only guaranteed going *into* the Availability Protocol, not at the point of Accumulation.
+
+The (on-chain) repercussion of the Availability Protocol completing for the Work Package is that each Work Result becomes scheduled for Accumulation at the end of the Relay-chain Block Execution along with other Work Results from the same Service. The Ordering of Reporting is replicated here for all Work Results present.
If the Availability Protocol delays the Accumulation of a prerequisite Work Result, then the dependent Work Result may be Accumulated in a block prior to that of its dependency. It is assumed that the *Accumulation* logic will be able to handle this gracefully.
+
+It is also possible (though unexpected in regular operation) that Work Packages never complete the Availability Protocol. Such Work Packages eventually time out and are discarded from Relay-chain state.
+
+#### Hard-Ordering Variant
+
+This alternative gives a guarantee that the order in which a Work Package's Items will be Accumulated will respect the stated prerequisite of that Work Package. It is more complex and relies on substantial off-chain logic to inform the on-chain logic about which Work Packages are Accumulatable.
+
+The (on-chain) repercussion of the Availability Protocol completing for the Work Package depends on whether it has a prerequisite which is still pending Availability. Since we know that all prerequisite Work Packages must have entered the Availability Protocol due to the Initial Validation, if we are no longer tracking the Work Package and its Work Results on-chain we may safely assume that it is because we have Accumulated them and thus the Work Package is Available.
+
+Conversely, if the Availability process for the prerequisite Work Package has not yet concluded or has already failed, we ensure that we still explicitly retain a record of it for as long as is needed.
+
+Specifically, if Availability has not yet concluded, we append a hash of the Work Package to a *Queue of Available Dependents* keyed by the prerequisite Work Package Hash. The bound for this list may safely be quite large, and if it grows beyond the bound, the Work Results may be discarded. This could only happen if very many Work Packages are produced and processed with the same prerequisite at approximately the same point in time, and that prerequisite suffers delayed Availability yet the dependents do not: an unlikely eventuality.
+
+In the case that we are not retaining a record of the prerequisite Work Package and Work Results, we aggregate the Work Results ready for Accumulation. If there is a non-empty Queue of Available Dependents for this Work Package, we record the fact that this Work Package is *Now Accumulated* in a separate storage item (to circumvent a possibly expensive read/write). If there is not, then we do not record this and will effectively stop tracking the Work Package's Availability status on-chain.
+
+Finally, after all Availability notices have been processed, but before the Accumulation happens, the RcBA may, via an extrinsic, inform the chain of Work Packages which are prerequisites preventing available Work Packages from being Accumulated. This amounts to naming a Work Package which has both a Queue of Available Dependents and is Now Accumulated. This allows those Work Results to be dequeued and aggregated for Accumulation.
+
+If Availability suffers a time-out (and retrying is not an option), or if the prerequisite has suffered a time-out, then all dependent Work Packages must be discarded in a cascading manner, a potentially daunting proposition. In a manner similar to that above, the time-out itself is recorded in an on-chain storage item which itself may only be removed after the latest time at which the newest possible prerequisites (given the Initial Validation) could become available.
The RcBAs may then introduce extrinsics to remove these associated storage items in an iterative fashion without the Work Results becoming Accumulated.
+
+#### Discussion
+
+An initial implementation of this proposal would almost certainly provide only the soft ordering, since it is practically a waypoint on the way to the hard ordering implementation.
+
+Initial Validation is made on-chain before the Availability Protocol begins for any given Work Package. This ensures that the Work Package is still in scope, i.e. recent enough and on the proper fork. However, this does not ensure that the Work Package is still in scope at the point that the Work Results are actually Accumulated. It is as yet unclear whether this is especially problematic.
+
+The same scope limit could also be placed on Accumulation; in neither variant does it introduce much additional complexity. In both cases it would require the same course of action as when Availability times out without the possibility of retry. However, whereas in the soft-ordering variant we would not expect to see very different dynamics, since one such time-out has no repercussions beyond preventing the Accumulation of those Work Results, in the hard-ordering variant it could mean a substantially greater occurrence of the cascading failure logic, calling into question the real purpose of the scoping: is it to protect the Validators from having to deal with indefinitely valid Work Packages or is it to protect the Accumulation logic from having to deal with the Results of older Work Packages?
+
+### Relay-chain Storage Pallet
+
+There is a general need to be able to reference large, immutable and long-term data payloads both on-chain and in-core. This is the case both for fixed-function logic, such as fetching the VM code for `refine` and `accumulate`, and from within Work Packages themselves.
+
+Owing to the potential for forks and disputes to happen beyond the scope of initial validation, there are certain quite subtle requirements over what data held on-chain may be utilized in-core. Because of this, it makes sense to have a general solution which is known to be safe to use in all circumstances. We call this solution the *Storage Pallet*.
+
+The Storage Pallet provides a simple API, accessible to untrusted code through host functions & extrinsics and to trusted Relay-chain code via a trait interface.
+
+```rust
+trait Storage {
+    /// Immutable function to attempt to determine the preimage for the given `hash`.
+    fn lookup(hash: &[u8; 32]) -> Option<Vec<u8>>;
+
+    /// Allow a particular preimage to be `provide`d.
+    /// Once provided, this will be available through `lookup` until
+    /// `unrequest` is called.
+    fn request(hash: &[u8; 32], len: usize) -> bool;
+    /// Remove a request that some data be made available. If the data was never
+    /// available or the data will remain available due to another request,
+    /// then `false` is returned and `expunge` may be called immediately.
+    /// Otherwise, `true` is returned and `expunge` may be called in
+    /// 24 hours.
+    fn unrequest(hash: &[u8; 32]) -> bool;
+
+    // Functions used by implementations of untrusted functions, such as
+    // extrinsics or host functions.
+
+    /// Place a deposit in order to allow a particular preimage to be `provide`d.
+    /// Once provided, this will be available through `lookup` until
+    /// `unrequest_untrusted` is called.
+    fn request_untrusted(depositor: &AccountId, hash: &[u8; 32], len: usize);
+    /// Remove a request that some data be made available. If the data was never
+    /// available or the data will remain available due to another request,
+    /// then `false` is returned and `expunge_untrusted` may be called immediately.
+    /// Otherwise, `true` is returned and `expunge_untrusted` may be called in
+    /// 24 hours.
+    fn unrequest_untrusted(depositor: &AccountId, hash: &[u8; 32]) -> bool;
+
+    // Permissionless items utilizable directly by an extrinsic or task.
+
+    /// Provide the preimage of some requested hash. Returns `Some` if its hash
+    /// was requested; `None` otherwise.
+    ///
+    /// Usually utilized by an extrinsic and is free if `Some` is returned.
+    fn provide(preimage: &[u8]) -> Option<[u8; 32]>;
+    /// Potentially remove the preimage of `hash` from the chain when it was
+    /// unrequested using `unrequest`. `Ok` is returned iff the operation is
+    /// valid.
+    ///
+    /// Usually utilized by a task and is free if it returns `Ok`.
+    fn expunge(hash: &[u8; 32]) -> Result<(), ()>;
+    /// Return the deposit associated with the removal of the request by
+    /// `depositor` using `unrequest_untrusted`. Potentially
+    /// remove the preimage of `hash` from the chain also. `Ok` is returned
+    /// iff the operation is valid.
+    ///
+    /// Usually utilized by a task and is free if it returns `Ok`.
+    fn expunge_untrusted(depositor: &AccountId, hash: &[u8; 32]) -> Result<(), ()>;
+
+    /// Equivalent to `request` followed immediately by `provide`.
+    fn store(data: &[u8]) -> [u8; 32];
+}
+```
+
+Internally, data is stored with a reference count so that two separate usages of `store` need not be concerned with each other.
+
+Every piece of data stored for an untrusted caller requires a sizeable deposit. When used by untrusted code via a host function, the `depositor` would be set to an account controlled by the executing code (e.g. the Service's sovereign account).
+
+Removing data happens in a two-phase procedure: first the data is unrequested, signalling that calling `lookup` on its hash may no longer work (it may still work if there are other
+requests active). 24 hours following this, the data is expunged with a second call which actually removes the data from the chain, assuming no other requests for it are active.
+
+Only once expunge is called successfully is the deposit returned. If the data was never provided, or if additional requests are still active, then expunge may be called immediately after a successful unrequest.
+
+### Notes on Agile Coretime
+
+Crucially, a *Task* is no longer a first-class concept. Thus the Agile Coretime model, which in large part allows Coretime to be assigned to a Task Identifier from the Coretime chain, would need to be modified to avoid a hard dependency on this.
+
+In this proposal, we replace the concept of a Task with a more general ticketing system; Coretime is assigned to an *Authorizer* instead, a parameterized function. This would allow a succinct *Authorization* (i.e. a small blob of data) to be included in the Work Package which, when fed into the relevant Authorizer function, can verify that some Work Package is indeed allowed to utilize that Core at (roughly) that time. A simple proof system would be a regular PKI signature. More complex proof systems could include more exotic cryptography (e.g. multisignatures or zk-SNARKs).
+
+In this model, we would expect any authorized Work Packages which panic or overrun to result in a punishment to the specific author by the logic of the Service.
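+
+As a concrete illustration of the "regular PKI signature" proof system mentioned above, a minimal Authorization Procedure might look like the following sketch. It assumes that the `AuthParam` is a 32-byte public key, that the `Authorization` is a 64-byte signature over a hash of the SCALE-encoded Work Items, and that `blake2_256` and `sr25519_verify` stand in for whichever hashing and signature-verification facilities are ultimately exposed to in-core Authorizer code; none of these specifics are mandated by the proposal.
+
+```rust
+// Sketch only. Assumed helpers (signatures illustrative):
+//     fn blake2_256(data: &[u8]) -> [u8; 32];
+//     fn sr25519_verify(sig: &[u8; 64], msg: &[u8], key: &[u8; 32]) -> bool;
+fn is_authorized(param: &AuthParam, package: &WorkPackage, _core_index: CoreIndex) -> bool {
+    // The parameter must be exactly a public key and the authorization exactly a signature.
+    let Ok(public) = <[u8; 32]>::try_from(&param[..]) else { return false };
+    let Ok(signature) = <[u8; 64]>::try_from(&package.authorization[..]) else { return false };
+    // The signed message commits to the Work Items; a real Authorizer might additionally
+    // commit to `package.context` or the core index for tighter scoping.
+    let message = blake2_256(&package.items.encode());
+    sr25519_verify(&signature, &message[..], &public)
+}
+```
+
+Assigning Bulk Coretime to such an Authorizer (via its `AuthId`) would then allow whoever controls the corresponding private key, and nobody else, to have Work Packages processed on that Core.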
+ +### Notes for migrating from a Parachain-centric model + +All Parachain-specific data held on the Relay-chain including the means of tracking the Head Data and Code would be held in the Parachains Service (Child) Trie. The Work Package would be essentially equivalent to the current PoV blob, though prefixed by the Service. `refine` would prove the validity of the parachain transition described in the PoV which is the Work Package. The Parachains Work Output would provide the basis for the input of what is currently termed the Paras Inherent. `accumulate` would identify and resolve any colliding transitions and manage message queue heads, much the same as the current hard-coded logic of the Relay-chain. + +We should consider utilizing the Storage Pallet for Parachain Code and store only a hash in the Parachains Service Trie. + +### Notes for implementing the Actor Progression model + +Actor code is stored in the Storage Pallet. Actor-specific data including code hash, VM memory hash and sequence number is stored in the Actor Service Trie under that Actor's identifier. The Work Package would include pre-transition VM memories of actors to be progressed whose hash matches the VM memory hash stored on-chain and any additional data required for execution by the actors (including, perhaps, swappable memory pages). The `refine` function would initiate the relevant VMs and make entries into those VMs in line with the Work Package's manifest. The Work Output would provide a vector of actor progressions made including their identifer, pre- and post-VM memory hashes and sequence numbers. The `accumulate` function would identify and resolve any conflicting progressions and update the Actor Service Trie with the progressed actors' new states. More detailed information is given in the Coreplay RFC. + +### UMP, HRMP and Work Output bounding + +At present, both HRMP (the stop-gap measure introduced in lieu of proper XCMP) and UMP, make substantial usage of the ability for parachains to include data in their PoV which will be interpreted on-chain by the Relay-chain. The current limit of UMP alone is 1MB. Even with the current amount of parachains, it is not possible for all parachains to be able to make use of these resources within the same block, and the difficult problem of apportioning resources in the case of contest is structurally unsolved and left for the RcBA to make an arbitrary selection. + +The present proposal brings soundness to this situation by limiting the amount of data which can arrive on the Relay-chain from each Work Item, and by extension from each Work Package. The specific limit proposed is 4KB per Work Item, which if we assume an average of two Work Items per Package and 250 cores, comes to a manageable 2MB and leaves plenty of headroom. + +However, this does mean that pre-existing usage of UMP and HRMP are impossible. In any case, UMP is removed entirely from the Service API. + +To make up for this change, all non-"kernel" Relay-chain functionality will exist within Services (parachains under CoreChains, or possibly even actors under CorePlay). This includes staking and governance functionality. The development and deployment of XCMP avoids the need to place any datagrams on the Relay-chain which are not themselves meant for interpretation by it. APIs are provided for the few operations remaining which the Relay-chain must provide (validator updates, code updates and core assignments) but may only be used by Services holding the appropriate privileges. 
Taken together, then, none of HRMP, UMP or DMP will continue to exist.

An initial, hybrid deployment of CoreJam could see the Work Output size limits temporarily increased for the Parachains Service to ensure existing use-cases do not suffer, but with a published schedule for reducing them to the eventual 4KB limit. This would imply the need for graceful handling by the RcBA should the aggregated Work Outputs be too large.

### Notes on Implementation Order

In order to ease the migration from the current Polkadot on- and off-chain logic to this proposal, we can envision a partial implementation, or refactoring, which would facilitate the eventual proposal whilst remaining compatible with pre-existing usage and avoiding substantial alterations to code.

We therefore envision an initial version of this proposal with minimal modifications to current code:

1. Remain with WebAssembly rather than RISC-V, both for Service logic and for the subordinate environments which can be set up from Service logic. The introduction of Services is a permissioned action requiring governance intervention. Work Packages will otherwise execute as per the proposal. *Minor changes to the status quo.*
2. Attested Work Packages must finish running in time and must not panic. Therefore `WorkResult` must have an `Infallible` error type. If an Attestation is posted for a Work Package which panics or times out, then this is a slashable offence. *No change to the status quo.*
3. There should be full generalization over Work Package contents, as per the proposal, with the introduction of Authorizers, `refine`, `prune` and `accumulate`. *Additional code relative to the status quo.*

To optimize the present situation, a number of "natively implemented", fixed-function registrations will be provided. The Service of index zero will be used to represent the Parachains Service and will have a "native" (i.e. within the Wasm runtime) implementation of `refine` and `accumulate`. Secondly, a fixed-function set of Auth IDs (9,999 and lower) will simply represent Authorizers which accept only Work Packages containing a single Work Item of the Parachains Service pertaining to the progression of the parachain whose ID equals the Auth ID value.

Later implementation steps would polish (1) by replacing WebAssembly with RISC-V (with backwards compatibility) and polish (2) by supporting the posting of receipts of timed-out or failed Work Packages on-chain for RISC-V Services.

A final transition may migrate the Parachains Service to become a regular, permissionless Service module.

## Performance, Ergonomics and Compatibility

The present proposal is broadly compatible with the facilities of the Legacy Model pending the integration of a Service specific to Parachains. Unlike other Services, this is expected to be hard-coded into the Relay-chain runtime to maximize performance, compatibility and implementation speed.

Certain changes to active interfaces will be needed. Firstly, any software (such as _Cumulus_ and _Smoldot_) relying on the particular Relay-chain state trie keys (i.e. storage locations) used to track the code and head-data of parachains will need to be changed so that it instead queries the relevant key within the Parachains Service Child Trie.

Secondly, software which currently provides Proofs-of-Validity to Relay-chain Validators, such as _Cumulus_, would need to be updated to use the new Work Item/Work Package format.

## Testing, Security and Privacy

Standard Polkadot testing and security auditing applies.

The proposal introduces no new privacy concerns.
## Future Directions and Related Material

We expect to see several Services being built shortly after Coreplay is delivered.

## Drawbacks, Alternatives and Unknowns

Important considerations include:

1. In the case of composite Work Packages, whether to allow synchronous (and therefore causal) interactions between the Work Items. If such interactions were allowed, some sort of synchronisation sentinel would be needed to ensure that, should one subpackage fail to have the expected effects on its Service State (by virtue of the `accumulate` outcome for that subpackage), the `accumulate` of any causally entangled subpackages takes appropriate account of this (i.e. by dropping the entangled subpackage and not effecting any changes from it).

2. Work Items may need some degree of coordination in order to be useful to the `accumulate` function of their Service. To a large extent this is outside the scope of this proposal's computation model by design: through the authorization framework we assert that it is the concern of the Service and not of the Relay-chain validators themselves. However, we must ensure that certain requirements of the parachains use-case are practically fulfillable in *some* way. Within the legacy parachain model, PoVs:
   1. shouldn't be replayable;
   2. shouldn't require unbounded buffering in `accumulate` if they are submitted out-of-order;
   3. should be possible for validators to evaluate for ordering on a best-effort basis.

## Prior Art and References

None.