New NFT traits: granular and abstract interface #5620

Status: Open. Wants to merge 11 commits into base: master.

Conversation

mrshiposha
Contributor

@mrshiposha mrshiposha commented Sep 6, 2024

This PR introduces a new set of traits that represent different asset operations in a granular and abstract way.

The new abstractions provide an interface for collections and tokens for use in general and XCM contexts.

To make the review easier and the point clearer, this PR's code was extracted from #4300 (which contains the new XCM adapters). #4300 is now meant to become a follow-up to this one.

Note: Thanks to @franciscoaguirre for a very productive discussion in Matrix. His questions are used in the Q&A notes.

Motivation: issues of the existing traits v1 and v2

This PR is meant to solve several issues and limitations of the existing frame-support nonfungible traits (both v1 and v2).

Derivative NFTs limitations

The existing v1 and v2 nonfungible traits (both collection-less—"nonfungible.rs", singular; and in-collection—"nonfungibles.rs", plural) can create a new token only if its ID is already known.

Combined with the corresponding XCM adapter implementations for v1 collection-less and in-collection traits (and the unfinished one for v2), this means that, in general, the only supported derivative NFTs are those whose chain-local IDs can be derived by the Matcher so that the NFT engine can mint the token with the provided ID. The chain-local ID is presumed to be derived without the use of storage (i.e., statelessly) because none of the standard Matcher implementations are meant to look into storage.

To implement an alternative approach where chain-local derivative IDs are derived statefully, workarounds are needed: either a custom stateful Matcher, or a modification of the NFT engine if it doesn't support predefined IDs for new tokens.
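For illustration, a stateful derivation could be sketched like this in plain Rust (a HashMap stands in for chain storage; all names here are hypothetical, not part of any pallet):

```rust
use std::collections::HashMap;

/// Illustrative stand-in for an XCM asset identifier.
type XcmAssetId = String;
/// Illustrative chain-local NFT ID (e.g., sequential, as many chains use).
type LocalNftId = u32;

/// A stateful "matcher": it keeps a mapping from XCM IDs to chain-local IDs
/// and allocates the next sequential local ID the first time an XCM ID is seen.
struct StatefulMatcher {
    map: HashMap<XcmAssetId, LocalNftId>,
    next_id: LocalNftId,
}

impl StatefulMatcher {
    fn new() -> Self {
        Self { map: HashMap::new(), next_id: 0 }
    }

    /// Returns the chain-local ID for `xcm_id`, allocating one if needed.
    /// On-chain, `map` and `next_id` would live in storage.
    fn local_id(&mut self, xcm_id: &str) -> LocalNftId {
        if let Some(&id) = self.map.get(xcm_id) {
            return id;
        }
        let id = self.next_id;
        self.next_id += 1;
        self.map.insert(xcm_id.to_string(), id);
        id
    }
}
```

The point is that the mapping cannot be computed statelessly: the same XCM ID must always resolve to the same local ID, which requires a storage read.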

It is a valid use case if a chain has exactly one NFT engine, and its team wants to provide NFT derivatives in a way consistent with the rest of the NFTs on this chain.
Usually, chains that already support NFTs (Unique Network, Acala, Aventus, Moonbeam, etc.) use their own chain-local NFT IDs.
Of course, it is possible to introduce a separate NFT engine just for derivatives and use XCM IDs as chain-local IDs there.
However, if the chain has logic related to the NFT engine (e.g., fractionalizing), introducing a separate NFT engine for derivatives would require changing that logic or limiting it to originals.

Also, in this case, the clients would need to treat originals and derivatives differently, increasing their maintenance burden.

The more related logic for a given NFT engine exists on-chain, the more changes will be required to support another instance of the NFT engine for derivatives.

Q&A: AssetHub uses the two-pallet approach: local and foreign assets. Why is this not an issue there?

Since the primary goal of AssetHub (as far as I understand) is to host assets and not provide rich functionality around them (which is the task of other parachains), having a specialized NFT engine instance for derivatives is okay. Even if AssetHub were to provide NFT-related operations (e.g., fractionalization), I think the number of different kinds of such operations would be limited, so it would be pretty easy to maintain them for two NFT engines. I even believe that supporting chain-local derivative IDs on AssetHub would be needlessly more complicated than having two NFT engines.

Q&A: New traits open an opportunity for keeping derivatives on the same pallet. Thus, things like NFT fractionalization are reused without effort. Does it make sense to fractionalize a derivative?

I think it makes sense. Moreover, it could be one of the reasons for employing reserve-based transfer for an NFT. Imagine a chain with no such functionality, and you have an NFT on that chain. And you want to fractionalize that NFT. You can transfer the NFT to another chain that provides NFT fractionalization. This way, you can model shared ownership of the original asset via its derivative. The same would be true for any NFT operation not provided by the chain where the NFT is located, while another chain can provide the needed functionality.

Another thing about chain-local NFT IDs is that an NFT engine could provide some guarantees about its NFT IDs, such as that they are always sequential or convey some information. The chain's team might want to do the same for derivatives. In this case, it might be impossible to derive the derivative ID from the XCM ID statelessly (so the workarounds would be needed).

The existing adapters and traits don't directly support all of these cases. Workarounds could exist, but using them would increase integration costs, review effort, and maintenance burden.

The Polkadot SDK tries to provide general interfaces and tools, so it would be good to provide NFT interfaces/tools that are consistent and easily cover more use cases.

Design issues

Lack of generality

The existing traits (v1 and v2) are too concrete, leading to code duplication and inconvenience.

For example, two distinct sets of traits exist for collection-less and in-collection NFTs. The two sets are nearly the same. However, having two sets of traits necessitates providing two different XCM adapters. For instance, this PR introduced the NonFungibleAdapter (collection-less). The description states that the NonFungibleAdapter "will be useful for enabling cross-chain Coretime region transfers, as the existing NonFungiblesAdapter [1] is unsuitable for this purpose", which is true. It is unsuitable (without workarounds, at least).

The same will happen with any on-chain entity that wants to use NFTs via these interfaces. Hence, the very structure of the interfaces makes using NFTs as first-class citizens harder (due to code duplication). This is sad since NFTs could be utility objects similar to CoreTime regions. For instance, they could be various capability tokens, on-chain shared variables, in-game characters and objects, and all of that could interoperate.

Another example of this issue is the methods of collections, which are very similar to the corresponding methods of tokens: create_collection / mint_into, collection_attribute / attribute, and so on. In many ways, a collection could be considered a variant of a non-fungible token, so it shouldn't be surprising that the methods are analogous. Therefore, there could be a universal interface for these things.

Q&A: there's a lot of duplication between nonfungible and nonfungibles. The SDK has the same with fungible and fungibles. Is this also a problem with fungible tokens?

I could argue that it is also a problem for fungibles, but I believe they are okay as they are. Firstly, fungible tokens are a simpler concept since, in one way or another, they represent the money-like value abstraction. It seems the number of different kinds of related operations is bounded (in contrast to NFTs, which could be various utility objects with different related operations, just like objects in OOP).

Also, not all the things that induce duplication apply to the fungible(s) traits. For example, a "fungible collection" cannot be viewed as a "fungible asset", so having additional methods for "fungible collections" is okay. But at the same time, any collection (fungible or not) can be viewed as an NFT. It's not a "token" in the strict sense, but it is a unique object. This is precisely what NFTs represent.
An NFT collection often has a similar interface to NFTs: create/transfer/destroy/metadata-related operations, etc.
Of course, collections can have more methods that make sense only for collections but not their tokens, but this doesn't cancel the fact that collections can be viewed as another "kind" of NFTs.

Secondly, the fungible(s) trait sets are already granular. For example, multiple Inspect and Mutate traits are categorized by operation kind: there are Inspect/Mutate traits for metadata and separate traits for holds.
For comparison, the nonfungible(_v2)(s) trait sets have all the kinds of operations in uncategorized Inspect/Mutate/Transfer traits.

The fungible(s) traits are granular but not too abstract. I believe it is a good thing.
Using the abstract traits from this PR, even for fungibles, is possible, but I see no reason to do so. A more concrete interface for fungibles seems even better because the very notion of fungibles outlines the possible related operations.

Q&A: If it is not an issue for fungibles, why would this be an issue for NFTs?

Unlike fungibles, different NFTs could represent any object-like thing. Just like with objects in OOP, it is natural to expect them to have different inherent operations (e.g., different kinds of attributes, permission-based/role-based modification, etc.). The more abstract traits should help maintain interoperability between any NFT engine and other pallets. Even if we'd need some "adapters," they could be made easily because of the abstract traits.

An opinionated interface

Both v1 and v2 trait sets are opinionated.

The v1 set is less opinionated than v2, yet it also has some issues. For instance, why does the burn method provide a way to check if the operation is permitted, but transfer and set_attribute do not? In the transfer case, there is already an induced mistake in the XCM adapter. Even if we add an ownership check to all the methods, why should it be only the ownership check? There could be different permission checks. Even in this trait set, we can see that, for example, the destroy method for a collection takes a witness parameter in addition to the ownership check.

The same goes for v2 and even more.

For instance, the v2 mint_into, among other things, takes deposit_collection_owner, which is an implementation detail of pallet-nfts that shouldn't be part of a general interface.

It also introduces four different attribute kinds: metadata, regular attributes, custom attributes, and system attributes.
The motivation of why these particular attribute kinds are selected to be included in the general interface is unclear.
Moreover, it is unclear why not all attribute kinds are mutable (not all have the corresponding methods in the Mutate trait). And even those that can be modified (attribute and metadata) have inconsistent interfaces:

  • set_attribute sets the attribute without any permission checks.
  • set_metadata sets the metadata using the who: AccountId parameter for a permission check.
  • set_metadata is a collection-less variant of set_item_metadata, while set_attribute has the same name in both trait sets.
  • In contrast to set_metadata, other methods (even the set_item_metadata!) that do the permission check use Option<AccountId> instead of AccountId.
  • The same goes for the corresponding clear_* methods.

This is all very confusing. I believe this confusion has already led to many inconsistencies in implementation and may one day lead to bugs.
For example, if you look at the implementation of v2 traits in pallet-nfts, you can see that attribute returns an attribute from CollectionOwner namespace or metadata, but set_attribute sets an attribute in Pallet namespace (i.e., it sets a system attribute!).

Future-proofing

Similar to how pallet-nfts introduced new kinds of attributes, other NFT engines could also introduce different kinds of NFT operations or sophisticated permission checks. Instead of bloating the general interface with concrete use cases, I believe it would be better to make it granular and flexible, which this PR aspires to achieve. This way, we preserve the consistency of the interface and make its implementation more straightforward for an NFT engine (since the engine implements only what it needs). Moreover, pallets like pallet-nft-fractionalization that use NFT engines will work with more of them, increasing the interoperability between NFT engines and other on-chain mechanisms.

New frame-support traits

The new asset_ops module is added to frame_support::traits::tokens.
It defines several "asset operations".

We avoid duplicating interfaces that express the same idea by making it possible to implement them on different structures representing different asset kinds. For example, similar operations can be performed on Collections and NFTs, such as creating Collections/NFTs, transferring their ownership, managing their metadata, etc.

The following "operations" are defined:

  • InspectMetadata
  • UpdateMetadata
  • Create
  • Transfer
  • Destroy
  • Stash
  • Restore

Q&A: What do InspectMetadata and UpdateMetadata operations mean?

InspectMetadata is an interface meant to inspect any information about an asset. This information could be 1) attribute bytes, 2) a flag representing the asset's ability to be transferred, or 3) any other "feature" of the asset.

UpdateMetadata is the corresponding interface for updating this information.

The alternative names for them are InspectFeature and UpdateFeature.

Q&A: What do Stash/Restore operations mean?

This can be considered a variant of "Locking," but I decided to call it "Stash" because the actual "lock" operation is represented by the CanTransfer metadata strategy. "Stash" implies losing ownership of the token to the chain itself. The symmetrical "Restore" operation may restore the token to any location, not just the before-stash owner. It depends on the particular chain business logic.
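To make the semantics concrete, here is a minimal, self-contained Rust sketch (plain types and an in-memory Vec standing in for chain storage; all names are illustrative, not the PR's actual items) of Stash losing ownership to the chain and Restore sending the token to an arbitrary beneficiary:

```rust
/// Owner of a token: a user account, or the chain itself while stashed.
#[derive(Clone, PartialEq, Debug)]
enum Owner {
    Account(String),
    Chain,
}

/// In-memory stand-in for token ownership storage: (token id, owner).
type Owners = Vec<(u32, Owner)>;

fn owner_of(owners: &Owners, id: u32) -> Option<&Owner> {
    owners.iter().find(|(i, _)| *i == id).map(|(_, o)| o)
}

fn set_owner(owners: &mut Owners, id: u32, owner: Owner) {
    match owners.iter_mut().find(|(i, _)| *i == id) {
        Some(entry) => entry.1 = owner,
        None => owners.push((id, owner)),
    }
}

/// "Stash": the chain itself takes ownership. Unlike a plain lock
/// (the CanTransfer strategy), the previous owner loses the token.
fn stash(owners: &mut Owners, id: u32) -> Result<(), String> {
    let owned_by_account = matches!(owner_of(owners, id), Some(Owner::Account(_)));
    if !owned_by_account {
        return Err("token is absent or already stashed".into());
    }
    set_owner(owners, id, Owner::Chain);
    Ok(())
}

/// "Restore": the token may go to *any* beneficiary,
/// not necessarily the pre-stash owner.
fn restore(owners: &mut Owners, id: u32, beneficiary: String) -> Result<(), String> {
    if owner_of(owners, id) != Some(&Owner::Chain) {
        return Err("token is not stashed".into());
    }
    set_owner(owners, id, Owner::Account(beneficiary));
    Ok(())
}
```

The chain's business logic decides who may call restore and with which beneficiary.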

Each operation can be implemented multiple times using different strategies associated with this operation.
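As a self-contained illustration of one operation implemented with multiple strategies (plain Rust with illustrative types and an in-memory store; the real traits live in frame_support and differ in detail):

```rust
use std::sync::Mutex;

/// In-memory stand-in for pallet storage: (token id, owner).
static OWNERS: Mutex<Vec<(u32, String)>> = Mutex::new(Vec::new());

/// An asset type with an ID (cf. the PR's `AssetDefinition`).
trait AssetDefinition {
    type Id;
}

/// A single `Transfer` operation trait, generic over the strategy,
/// so it can be implemented once per strategy.
trait Transfer<Strategy>: AssetDefinition {
    fn transfer(id: &Self::Id, strategy: Strategy) -> Result<(), String>;
}

/// Strategy: transfer to the given account without any checks.
struct JustDo(pub String);
/// Strategy: "from-to" transfer that checks the current owner.
struct FromTo(pub String, pub String);

struct Token;
impl AssetDefinition for Token {
    type Id = u32;
}

impl Transfer<JustDo> for Token {
    fn transfer(id: &Self::Id, strategy: JustDo) -> Result<(), String> {
        let JustDo(dest) = strategy;
        let mut owners = OWNERS.lock().unwrap();
        match owners.iter_mut().find(|(i, _)| i == id) {
            Some(entry) => entry.1 = dest,
            None => owners.push((*id, dest)),
        }
        Ok(())
    }
}

impl Transfer<FromTo> for Token {
    fn transfer(id: &Self::Id, strategy: FromTo) -> Result<(), String> {
        let FromTo(from, to) = strategy;
        let is_owner = OWNERS
            .lock()
            .unwrap()
            .iter()
            .any(|(i, o)| i == id && *o == from);
        if !is_owner {
            return Err("`from` is not the current owner".into());
        }
        // Reuse the unchecked impl, as the PR's examples do.
        <Token as Transfer<JustDo>>::transfer(id, JustDo(to))
    }
}
```

A pallet config can then require exactly the strategy it needs, e.g. a bound like `Transfer<FromTo>`, without caring which strategies else the engine supports.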

This PR provides the implementation of the new traits for pallet-uniques.

Usage example: pallet-nft-fractionalization

In this in-fork draft PR, you can check out how these new traits are used in the pallet-nft-fractionalization.

A generic example: operations and strategies

Let's illustrate how we can implement the new traits for an NFT engine.

Imagine we have an NftEngine pallet (or a Smart Contract accessible from Rust; it doesn't matter), and we need to expose the following to other on-chain mechanisms:

  • Collection "from-to" transfer and a transfer without a check.
  • Similar transfers for NFTs
  • NFT force-transfers
  • A flag representing the ability of a collection to be transferred
  • The same flag for NFTs
  • NFT byte data
  • NFT attributes like in the pallet-uniques (byte data under a byte key)

Here is how this will look:

pub struct Collection<PalletInstance>(PhantomData<PalletInstance>);
pub struct Token<PalletInstance>(PhantomData<PalletInstance>);

impl AssetDefinition for Collection<NftEngine> { type Id = /* the collection ID type */; }
impl AssetDefinition for Token<NftEngine> { type Id = /* the *full* NFT ID type */; }

// --- Collection operations ---

// The collection transfer without checks 
impl Transfer<JustDo<AccountId>> for Collection<NftEngine> {
	fn transfer(class_id: &Self::Id, strategy: JustDo<AccountId>) -> DispatchResult {
		let JustDo(dest) = strategy;

		todo!("use NftEngine internals to perform the collection transfer")
	}
}

// The collection "from-to" transfer
impl Transfer<FromTo<AccountId>> for Collection<NftEngine> {
	fn transfer(class_id: &Self::Id, strategy: FromTo<AccountId>) -> DispatchResult {
		let FromTo(from, to) = strategy;
		
		todo!("check if `from` is the current owner");
		
		// Reuse the previous impl
		Self::transfer(class_id, JustDo(to))
	}
}

// A flag representing the ability of a collection to be transferred
impl InspectMetadata<CanTransfer> for Collection<NftEngine> {
	fn inspect_metadata(
		class_id: &Self::Id,
		_can_transfer: CanTransfer,
	) -> Result<bool, DispatchError> {
		todo!("use NftEngine internals to learn if the collection can be transferred")
	}
}

// --- NFT operations ---

// The NFT transfer implementation is similar in structure.

// The NFT transfer without checks
impl Transfer<JustDo<AccountId>> for Token<NftEngine> {
	fn transfer(instance_id: &Self::Id, strategy: JustDo<AccountId>) -> DispatchResult {
		let JustDo(dest) = strategy;

		todo!("use NftEngine internals to perform the NFT transfer")
	}
}

// The NFT "from-to" transfer
impl Transfer<FromTo<AccountId>> for Token<NftEngine> {
	fn transfer(instance_id: &Self::Id, strategy: FromTo<AccountId>) -> DispatchResult {
		let FromTo(from, to) = strategy;

		todo!("check if `from` is the current owner");

		// Reuse the previous impl
		Self::transfer(instance_id, JustDo(to))
	}
}

// There are meta-strategies like WithOrigin, which carries an Origin and any internal strategy.
// It abstracts origin checks for any possible operation.
// For example, we can do this to implement NFT force-transfers
impl Transfer<WithOrigin<RuntimeOrigin, JustDo<AccountId>>> for Token<NftEngine> {
	fn transfer(
		instance_id: &Self::Id,
		strategy: WithOrigin<RuntimeOrigin, JustDo<AccountId>>,
	) -> DispatchResult {
		let WithOrigin(origin, just_do) = strategy;

		ensure_root(origin)?;
		Self::transfer(instance_id, just_do)
	}
}

// A flag representing the ability of an NFT to be transferred
impl InspectMetadata<CanTransfer> for Token<NftEngine> {
	fn inspect_metadata(
		instance_id: &Self::Id,
		_can_transfer: CanTransfer,
	) -> Result<bool, DispatchError> {
		todo!("use NftEngine internals to learn if the NFT can be transferred")
	}
}

// The NFT bytes (notice that we have a different return type because of the "Bytes" strategy).
impl InspectMetadata<Bytes> for Token<NftEngine> {
	fn inspect_metadata(
		instance_id: &Self::Id,
		_bytes: Bytes,
	) -> Result<Vec<u8>, DispatchError> {
		todo!("use NftEngine internals to get the NFT bytes")
	}
}

// Some strategies like Bytes and CanTransfer are generic so that they can have different "parameters".
// We can add a custom byte flavor called "Attribute" to make the attribute logic for NFTs. Its parameter carries the key.
// Note: in this PR, pallet-uniques provides the Attribute flavor: https://github.com/UniqueNetwork/polkadot-sdk/blob/45855287b8647f34a4b3015facc714232c2ebe3e/substrate/frame/uniques/src/types.rs#L136
// For self-containment, let's declare the pallet-uniques' `Attribute` here.
pub struct Attribute<'a>(pub &'a [u8]);

// The NFT attributes implementation
impl<'a> InspectMetadata<Bytes<Attribute<'a>>> for Token<NftEngine> {
	fn inspect_metadata(
		instance_id: &Self::Id,
		strategy: Bytes<Attribute>,
	) -> Result<Vec<u8>, DispatchError> {
		let Bytes(Attribute(attribute_key)) = strategy;

		todo!("use NftEngine internals to get the attribute bytes")
	}
}

For further examples, see how pallet-uniques implements these operations for collections and items.

Footnotes

  1. Don't confuse NonFungibleAdapter (collection-less) and NonFungiblesAdapter (in-collection; see "s" in the name).

@franciscoaguirre
Contributor

Thank you for tackling the issue of generic, granular traits for non-fungible tokens!

I have a few proposals I think would improve these traits:

  1. Change Strategy to Ticket and create a method called can_transfer in the Transfer trait itself.

We use this pattern quite a lot: first validating that an action can be done and then doing it.
By using Rust's type system, we can make sure the action can only be done if the validation was called beforehand.
We do this with an associated type we usually call Ticket.

trait Transfer {
  type Ticket;
  fn can_transfer(id: _) -> Self::Ticket;
  fn transfer(id: _, ticket: Self::Ticket);
}

You can see this pattern in the SendXcm trait, for delivering messages.

This Ticket can contain any information needed for the action. Seeing the "strategies" you defined, it looks like they could all be accomplished with anything from type Ticket = AccountId to type Ticket = (Origin, AccountId, AccountId).
Using an associated type makes it only possible to implement this trait once. You could use a generic, but I don't see the use-case of implementing multiple strategies.

  2. Have methods in InspectMetadata (which could also be called Inspect) for common attributes and another for generic ones.

trait Inspect {
  type Key;
  type Value;
  fn owner(id: _) -> Option<Self::AccountId>;
  // ...other common attributes we expect every NFT engine could have...
  fn attribute(id: _, key: Self::Key) -> Result<Value>;
}

This could allow an NFT engine to define a schema of attributes it supports by setting Key as an enum.
The Value is there since we could have very simple or very complex return values depending on the NFT engine and the key.

What do you think?

@mrshiposha
Contributor Author

@franciscoaguirre Thanks for the feedback!

About the Inspect trait

I initially considered similar things about the Inspect trait but moved away in the design process.
Indeed, having an associated Key type is okay, and it could be an enum. However, there is an issue with the Value type.
For instance, imagine we want NFTs (or similar unique objects) to have not only byte data but also some useful data that can interact with the rest of the chain (for example, other pallets can do something to this data). For instance, we could host a CanUpdate bool flag representing the given user's ability to update the NFT data. So, the Key will be the following enum.

enum NftKey<'a, AccountId> {
    Bytes {
        // Suppose there are different byte values under different keys
        // as in pallet-uniques or pallet-nfts
        key: &'a [u8]
    },
    CanUpdate(AccountId),
}

What type of return value should the Inspect::attribute have in this case?
We'd want to get a Vec<u8> when we passed the NftKey::Bytes, and we'd want bool when we passed the NftKey::CanUpdate.
However, we must pick one type. Surely, we could return just Vec<u8> and encode bool into it for the CanUpdate case, but I find it inherently cumbersome to use and, if misused, might lead to bugs (imagine if we have more enum variants).

The current implementation in the PR provides a way to get a suitable return type for the given input type: the Bytes strategy has Value = Vec<u8>, and CanUpdateMetadata has Value = bool.

The analogous reasoning applies to the UpdateMetadata trait. We need suitable input data types for each key variant: new_data: Option<&[u8]> for Bytes and new_flag: bool for CanUpdate. The current implementation does that.
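The idea can be sketched in self-contained Rust (all names are illustrative and differ from the PR's actual traits): each inspect strategy carries its own Value type, so Bytes yields Vec<u8> while CanUpdate yields bool, and no single return type has to fit every key.

```rust
/// Each inspect strategy declares the value type its inspection yields
/// (cf. the PR's `MetadataInspectStrategy`).
trait InspectStrategy {
    type Value;
}

/// Strategy: read byte data stored under a byte key.
struct Bytes<'a>(pub &'a [u8]);
impl<'a> InspectStrategy for Bytes<'a> {
    type Value = Vec<u8>;
}

/// Strategy: check whether the given account may update the object's data.
struct CanUpdate(pub String);
impl InspectStrategy for CanUpdate {
    type Value = bool;
}

/// The inspect operation: its return type follows the chosen strategy.
/// (The id is hardcoded to u32 here; the PR uses `AssetDefinition::Id`.)
trait InspectMetadata<Strategy: InspectStrategy> {
    fn inspect_metadata(id: &u32, strategy: Strategy) -> Result<Strategy::Value, String>;
}

/// A toy NFT type for illustration.
struct Token;

impl<'a> InspectMetadata<Bytes<'a>> for Token {
    fn inspect_metadata(id: &u32, strategy: Bytes<'a>) -> Result<Vec<u8>, String> {
        let Bytes(key) = strategy;
        // Illustration only: the "stored" bytes are the id followed by the key.
        let mut stored = vec![*id as u8];
        stored.extend_from_slice(key);
        Ok(stored)
    }
}

impl InspectMetadata<CanUpdate> for Token {
    fn inspect_metadata(_id: &u32, strategy: CanUpdate) -> Result<bool, String> {
        let CanUpdate(who) = strategy;
        // Illustration only: a single hardcoded privileged account.
        Ok(who == "alice")
    }
}
```

An UpdateMetadata counterpart would analogously pick its input type per strategy (e.g., Option<&[u8]> for Bytes, bool for CanUpdate).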

These are the main reasons why it is implemented this way.

Also, I believe it is nice when pallets require the bare minimum in their configs. The current design allows pallets to require only the needed "features" of NFT-like objects.

About the Transfer trait

I see that using the SendXcm-like pattern, we can decouple the can_transfer check from the actual transfer operation.
I agree that this might be desirable, at least in some situations.

However, what parameters should can_transfer accept in this design? In your example, it takes just an id and returns a Ticket. Assuming Ticket = (Origin, AccountId, AccountId), how can the can_transfer produce these values? Aren't these values supposed to be the input parameters to the check?

By the way, if you want a design where some parameters are checked first, then you get a ticket value and pass the ticket into the operation, you can do that even with the current design (traits' names are subject to change):

pub struct Ticket(/* some data for the transfer */);
impl TransferStrategy for Ticket { type Success = (); }

pub struct TransferTicketRequest<TransferParams>(pub TransferParams);
impl<TP> MetadataInspectStrategy for TransferTicketRequest<TP> {
    type Value = Ticket;
}

impl<TP> InspectMetadata<Instance, TransferTicketRequest<TP>> for NftEngine {
    fn inspect_metadata(
        id: &Self::Id,
        ticket_request: TransferTicketRequest<TP>,
    ) -> Result<Ticket, DispatchError> {
        let TransferTicketRequest(params) = ticket_request;
        let ticket = todo!("examine the `params` and decide if we should create the ticket");

        Ok(ticket)
    }
}

impl Transfer<Instance, Ticket> for NftEngine {
    fn transfer(
        id: &Self::Id,
        ticket: Ticket
    ) -> DispatchResult {
        todo!("examine the ticket, if ok perform the transfer")
    }
}

About the multiple strategies:

I don't see the use-case of implementing multiple strategies.

What if you need to perform different types of checks? For instance, in the example above, the Ticket contains /* some data for the transfer */. But what data? Some places might require something like FromTo, others JustDo (i.e., an unchecked operation), and maybe even custom checks with sophisticated parameters. Multiple strategies allow for providing a more granular interface, and pallet configs can require exactly what they need.

@mrshiposha
Contributor Author

mrshiposha commented Oct 15, 2024

@franciscoaguirre, a follow-up to my previous comment.

There is room for simplification. Although I believe multiple strategies are a good thing (as per the reasons provided in the previous comment), it seems there is no need for a notion of "asset kinds." The AssetKind generic parameter tells what asset kind a given operation handles, but this shouldn't concern the user of a trait.

For example, if an XCM adapter or a pallet's config requires Transfer<FromTo<AccountId>>, it shouldn't care what asset kind will be transferred. The only thing that matters is how the asset is identified and what parameters are required to perform the transfer.
This way, if a pallet config requires some operation with the specified parameters, any asset implementing the needed interface will be acceptable, be it an NFT, a collection, or any other unique thing on-chain.

In this design, the implementor of the traits "defines" an asset kind. For example:

// Assume we have an NFT pallet `Pallet<T: Config, I: 'static>`

pub struct Collection<PalletInstance>(PhantomData<PalletInstance>);
impl<T: Config<I>, I: 'static> AssetDefinition for Collection<Pallet<T, I>> {
    type Id = /* the collection ID type */;
}

// Collection "from-to" transfer
// Note that there is NO `AssetKind` parameter
impl<T: Config<I>, I: 'static> Transfer<FromTo<AccountId>> for Collection<Pallet<T, I>> {
	fn transfer(collection_id: &Self::Id, strategy: FromTo<AccountId>) -> DispatchResult {
		let FromTo(from, to) = strategy;

		todo!("check if `from` is the current owner using Pallet<T, I>");

		// Reuse `Transfer<JustDo<AccountId>>` impl (it is assumed in this example)
		Self::transfer(collection_id, JustDo(to))
	}
}

pub struct Nft<PalletInstance>(PhantomData<PalletInstance>);
impl<T: Config<I>, I: 'static> AssetDefinition for Nft<Pallet<T, I>> {
    type Id = /* the *full* NFT ID type */;
}

// The NFT "from-to" transfer
// Note that there is NO `AssetKind` parameter
impl<T: Config<I>, I: 'static> Transfer<FromTo<AccountId>> for Nft<Pallet<T, I>> {
	fn transfer(nft_id: &Self::Id, strategy: FromTo<AccountId>) -> DispatchResult {
		let FromTo(from, to) = strategy;

		todo!("check if `from` is the current owner using Pallet<T, I>");

		// Reuse `Transfer<JustDo<AccountId>>` impl (it is assumed in this example)
		Self::transfer(nft_id, JustDo(to))
	}
}
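Under this design, a consumer can be generic over any asset implementing the needed operation, regardless of asset kind. A self-contained sketch (with minimal stand-in trait definitions, not the PR's exact items):

```rust
/// Minimal stand-ins for the PR's traits, for illustration.
trait AssetDefinition {
    type Id;
}

struct FromTo(pub String, pub String);

trait Transfer<Strategy>: AssetDefinition {
    fn transfer(id: &Self::Id, strategy: Strategy) -> Result<(), String>;
}

/// A consumer (think of a fractionalization pallet) only requires that
/// the asset supports a "from-to" transfer; the asset kind is irrelevant.
fn lock_asset<A: Transfer<FromTo>>(
    id: &A::Id,
    owner: String,
    custodian: String,
) -> Result<(), String> {
    A::transfer(id, FromTo(owner, custodian))
}

/// Two different asset kinds; both are acceptable to `lock_asset`.
struct Collection;
impl AssetDefinition for Collection {
    type Id = u32;
}
impl Transfer<FromTo> for Collection {
    fn transfer(_id: &u32, _strategy: FromTo) -> Result<(), String> {
        Ok(())
    }
}

struct Nft;
impl AssetDefinition for Nft {
    // A "full" NFT id: (collection id, token id).
    type Id = (u32, u32);
}
impl Transfer<FromTo> for Nft {
    fn transfer(_id: &(u32, u32), _strategy: FromTo) -> Result<(), String> {
        Ok(())
    }
}
```

The consumer never names an "asset kind": only the Id type and the strategy parameters matter.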

@mrshiposha
Contributor Author

mrshiposha commented Oct 30, 2024

@franciscoaguirre I simplified the traits. Only one generic parameter is used by operations.
The PR's description and pallet-nft-fractionalization example are updated accordingly.

@xlc
Contributor

LGTM. Should have some tests as usage examples and verify implementation correctness

@mrshiposha
Contributor Author

The asset-ops tests have been added to pallet-uniques.
