From 3302fe7131beffb6d26e12e225b1fc6d2d941233 Mon Sep 17 00:00:00 2001 From: adpthegreat Date: Thu, 26 Sep 2024 19:26:07 +0100 Subject: [PATCH] - Followed guidelines in contributing.md - Refactored it to use newest dependencies - Replaced @metaplex- foundation/js with @metaplex-foundation-umi - Replaced old methods with their corresponding umi methods - Replaced all old code snippets - Gave clearer variable names - Removed redundant code - Ran prettier --- .../state-compression/compressed-nfts.md | 1142 ++++++----------- 1 file changed, 410 insertions(+), 732 deletions(-) diff --git a/content/courses/state-compression/compressed-nfts.md b/content/courses/state-compression/compressed-nfts.md index 0b6241d9c..b5b9d5158 100644 --- a/content/courses/state-compression/compressed-nfts.md +++ b/content/courses/state-compression/compressed-nfts.md @@ -20,36 +20,43 @@ description: - Supporting RPC providers **index** cNFT data offchain when the cNFT is minted so that you can use the **Read API** to access the data. - The **Metaplex Bubblegum program** is an abstraction on top of the **State - Compression** program that enables you to create, mint, and manage - cNFT collections. + Compression** program that enables you to create, mint, and manage cNFT + collections. ## Lesson Compressed NFTs (cNFTs) are exactly what their name suggests: NFTs whose structure takes up less account storage than traditional NFTs. Compressed NFTs -use a technique called [**State Compression**](https://solana.com/docs/advanced/state-compression) to store data in a way that -drastically reduces costs. +use a technique called +[**State Compression**](https://solana.com/docs/advanced/state-compression) to +store data in a way that drastically reduces costs. -Solana’s transaction costs are so low that most users don't think about the cost of minting NFTs at scale. However, minting 1 million -NFTs can cost around 24,000 SOL. In contrast, cNFTs can be -set up and minted for 10 SOL or less, meaning costs can be reduced -by over 1000x by using cNFTs. +Solana’s transaction costs are so low that most users don't think about the cost +of minting NFTs at scale. However, minting 1 million NFTs can cost around 24,000 +SOL. In contrast, cNFTs can be set up and minted for 10 SOL or less, meaning +costs can be reduced by over 1000x by using cNFTs. -While cNFTs are much cheaper, they can be more complex to work with. Over time, tools will evolve to make using cNFTs as easy as traditional NFTs. +While cNFTs are much cheaper, they can be more complex to work with. Over time, +tools will evolve to make using cNFTs as easy as traditional NFTs. -But for now, you’ll still need to understand the technical details, so let’s dive in! +But for now, you’ll still need to understand the technical details, so let’s +dive in! ### A theoretical overview of cNFTs Most of the costs of traditional NFTs comes from the need for account storage -space. Compressed NFTs (cNFTs) use State Compression to store data in -the blockchain’s cheaper **ledger state**, using the more expensive account storage space -only to store a “fingerprint”, or **hash**, of the data. This hash allows you to -cryptographically verify that data has not been altered. - -To store and verify these hashes , we use a special binary tree -structure known as a [**Concurrent Merkle Tree**](https://developers.metaplex.com/bubblegum/concurrent-merkle-trees). 
This tree structure combines data through hashing in a determininistic way, eventually creating a single smaller hash called a "root hash" that's stored onchain, this process ccompresses the data, hence the “compression.” The steps to this process
-are:
+space. Compressed NFTs (cNFTs) use State Compression to store data in the
+blockchain’s cheaper **ledger state**, using the more expensive account storage
+space only to store a “fingerprint”, or **hash**, of the data. This hash allows
+you to cryptographically verify that data has not been altered.
+
+To store and verify these hashes, we use a special binary tree structure known
+as a
+[**Concurrent Merkle Tree**](https://developers.metaplex.com/bubblegum/concurrent-merkle-trees).
+This tree structure combines data through hashing in a deterministic way,
+eventually creating a single, smaller hash called a "root hash" that's stored
+onchain. This process compresses the data, hence the “compression.” The steps
+to this process are:

1. Take any piece of data
2. Create a hash of the data
@@ -64,30 +71,32 @@ are:
   truth” can go through the same process and compare the final hash without
   having to store all the data onchain

-One challenge in the process above is how to make data available if its not directly stored in an account. Since this hashing happens onchain, all the
-data exists in the ledger state and could theoretically be retrieved from the
-original transaction by replaying the entire chain state from the origin. However,
-it’s easier (though still complicated) to have an **indexer**
-track and index this data as the transactions occur. This ensures there is an
-offchain “cache” of the data that anyone can access and subsequently verify
-against the onchain root hash.
+One challenge in the process above is how to make data available if it’s not
+directly stored in an account. Since this hashing happens onchain, all the data
+exists in the ledger state and could theoretically be retrieved from the
+original transaction by replaying the entire chain state from the origin.
+However, it’s easier (though still complicated) to have an **indexer** track and
+index this data as the transactions occur. This ensures there is an offchain
+“cache” of the data that anyone can access and subsequently verify against the
+onchain root hash.

This process is _complicated_. We’ll cover some of the key concepts below but
-don’t worry if you don’t understand it right away. We’ll cover more theory in the
-state compression lesson and focus on applying it to NFTs. By the end of this lesson, you’ll be able to work with cNFTs even if you
-don’t fully understand the whole state compression process.
+don’t worry if you don’t understand it right away. We’ll cover more theory in
+the state compression lesson and focus on applying it to NFTs. By the end of
+this lesson, you’ll be able to work with cNFTs even if you don’t fully
+understand the whole state compression process.

#### Concurrent Merkle trees

-A **Merkle tree** is a binary tree structure that represents data as a single hash. Every
-leaf node in the structure is a hash of its inner data while every branch is a
-hash of its child leaf hashes. In turn, branches are also hashed together until
-eventually one final root hash remains.
+A **Merkle tree** is a binary tree structure that represents data as a single
+hash. Every leaf node in the structure is a hash of its inner data while every
+branch is a hash of its child leaf hashes. 
In turn, branches are also hashed +together until eventually one final root hash remains. Any modification to leaf data changes the root hash. This causes a problem when -multiple transactions in the same slot try to update the tree at the same time. Since -these transactions must execute in series, all but the first will fail since the -root hash and proof passed in will have been invalidated by the first +multiple transactions in the same slot try to update the tree at the same time. +Since these transactions must execute in series, all but the first will fail +since the root hash and proof passed in will have been invalidated by the first transaction to be executed. A **Concurrent Merkle Tree** is a Merkle tree that stores a secure changelog of @@ -97,26 +106,29 @@ changelog can be used as a source of truth to allow for concurrent changes to be made to the tree. When working with a concurrent Merkle tree, there are three variables that -determine the size, the cost to create the tree, and how many number changes that can be made to the tree at once: +determine the size, the cost to create the tree, and how many number changes +that can be made to the tree at once: 1. Max depth 2. Max buffer size 3. Canopy depth -The **max depth** is the maximum number of layers or "hops" to get from any leaf to the root -of the tree. Since its a type of binary tree, every leaf is connected only -to one other leaf. Max depth can then logically be used to calculate the number -of nodes for the tree with `2 ^ maxDepth`. +The **max depth** is the maximum number of layers or "hops" to get from any leaf +to the root of the tree. Since its a type of binary tree, every leaf is +connected only to one other leaf. Max depth can then logically be used to +calculate the number of nodes for the tree with `2 ^ maxDepth`. The **max buffer size** is effectively the maximum number of concurrent changes that you can make to a tree within a single slot with the root hash still being valid. -The **canopy depth** is the number of proof nodes that are stored onchain for verification. To verify a leaf you need the complete proof path for -the tree. The complete proof path is made up of one proof node for every “layer” -of the tree, i.e. a max depth of 14 means there are 14 proof nodes. The larger the tree, the more proof nodes there are, and each -node adds 32 bytes to a transaction, which can quickly exceed the -maximum transaction size limit , so caching proof nodes onchain helps manaage this. +The **canopy depth** is the number of proof nodes that are stored onchain for +verification. To verify a leaf you need the complete proof path for the tree. +The complete proof path is made up of one proof node for every “layer” of the +tree, i.e. a max depth of 14 means there are 14 proof nodes. The larger the +tree, the more proof nodes there are, and each node adds 32 bytes to a +transaction, which can quickly exceed the maximum transaction size limit , so +caching proof nodes onchain helps manaage this. Each of these three values, max depth, max buffer size, and canopy depth, comes with a tradeoff. Increasing the value of any of these values increases the size @@ -133,68 +145,71 @@ concurrent writes do you need. #### SPL State Compression and Noop Programs -The SPL State Compression Program simplifies and standardizes the process of using merkle trees across the Solana ecosystem. It provides key functionalities for -initializing Merkle trees, managing tree leafs (i.e. add, update, remove data), -and verifying leaf data. 
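+To see what "verifying leaf data" means in practice, here is a small,
+illustrative sketch: given a leaf hash and its proof path, you recompute hashes
+pairwise up the tree and compare the result with the stored root hash. This
+mirrors the hashing steps listed earlier; the onchain programs use their own
+hashing scheme and data layout, so treat this purely as a conceptual
+illustration, not the real implementation:
+
+```typescript
+import { createHash } from "node:crypto";
+
+// Hash two sibling nodes together with SHA-256 (illustrative only).
+const hashPair = (a: Buffer, b: Buffer): Buffer =>
+  createHash("sha256").update(Buffer.concat([a, b])).digest();
+
+// Recompute the root from a leaf hash and its proof path (one node per layer).
+// `leafIndex` tells us whether the current node is the left or right sibling.
+function computeRoot(leaf: Buffer, proof: Buffer[], leafIndex: number): Buffer {
+  let node = leaf;
+  let index = leafIndex;
+  for (const sibling of proof) {
+    node = index % 2 === 0 ? hashPair(node, sibling) : hashPair(sibling, node);
+    index = Math.floor(index / 2);
+  }
+  return node;
+}
+
+// Verifying leaf data then amounts to comparing the recomputed root with the
+// root hash stored onchain.
+```
+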
+The SPL State Compression Program simplifies and standardizes the process of
+using merkle trees across the Solana ecosystem. It provides key functionalities
+for initializing Merkle trees, managing tree leaves (i.e. add, update, remove
+data), and verifying leaf data.

-The State Compression Program also leverages a separate “no op” (No Operation) program whose
-primary purpose log leaf data to the ledger state making it easier to index leaf data.
+The State Compression Program also leverages a separate “Noop” (no operation)
+program whose primary purpose is to log leaf data to the ledger state, making
+it easier to index leaf data.

#### Use the Ledger State for storage

-The Solana ledger is a continuous record of signed transactions,
-theoretically traceable all the way back to the genesis block. This means any
-data that has ever been put into a transaction exists in the ledger.
+The Solana ledger is a continuous record of signed transactions, theoretically
+traceable all the way back to the genesis block. This means any data that has
+ever been put into a transaction exists in the ledger.

When you want to store compressed data, you pass it to the State Compression
-program where it gets hashed and emitted as an “event” to the no op program. The
+program where it gets hashed and emitted as an “event” to the Noop program. The
hash is then stored in the corresponding concurrent Merkle tree. Since the data
-passed through a transaction and even exists on the no op program logs, it will
+passed through a transaction and even exists on the Noop program logs, it will
exist on the ledger state permanently.

#### Index data for easy lookup

-Normally, you would access onchain data by fetching it from an account.
-However, when using state compression, its a bit more complicated .
+Normally, you would access onchain data by fetching it from an account. However,
+when using state compression, it’s a bit more complicated.

-Instead of being stored in an account, compressed data resides in the ledger state.
-The easiest way to access the full data is through the logs of the no op
+Instead of being stored in an account, compressed data resides in the ledger
+state. The easiest way to access the full data is through the logs of the Noop
instruction, but while this data will in a sense exist in the ledger state
forever, it will likely be inaccessible through validators after a certain
period of time.

To save space and be more performant, validators don’t retain every transaction
back to the genesis block. The specific amount of time you’ll be able to access
-the no op instruction logs related to your data will vary based on the validator,
-but eventually you’ll lose access to it if you’re relying directly on
+the Noop instruction logs related to your data will vary based on the validator,
+but eventually you’ll lose access to them if you’re relying directly on
instruction logs.

-Technically, it is possible to replay the entire transaction history back to the genesis block but
-this is impractical and unperformant for most teams.
-Instead, a better approach is using an indexer that will observe the events sent to the no op
-program and store the relevant data offchain. That way you don’t need to worry
-about old data becoming inaccessible.
+Technically, it is possible to replay the entire transaction history back to the
+genesis block, but this is impractical and inefficient for most teams. Instead,
+a better approach is to use an indexer that will observe the events sent to the
+Noop program and store the relevant data offchain. 
That way you don’t need to +worry about old data becoming inaccessible. ### Create a cNFT Collection -Now that we've covered the theory, let’s focus on the -main point of this lesson: how to create a cNFT collection. +Now that we've covered the theory, let’s focus on the main point of this lesson: +how to create a cNFT collection. Fortunately, you can use tools created by Solana Foundation, the Solana developer community, and Metaplex to simplify the process. Specifically, we’ll be using the `@solana/spl-account-compression` SDK, the Metaplex Bubblegum program, and the Bubblegum program’s corresponding typescript SDK -`@metaplex-foundation/mpl-bugglegum`. +`@metaplex-foundation/mpl-bugglegum` alongside the Metaplex umi library +`@metaplex-foundation/umi`. #### Prepare metadata -Before starting, you’ll need to prepare your NFT metadata similar to how you would with a Candy Machine. An NFT is simply a token with -metadata that follows the NFT standard. In other words, heres an example of how it should look like: +Before starting, you’ll need to prepare your NFT metadata similar to how you +would with a Candy Machine. An NFT is simply a token with metadata that follows +the NFT standard. In other words, heres an example of how it should look like: ```json { @@ -220,37 +235,91 @@ metadata that follows the NFT standard. In other words, heres an example of how } ``` -Depending on your project, you may be able to generate this metadata dynamically or you have a separate JSON file prepared for each cNFT . You’ll also -need any other assets referenced by the JSON, such as the `image` url shown in -the example above. +Depending on your project, you may be able to generate this metadata dynamically +or you have a separate JSON file prepared for each cNFT . You’ll also need any +other assets referenced by the JSON, such as the `image` url shown in the +example above. + +#### Setting up Umi + +Before we start creating Collection NFTs we have to setup Umi. Umi is a modular +framework for building and using JavaScript clients for Solana onchain programs +that was created by Metaplex. Note that Umi provides distinct implementations +for many components compared to web3.js, such as Keypairs, PublicKeys, and +Connections, but converting from web3.js versions to Umi equivalents is simple. +To begin, we first need to initialize an Umi instance. + +```typescript +import { createUmi } from "@metaplex-foundation/umi-bundle-defaults"; +import { clusterApiUrl } from "@solana/web3.js"; + +const umi = createUmi(clusterApiUrl("devnet")); +``` + +Next we have to attach a signer to our Umi instance + +```typescript +import { clusterApiUrl } from "@solana/web3.js"; +import { createTree, mplBubblegum } from "@metaplex-foundation/mpl-bubblegum"; +import { dasApi } from "@metaplex-foundation/digital-asset-standard-api"; +import { keypairIdentity } from "@metaplex-foundation/umi"; +import { createUmi } from "@metaplex-foundation/umi-bundle-defaults"; +import { getOrCreateKeypair } from "./utils"; + +const umi = createUmi(clusterApiUrl("devnet")); + +//get keypair from .env file or create a new one +const wallet = await getOrCreateKeypair("Wallet1"); + +// convert to Umi compatible keypair +const umiKeypair = umi.eddsa.createKeypairFromSecretKey(wallet.secretKey); + +// Load the DAS API and MPL Bubblegum plugins into Umi, and set the Umi identity using a keypair, which acts as the signer for transactions. 
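+// Note: the dasApi() plugin is what exposes the DAS (Read API) methods used
+// later in this lesson, such as umi.rpc.getAsset and umi.rpc.getAssetProof.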
+umi.use(keypairIdentity(umiKeypair)).use(mplBubblegum()).use(dasApi()); +``` #### Create Collection NFT If you want your cNFTs to be part of a collection, you’ll need to create a -Collection NFT **before** you start minting cNFTs. A Collection NFT is a traditional NFT -that serves as the reference binding your cNFTs together into a single collection. -To create a Collection NFT you can use the `@metaplex-foundation/js` library. Just make -sure you set `isCollection` field to `true`. +Collection NFT **before** you start minting cNFTs. A Collection NFT is a +traditional NFT that serves as the reference binding your cNFTs together into a +single collection. To create a Collection NFT we will use the `createNft` method +from the `@metaplex-foundation/mpl-token-metadata` library. Just make sure you +set `isCollection` field to `true`. You can find the documentation for the +createNft method on the "Minting Assets" page, in the "Create helpers" section +at +[Metaplex token-metadata documentation](https://developers.metaplex.com/token-metadata/mint). + +In the code below, `generateSigner(umi)` is used to create a new keypair (or +signer) that represents the mint address for the Collection NFT. This signer is +a unique address that will serve as the mint for your Collection NFT. It ensures +that each Collection NFT has a distinct mint address. ```typescript -const collectionNft = await metaplex.nfts().create({ - uri: someUri, +import { percentAmount, generateSigner } from "@metaplex-foundation/umi"; +import { createNft } from "@metaplex-foundation/mpl-token-metadata"; + +const collectionMint = generateSigner(umi); + +await createNft(umi, { + mint: collectionMint, name: "Collection NFT", - sellerFeeBasisPoints: 0, - updateAuthority: somePublicKey, - mintAuthority: somePublicKey, - tokenStandard: 0, + uri: randomUri, + authority: umi.identity, + updateAuthority: umi.identity.publicKey, + sellerFeeBasisPoints: percentAmount(0), symbol: "Collection", isMutable: true, isCollection: true, -}); +}).sendAndConfirm(umi, { send: { commitment: "finalized" } }); ``` #### Create Merkle tree Account -When creating compressed NFTs (cNFTs), you need to setup an an account for the Concurrent Merkle Tree. This Merkle tree account belongs to the -SPL State Compression program. Before you can do any cNFT related actions, you -need to create an empty Merkle tree account with the appropriate size. +When creating compressed NFTs (cNFTs), you need to setup an an account for the +Concurrent Merkle Tree. This Merkle tree account belongs to the SPL State +Compression program. Before you can do any cNFT related actions, you need to +create an empty Merkle tree account with the appropriate size. The variables impacting the size of the account are: @@ -258,9 +327,9 @@ The variables impacting the size of the account are: 2. Max buffer size 3. Canopy depth -The Max depth and Max buffer size must be selected from an existing set of valid pairs. The -table below shows the valid pairs along with the number of cNFTs that can be -created with those values. +The Max depth and Max buffer size must be selected from an existing set of valid +pairs. The table below shows the valid pairs along with the number of cNFTs that +can be created with those values. | Max Depth | Max Buffer Size | Max Number of cNFTs | | --------- | --------------- | ------------------- | @@ -311,210 +380,108 @@ bidding system for your cNFTs. 
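+If you want to see how a given combination of max depth, max buffer size, and
+canopy depth translates into account size and rent before you create a tree,
+the `@solana/spl-account-compression` SDK can estimate it for you. A quick
+sketch, assuming its `getConcurrentMerkleTreeAccountSize` helper (exact SOL
+amounts depend on the current rent rate):
+
+```typescript
+import { Connection, clusterApiUrl } from "@solana/web3.js";
+import { getConcurrentMerkleTreeAccountSize } from "@solana/spl-account-compression";
+
+const connection = new Connection(clusterApiUrl("devnet"), "confirmed");
+
+// Compare a tiny tree with no canopy against a large tree with a deep canopy.
+for (const [maxDepth, maxBufferSize, canopyDepth] of [
+  [3, 8, 0],
+  [20, 256, 10],
+] as const) {
+  const sizeInBytes = getConcurrentMerkleTreeAccountSize(
+    maxDepth,
+    maxBufferSize,
+    canopyDepth,
+  );
+  const rentLamports =
+    await connection.getMinimumBalanceForRentExemption(sizeInBytes);
+  console.log(
+    `maxDepth=${maxDepth} maxBufferSize=${maxBufferSize} canopyDepth=${canopyDepth}:`,
+    `${sizeInBytes} bytes, ~${rentLamports / 1e9} SOL in rent`,
+  );
+}
+```
+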
The canopy effectively caches proof nodes onchain so you don’t have to pass all of them into the transaction, allowing for more complex transactions. -Increasing any of these three values increases the size of the account, which also -increasing the cost associated with creating it. Weigh the benefits accordingly -when choosing the values. +Increasing any of these three values increases the size of the account, which +also increasing the cost associated with creating it. Weigh the benefits +accordingly when choosing the values. -Once you know these values, you can use the `createAllocTreeIx` helper function -from the `@solana/spl-account-compression` TS SDK to create the instruction for -creating the Merkle Tree account. +Once you know these values, you can use the `createTree` method from the +@metaplex-foundation/mpl-bubblegum package to create your tree. ```typescript -import { createAllocTreeIx } from "@solana/spl-account-compression" - -// Generate a treeKeypair for the Merkle tree -const treeKeypair = Keypair.generate() - -const allocTreeInstruction = await createAllocTreeIx( - connection, - treeKeypair.publicKey, - payer.publicKey, - { maxDepth: 20; maxBufferSize: 256 }, - canopyDepth -) -``` - -Note that createAllocTreeIx is simply a helper function for calculating the size of the Merkle tree -account and creating the instruction to send to the System Program to -allocating space for it. This function doesn’t interact with any -compression-specific programs (like actually adding data to the tree). - -#### Use Bubblegum to Initialize Your Tree +import { createTree } from "@metaplex-foundation/mpl-bubblegum"; +import { + ValidDepthSizePair, +} from "@solana/spl-account-compression" -With the empty tree account created, the next step is to initialize the tree using the Bubblegum program. This process involves creating a -tree config account which allows the Bubblegum Program to add cNFT-specific tracking and functionality. + const maxDepthSizePair: ValidDepthSizePair = { + maxDepth: 3, + maxBufferSize: 8, + }; -Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` Typescript SDK provides the -helper function `createCreateTreeInstruction` for calling the `create_tree` -instruction on the Bubblegum program. As part of the call, you’ll need to derive -the `treeAuthority` PDA expected by the program. This PDA uses the tree’s -address as a seed. + const merkleTree = generateSigner(umi); -```typescript -import { - createAllocTreeIx, - SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - SPL_NOOP_PROGRAM_ID, -} from "@solana/spl-account-compression" -import { - PROGRAM_ID as BUBBLEGUM_PROGRAM_ID, - createCreateTreeInstruction, -} from "@metaplex-foundation/mpl-bubblegum" - -... 
-//Derive the treeAuthority PDA from the tree's public key -const [treeAuthority, _bump] = PublicKey.findProgramAddressSync( - [treeKeypair.publicKey.toBuffer()], - BUBBLEGUM_PROGRAM_ID -) -//Create the instruction to initialize the tree using Bubblegum -const createTreeInstruction = createCreateTreeInstruction( - { - treeAuthority, //Derived PDA for treeAuthority - merkleTree: treeKeypair.publicKey,// Public key of the Merkle Tree - payer: payer.publicKey, //Payer for transaction fees - treeCreator: payer.publicKey,// the address that will be listed as the tree creator - logWrapper: SPL_NOOP_PROGRAM_ID,// no op program for exposing data to indexers - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, // Compression program ID - }, - { - maxBufferSize: 256, // Max buffer size - maxDepth: 20, // Max depth of the tree + const builder = await createTree(umi, { + merkleTree, + maxDepth: maxDepthSizePair.maxDepth, // Max depth of the tree + maxBufferSize: maxDepthSizePair.maxBufferSize // Max buffer size, public: false, // Set to false to restrict minting to the tree creator/delegate - }, - BUBBLEGUM_PROGRAM_ID -) + }); + ``` -The list below shows the required input for this helper function: - -- `accounts` - An object representing the accounts required by the instruction. - This includes: - - `treeAuthority` - Bubblegum expects this to be a PDA derived using the - Merkle tree address as a seed - - `merkleTree` - The Merkle tree account - - `payer` - The address paying for transaction fees, rent, etc. - - `treeCreator` - The address to list as the tree creator - - `logWrapper` - The program to use to expose the data to indexers through - logs; this should be the address of the SPL Noop program unless you have - some other custom implementation - - `compressionProgram` - The compression program to use for initializing the - Merkle tree; this should be the address of the SPL State Compression program - unless you have some other custom implementation -- `args` - An object representing additional arguments required by the - instruction. This includes: - - `maxBufferSize` - The max buffer size of the Merkle tree - - `maxDepth` - The max depth of the Merkle tree - - `public` - When set to `true`, anyone will be able to mint cNFTs from the - tree; when set to `false`, only the tree creator or tree delegate will be - able to min cNFTs from the tree +In the code above, we define an object of type `ValidDepthSizePair` from the +`@solana/spl-account-compression` program, setting maxDepth to 3 and +maxBufferSize to 8 to define valid liimits for the Merkle tree. We then generate +a merkleTree signer using `generateSigner` with the umi instance, after we've +done this, we now invoke `createTree`, passing the umi instance, the merkleTree +signer, and the parameters from maxDepthSizePair to configure the tree's maximum +depth and buffer size, then we set the public parameter to false, restricting +minting to the tree's creator or delegate , doing so is optional. When submitted, this will invoke the `create_tree` instruction on the Bubblegum -program. This instruction does three things: +program under the hood. This instruction does three things: 1. Creates the tree config PDA account -2. Initializes the tree config account with appropriate initial values +2. Initializes the `TreeConfig` account with appropriate initial values that + holds additional data exclusive to compressed NFTs such as the tree creator, + whether the tree is public. 3. 
Issues a CPI to the State Compression program to initialize the empty Merkle tree account -Feel free to take a look at the program code -[here](https://github.com/metaplex-foundation/mpl-bubblegum/blob/main/programs/bubblegum/program/src/lib.rs#L887). +Feel free to take a look at the source code for the create_tree instruction and +the TreeConfig account -#### Mint cNFTs +- [Create Tree](https://github.com/metaplex-foundation/mpl-bubblegum/blob/df5b5feae8c161a7e22b9878a3b30a62f92ee864/programs/bubblegum/program/src/processor/create_tree.rs#L40) -Now that we have the Merkle tree account and its corresponding Bubblegum tree config account -initialized, its time to mint cNFTs to the tree, we have the choice of using either `mint_v1` or `mint_to_collection_v1` Bubblegum instructions, depending on whether -you want to the minted cNFT to be part of a collection. +- [Tree Config](https://github.com/metaplex-foundation/mpl-bubblegum/blob/42ffed35da6b2a673efacd63030a360eac3ae64e/programs/bubblegum/program/src/state/mod.rs#L17) -Version 0.7 of the `@metaplex-foundation/mpl-bubblegum` Typescript SDK provides helper -functions `createMintV1Instruction` and `createMintToCollectionV1Instruction` to -make it easier for you to create the instructions. +#### Mint cNFTs + +Now that we have the Merkle tree account and its corresponding Bubblegum tree +config account initialized, its time to mint cNFTs to the tree, we use `mintV1` +or `mintToCollectionV1` from the `@metaplex-foundation/mpl-bubblegum` package, +depending on whether we want the minted cNFT to be part of a collection. -Both functions will require you to pass in the NFT metadata and a list of -accounts required to mint the cNFT. Below is an example of minting to a -collection: +1. mintV1 ```typescript -const mintWithCollectionInstruction = createMintToCollectionV1Instruction( - { - payer: payer.publicKey, - merkleTree: treeAddress, - treeAuthority, - treeDelegate: payer.publicKey, - leafOwner: destination, - leafDelegate: destination, - collectionAuthority: payer.publicKey, - collectionAuthorityRecordPda: BUBBLEGUM_PROGRAM_ID, - collectionMint: collectionDetails.mint, - collectionMetadata: collectionDetails.metadata, - editionAccount: collectionDetails.masterEditionAccount, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - logWrapper: SPL_NOOP_PROGRAM_ID, - bubblegumSigner, - tokenMetadataProgram: TOKEN_METADATA_PROGRAM_ID, - }, - { - metadataArgs: Object.assign(nftMetadata, { - collection: { key: collectionDetails.mint, verified: false }, - }), +await mintV1(umi, { + leafOwner, + merkleTree, + metadata: { + name: "My Compressed NFT", + uri: "https://example.com/my-cnft.json", + sellerFeeBasisPoints: 0, // 0% + collection: none(), + creators: [ + { address: umi.identity.publicKey, verified: false, share: 100 }, + ], }, -); +}).sendAndConfirm(umi); ``` -Notice that there are two arguments for the helper function: `accounts` and -`args`. The `args` parameter is simply the NFT metadata, while `accounts` is an -object listing the accounts required by the instruction. There are admittedly a -lot of them: - -- `payer` - the account that will pay for the transaction fees, rent, etc. 
-- `merkleTree` - the Merkle tree account -- `treeAuthority` - the tree authority; should be the same PDA you derived - previously -- `treeDelegate` - the tree delegate; this is usually the same as the tree - creator -- `leafOwner` - the desired owner of the compressed NFT being minted -- `leafDelegate` - the desired delegate of the compressed NFT being minted; this - is usually the same as the leaf owner -- `collectionAuthority` - the authority of the collection NFT -- `collectionAuthorityRecordPda` - optional collection authority record PDA; - there typically is none, in which case you should put the Bubblegum program - address -- `collectionMint` - the mint account for the collection NFT -- `collectionMetadata` - the metadata account for the collection NFT -- `editionAccount` - the master edition account of the collection NFT -- `compressionProgram` - the compression program to use; this should be the - address of the SPL State Compression program unless you have some other custom - implementation -- `logWrapper` - the program to use to expose the data to indexers through logs; - this should be the address of the SPL Noop program unless you have some other - custom implementation -- `bubblegumSigner` - a PDA used by the Bubblegrum program to handle collection - verification -- `tokenMetadataProgram` - the token metadata program that was used for the - collection NFT; this is usually always the Metaplex Token Metadata program - -Minting without a collection requires fewer accounts, none of which are -exclusive to minting without a collection. You can take a look at the example -below. - ```typescript -const mintWithoutCollectionInstruction = createMintV1Instruction( - { - payer: payer.publicKey, - merkleTree: treeAddress, - treeAuthority, - treeDelegate: payer.publicKey, - leafOwner: destination, - leafDelegate: destination, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - logWrapper: SPL_NOOP_PROGRAM_ID, - }, - { - message: nftMetadata, +await mintToCollectionV1(umi, { + leafOwner, + merkleTree, + collectionMint, + metadata: { + name: "My Compressed NFT", + uri: "https://example.com/my-cnft.json", + sellerFeeBasisPoints: 0, // 0% + collection: { key: collectionMint, verified: false }, + creators: [ + { address: umi.identity.publicKey, verified: false, share: 100 }, + ], }, -); +}).sendAndConfirm(umi); ``` +Both functions require you to pass the NFT metadata and the accounts required to +mint the NFT `leafOwner`, `merkleTree`, but the mintToCollectionV1 requires an +addition collectionMint account which is the mint address of the Collection NFT +to which the cNFT will be part. + ### Interact with cNFTs It’s important to note that cNFTs _are not_ SPL tokens. That means your code @@ -529,12 +496,15 @@ The simplest way to fetch data from an existing cNFT is to use the Read API, you’ll need to use a supporting RPC Provider. Metaplex maintains a (likely non-exhaustive) [list of RPC providers](https://developers.metaplex.com/bubblegum/rpcs) that -support the Read API. In this lesson we’ll be using +support the Read API. + +In this lesson we’ll be using [Helius](https://docs.helius.dev/compression-and-das-api/digital-asset-standard-das-api) as they have free support for Devnet. To fetch a specific cNFT using the Read API you need to derive the cNFT’s asset -ID, from the leaf index (which you track). However, after minting cNFTs you’ll have at most two pieces of information: +ID, from the leaf index (which you track). 
However, after minting cNFTs you’ll +have at most two pieces of information: 1. The transaction signature 2. The leaf index (possibly) @@ -548,8 +518,8 @@ index. This is a reasonable assumption for most mints given that the minting will be controlled by your code and can be set up sequentially so that your code can -track which index is going to be used for each mint. that is, the first mint will -use index 0, the second index 1, etc. +track which index is going to be used for each mint. that is, the first mint +will use index 0, the second index 1, etc. Once you have the leaf index, you can derive the cNFT’s corresponding asset ID. When using Bubblegum, the asset ID is a PDA derived using the Bubblegum program @@ -564,32 +534,28 @@ happen and stores the cNFT metadata that was hashed and stored in the Merkle tree. This enables them to surface that data when requested. This asset id is what the indexer uses to identify the particular asset. -Heres how you can fetch the cNFT using the `getLeafAssetId` helper function from the -Bubblegum SDK, and the `getAsset` method provided by your RPC provider: +Heres how you can fetch the cNFT using the `findLeafAssetIdPda` helper function +from the Bubblegum SDK, and the `getAsset` method provided by your RPC provider: ```typescript -const assetId = await getLeafAssetId(treeAddress, new BN(leafIndex)); -const response = await fetch(process.env.RPC_URL, { - method: "POST", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - jsonrpc: "2.0", - id: "my-id", - method: "getAsset", - params: { - id: assetId, - }, - }), +const assetId = findLeafAssetIdPda(umi, { + merkleTree, + leafIndex: leaf.nonce, }); +``` + +Now that you have the Asset ID , you can now fetch the cNFT, we will simply use +the `getAsset` method provided by the supporting RPC provider and the dasApi +library: -const { result } = await response.json(); -console.log(JSON.stringify(result, null, 2)); +```typescript +const rpcAsset = await umi.rpc.getAsset(assetId); ``` -After fetching the cNFT using the getAsset RPC method, the returned JSON object will contain metadata that is comprehensive of what a traditional NFT’s -on and offchain metadata would look like combined. For example, you can find -the cNFT attributes at `content.metadata.attributes` or the image at -`content.files.uri`. +After fetching the cNFT using the getAsset RPC method, the returned JSON object +will contain metadata that is comprehensive of what a traditional NFT’s on and +offchain metadata would look like combined. For example, you can find the cNFT +attributes at `content.metadata.attributes` or the image at `content.files.uri`. #### Query cNFTs @@ -614,18 +580,15 @@ to learn how to use them correctly. Just as with a standard SPL token transfer, security is important. An SPL token transfer, however, makes verifying transfer authority very easy. It’s built into the SPL Token program and standard signing. A compressed token’s ownership is -harder to verify. The actual verification will happen program-side, but -your client-side code needs to provide additional information to make it -possible. - -While there is a Bubblegum `createTransferInstruction` helper function, there is -more assembly required than usual. Specifically, the Bubblegum program needs to -verify that the entirety of the cNFT’s data is what the client asserts before a -transfer can occur. 
The entirety of the cNFT data has been hashed and stored as -a single leaf on the Merkle tree, and the Merkle tree is simply a hash of all -the tree’s leafs and branches. Because of this, you can’t simply tell the -program what account to look at and have it compare that account’s `authority` -or `owner` field to the transaction signer. +harder to verify. The actual verification will happen program-side, but your +client-side code needs to provide additional information to make it possible. + +The Bubblegum program needs to verify that the entirety of the cNFT’s data is +what the client asserts before a transfer can occur. The entirety of the cNFT +data has been hashed and stored as a single leaf on the Merkle tree, and the +Merkle tree is simply a hash of all the tree’s leafs and branches. Because of +this, you can’t simply tell the program what account to look at and have it +compare that account’s `authority` or `owner` field to the transaction signer. Instead, you need to provide the entirety of the cNFT data and any of the Merkle tree’s proof information that isn’t stored in the canopy. That way, the program @@ -660,118 +623,33 @@ const assetDataResponse = await fetch(process.env.RPC_URL, { }); const assetData = (await assetDataResponse.json()).result; -const assetProofResponse = await fetch(process.env.RPC_URL, { - method: "POST", - headers: { "Content-Type": "application/json" }, - body: JSON.stringify({ - jsonrpc: "2.0", - id: "my-id", - method: "getAssetProof", - params: { - id: assetId, - }, - }), -}); -const assetProof = (await assetProofResponse.json()).result; -``` - -The third step is to fetch the Merkle tree account. We do this by using the `ConcurrentMerkleTreeAccount` type from -`@solana/spl-account-compression`: - -```typescript -const treePublicKey = new PublicKey(assetData.compression.tree); +import { + getAssetWithProof, + transfer, +} from "@metaplex-foundation/mpl-bubblegum"; -const treeAccount = await ConcurrentMerkleTreeAccount.fromAccountAddress( - connection, - treePublicKey, -); +const assetWithProof = await getAssetWithProof(umi, assetId); ``` -Step four is the most complex step. Using the three pieces of -information gathered, you’ll need to assemble the proof path for the cNFT’s -corresponding leaf. The proof path is represented as accounts passed to the -program instruction. The program uses each of the account addresses as proof -nodes to prove the leaf data is what you say it is. +You can use the transfer method from the `@metaplex-foundation/mpl-bubblegum` +package. The method requires two arguments: the `umi` instance and an object +containing the following fields: -The full proof is provided by the indexer as shown above in `assetProof`. -However, you can exclude the same number of tail-end accounts from the proof as -the depth of the canopy. +- `assetWithProof` - Data representing the asset and its associated Merkle + proof. +- `leafOwner` - the owner of the leaf (cNFT) in question +- `newLeafOwner` - the address of the new owner after the transfer -```typescript -const canopyDepth = treeAccount.getCanopyDepth() || 0; - -const proofPath: AccountMeta[] = assetProof.proof - .map((node: string) => ({ - pubkey: new PublicKey(node), - isSigner: false, - isWritable: false, - })) - .slice(0, assetProof.proof.length - canopyDepth); -``` - -Finally, you can assemble the transfer instruction. 
The instruction helper -function, `createTransferInstruction`, requires the following arguments: - -- `accounts` - a list of instruction accounts, as expected; they are as follows: - - `merkleTree` - the Merkle tree account - - `treeAuthority` - the Merkle tree authority - - `leafOwner` - the owner of the leaf (cNFT) in question - - `leafDelegate` - the delegate of the leaf (cNFT) in question; if no delegate - has been added then this should be the same as `leafOwner` - - `newLeafOwner` - the address of the new owner post-transfer - - `logWrapper` - the program to use to expose the data to indexers through - logs; this should be the address of the SPL Noop program unless you have - some other custom implementation - - `compressionProgram` - the compression program to use; this should be the - address of the SPL State Compression program unless you have some other - custom implementation - - `anchorRemainingAccounts` - this is where you add the proof path -- `args` - additional arguments required by the instruction; they are: - - `root` - the root Merkle tree node from the asset proof; this is provided by - the indexer as a string and must be converted to bytes first - - `dataHash` - the hash of the asset data retrieved from the indexer; this is - provided by the indexer as a string and must be converted to bytes first - - `creatorHash` - the hash of the cNFT creator as retrieved from the indexer; - this is provided by the indexer as a string and must be converted to bytes - first - - `nonce` - used to ensure that no two leafs have the same hash; this value - should be the same as `index` - - `index` - the index where the cNFT’s leaf is located on the Merkle tree - -An example of this is shown below. Note that the first 3 lines of code grab -additional information nested in the objects shown previously so they are ready -to go when assembling the instruction itself. +const leafOwner = new PublicKey(assetData.ownership.owner); ```typescript -const treeAuthority = treeAccount.getAuthority(); -const leafOwner = new PublicKey(assetData.ownership.owner); -const leafDelegate = assetData.ownership.delegate - ? new PublicKey(assetData.ownership.delegate) - : leafOwner; - -const transferInstruction = createTransferInstruction( - { - merkleTree: treePublicKey, - treeAuthority, - leafOwner, - leafDelegate, - newLeafOwner: receiver, - logWrapper: SPL_NOOP_PROGRAM_ID, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - anchorRemainingAccounts: proofPath, - }, - { - root: [...new PublicKey(assetProof.root.trim()).toBytes()], - dataHash: [ - ...new PublicKey(assetData.compression.data_hash.trim()).toBytes(), - ], - creatorHash: [ - ...new PublicKey(assetData.compression.creator_hash.trim()).toBytes(), - ], - nonce: assetData.compression.leaf_id, - index: assetData.compression.leaf_id, - }, -); +import { transfer } from "@metaplex-foundation/mpl-bubblegum"; + +await transfer(umi, { + ...assetWithProof, + leafOwner: currentLeafOwner, + newLeafOwner: newLeafOwner.publicKey, +}).sendAndConfirm(umi); ``` ### Conclusion @@ -784,7 +662,8 @@ functionality, take a look at the [Bubblegum client source code](https://github.com/metaplex-foundation/mpl-bubblegum/tree/main/clients/js-solita) and leverage the helper functions it provides. -Keep in mind that compression is still new. Available tooling will improve quickly but the principles you’ve learned in this lesson will likely remain the +Keep in mind that compression is still new. 
Available tooling will improve +quickly but the principles you’ve learned in this lesson will likely remain the same. These principles can also be broadened to arbitrary state compression, so be sure to master them here so you’re ready for more fun stuff in future lessons! @@ -837,11 +716,10 @@ We will be writing all of our code in the `index.ts`. ### 2. Create the Merkle tree account -We’ll start by creating the Merkle tree account. Let’s wrap this in a -function that will eventually create _and_ initialize the account. We’ll put it -below our `main` function in `index.ts`. Let’s call it -`createAndInitializeTree`. For this function to work, it will need the following -parameters: +We’ll start by creating the Merkle tree account. Let’s wrap this in a function +that will eventually create _and_ initialize the account. We’ll put it below in +our `index.ts`. Let’s call it `createAndInitializeTree`. For this function to +work, it will need the following parameters: - `connection` - a `Connection` to use for interacting with the network. - `payer` - a `Keypair` that will pay for transactions. @@ -849,10 +727,6 @@ parameters: `@solana/spl-account-compression`. It’s a simple object with properties `maxDepth` and `maxBufferSize` that enforces a valid combination of the two values. -- `canopyDepth` - a number for the canopy depth In the body of the function, - we’ll generate a new address for the tree, then create the instruction for - allocating a new Merkle tree account by calling `createAllocTreeIx` from - `@solana/spl-account-compression`. ```typescript async function createAndInitializeTree( @@ -862,18 +736,10 @@ async function createAndInitializeTree( canopyDepth: number, ) { const treeKeypair = Keypair.generate(); - - const allocTreeInstruction = await createAllocTreeIx( - connection, - treeKeypair.publicKey, - payer.publicKey, - maxDepthSizePair, - canopyDepth, - ); } ``` -### 3. Use Bubblegum to initialize the Merkle tree and create the tree config account +### 2. Use Bubblegum to initialize the Merkle tree and create the tree config account Now that the instruction for creating the tree is ready, we can create an instruction for invoking `create_tree` on the Bubblegum program. This will @@ -889,9 +755,6 @@ This instruction needs us to provide the following: - `payer` - the transaction fee payer - `treeCreator` - the address of the tree creator; we’ll make this the same as `payer` - - `logWrapper` - make this the `SPL_NOOP_PROGRAM_ID` - - `compressionProgram` - make this the `SPL_ACCOUNT_COMPRESSION_PROGRAM_ID` -- `args` - a list of instruction arguments; this includes: - `maxBufferSize` - the buffer size from our function’s `maxDepthSizePair` parameter - `maxDepth` - the max depth from our function’s `maxDepthSizePair` parameter @@ -904,106 +767,73 @@ transaction. 
Keep in mind that the transaction needs to be signed by both the ```typescript async function createAndInitializeTree( - connection: Connection, payer: Keypair, maxDepthSizePair: ValidDepthSizePair, - canopyDepth: number, ) { - const treeKeypair = Keypair.generate(); - - const allocTreeInstruction = await createAllocTreeIx( - connection, - treeKeypair.publicKey, - payer.publicKey, - maxDepthSizePair, - canopyDepth, - ); + const merkleTree = generateSigner(umi); - const [treeAuthority, _bump] = PublicKey.findProgramAddressSync( - [treeKeypair.publicKey.toBuffer()], - BUBBLEGUM_PROGRAM_ID, - ); + const builder = await createTree(umi, { + merkleTree, + maxDepth: maxDepthSizePair.maxDepth, // Max depth of the tree, + maxBufferSize: maxDepthSizePair.maxBufferSize, // Max buffer size, + public: false, // Set to false to restrict minting to the tree creator/delegate + }); - const createTreeInstruction = createCreateTreeInstruction( - { - treeAuthority, - merkleTree: treeKeypair.publicKey, - payer: payer.publicKey, - treeCreator: payer.publicKey, - logWrapper: SPL_NOOP_PROGRAM_ID, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - }, - { - maxBufferSize: maxDepthSizePair.maxBufferSize, - maxDepth: maxDepthSizePair.maxDepth, - public: false, - }, - ); + builder.sendAndConfirm(umi); - const transaction = new Transaction().add(allocTreeInstruction, createTreeInstruction); - tx.feePayer = payer.publicKey; + const merkleTreeAddress = merkleTree.publicKey; try { - const transactionSignature = await sendAndConfirmTransaction( - connection, - transaction, - [treeKeypair, payer], - { - commitment: "confirmed", - skipPreflight: true, - }, + const explorerLink = getExplorerLink( + "transaction", + merkleTreeAddress, + "devnet", ); - const explorerLink = getExplorerLink("transaction",transactionSignature, "devnet") - - console.log(`Transaction submitted: ${explorerLink}`) + console.log(`Transaction submitted: ${explorerLink}`); + console.log("Tree Address:", merkleTreeAddress); - console.log("Tree Address:", treeKeypair.publicKey.toBase58()); - - return treeKeypair.publicKey; - } catch (err: any) { - console.error("\nFailed to create Merkle tree:", err); - throw err; + return merkleTreeAddress; + } catch (error: any) { + console.error("\nFailed to create merkle tree:", error); + throw error; } } ``` -To test what you have so far, call -`createAndInitializeTree` from `main` and provide small values for the max depth -and max buffer size. +To test what you have so far, call `createAndInitializeTree` and provide small +values for the max depth and max buffer size. 
```typescript - function main() { - const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); - const wallet = await getOrCreateKeypair("Wallet_1"); - await airdropSolIfNeeded(wallet.publicKey); +const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); +const wallet = await getOrCreateKeypair("Wallet1"); +await airdropSolIfNeeded(wallet.publicKey); - const maxDepthSizePair: ValidDepthSizePair = { - maxDepth: 3, - maxBufferSize: 8, - }; +const maxDepthSizePair: ValidDepthSizePair = { + maxDepth: 3, + maxBufferSize: 8, +}; - const canopyDepth = 0; +const canopyDepth = 0; - const treeAddress = await createAndInitializeTree( - connection, - wallet, - maxDepthSizePair, - canopyDepth, - ); -} +const treeAddress = await createAndInitializeTree( + connection, + wallet, + maxDepthSizePair, + canopyDepth, +); ``` -Remember that Devnet SOL is limited so if you test too many times you -might run out of Devnet SOL before we get to minting. To test, in your terminal -run the following: +Remember that Devnet SOL is limited so if you test too many times you might run +out of Devnet SOL before we get to minting. To test, in your terminal run the +following: `npm run start` #### 4. Mint cNFTs to your tree -Believe it or not, that’s all it takes to set up your tree to compressed -NFTs! Now let’s focus on the minting process. +Believe it or not, that’s all it takes to set up your tree to compressed NFTs! +Now let’s focus on the minting process. First, let’s declare a function called `mintCompressedNftToCollection`. It will need the following parameters: @@ -1036,143 +866,57 @@ them real quick: - `payer` - the account that will pay for the transaction fees, rent, etc. - `merkleTree` - the Merkle tree account -- `treeAuthority` - the tree authority; should be the same PDA you derived - previously -- `treeDelegate` - the tree delegate; this is usually the same as the tree - creator -- `leafOwner` - the desired owner of the compressed NFT being minted -- `leafDelegate` - the desired delegate of the compressed NFT being minted; this - is usually the same as the leaf owner -- `collectionAuthority` - the authority of the collection NFT -- `collectionAuthorityRecordPda` - optional collection authority record PDA; - there typically is none, in which case you should put the Bubblegum program - address -- `collectionMint` - the mint account for the collection NFT -- `collectionMetadata` - the metadata account for the collection NFT -- `editionAccount` - the master edition account of the collection NFT -- `compressionProgram` - the compression program to use; this should be the - address of the SPL State Compression program unless you have some other custom - implementation -- `logWrapper` - the program to use to expose the data to indexers through logs; - this should be the address of the SPL Noop program unless you have some other - custom implementation -- `bubblegumSigner` - a PDA used by the Bubblegrum program to handle collection - verification -- `tokenMetadataProgram` - the token metadata program that was used for the - collection NFT; this is usually always the Metaplex Token Metadata program - -When you put it all together, this is what it’ll look like: +- `treeAuthority` - the tree authority; sh When you put it all together, this is + what it’ll look like: ```typescript - function mintCompressedNftToCollection( +function mintCompressedNftToCollection( connection: Connection, payer: Keypair, treeAddress: PublicKey, collectionDetails: CollectionDetails, amount: number, ) { - // 
Derive the tree authority PDA ('TreeConfig' account for the tree account) - const [treeAuthority] = PublicKey.findProgramAddressSync( - [treeAddress.toBuffer()], - BUBBLEGUM_PROGRAM_ID, - ); - // Derive the Bubblegum signer, used by the Bubblegum program to handle "collection verification" // Only used for `createMintToCollectionV1` instruction const [bubblegumSigner] = PublicKey.findProgramAddressSync( [Buffer.from("collection_cpi", "utf8")], BUBBLEGUM_PROGRAM_ID, ); - - for (let i = 0; i < amount; i++) { - // Compressed NFT Metadata - const compressedNFTMetadata = createNftMetadata(payer.publicKey, i); - - // Create the instruction to "mint" the compressed NFT to the tree - const mintInstruction = createMintToCollectionV1Instruction( - { - payer: payer.publicKey, // The account that will pay for the transaction - merkleTree: treeAddress, // The address of the tree account - treeAuthority, // The authority of the tree account, should be a PDA derived from the tree account address - treeDelegate: payer.publicKey, // The delegate of the tree account, should be the same as the tree creator by default - leafOwner: payer.publicKey, // The owner of the compressed NFT being minted to the tree - leafDelegate: payer.publicKey, // The delegate of the compressed NFT being minted to the tree - collectionAuthority: payer.publicKey, // The authority of the "collection" NFT - collectionAuthorityRecordPda: BUBBLEGUM_PROGRAM_ID, // Must be the Bubblegum program id - collectionMint: collectionDetails.mint, // The mint of the "collection" NFT - collectionMetadata: collectionDetails.metadata, // The metadata of the "collection" NFT - editionAccount: collectionDetails.masterEditionAccount, // The master edition of the "collection" NFT - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - logWrapper: SPL_NOOP_PROGRAM_ID, - bubblegumSigner, - tokenMetadataProgram: TOKEN_METADATA_PROGRAM_ID, - }, - { - metadataArgs: Object.assign(compressedNFTMetadata, { - collection: { key: collectionDetails.mint, verified: false }, - }), - }, - ); - - try { - // Create new transaction and add the instruction - const instruction = new Transaction().add(mintInstruction); - - // Set the fee payer for the transaction - transaction.feePayer = payer.publicKey; - - // Send the transaction - const transactionSignature = await sendAndConfirmTransaction( - connection, - instruction, - [payer], - { commitment: "confirmed", skipPreflight: true }, - ); - - const explorerLink = getExplorerLink("transaction",transactionSignature, "devnet") - - console.log(`Transaction submitted: ${explorerLink}`) - } catch (error) { - console.error("\nFailed to mint compressed NFT:", error); - throw error; - } - } } ``` -This is a great point to test with a small tree. Simply update `main` to call -`getOrCreateCollectionNFT` then `mintCompressedNftToCollection`: +This is a great point to test with a small tree. 
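+Before testing, note that the refactored function body above only derives the
+Bubblegum signer; the loop that actually mints each cNFT still needs to be
+added (and the function will need to be `async`). Here is a minimal sketch of
+that loop using Umi's `mintToCollectionV1`. It assumes the `umi` instance
+configured earlier and the lab's `createNftMetadata` helper (which returns the
+`name`, `uri`, and `sellerFeeBasisPoints` for each index), and converts the
+web3.js public keys with Umi's `publicKey()`:
+
+```typescript
+import { publicKey } from "@metaplex-foundation/umi";
+import { mintToCollectionV1 } from "@metaplex-foundation/mpl-bubblegum";
+
+for (let i = 0; i < amount; i++) {
+  // Metadata for this specific cNFT (name, uri, seller fee, etc.)
+  const compressedNFTMetadata = createNftMetadata(payer.publicKey, i);
+
+  await mintToCollectionV1(umi, {
+    leafOwner: publicKey(payer.publicKey.toBase58()), // owner of the new cNFT
+    merkleTree: publicKey(treeAddress.toBase58()), // the tree created earlier
+    collectionMint: publicKey(collectionDetails.mint.toBase58()),
+    metadata: {
+      name: compressedNFTMetadata.name,
+      uri: compressedNFTMetadata.uri,
+      sellerFeeBasisPoints: compressedNFTMetadata.sellerFeeBasisPoints,
+      collection: {
+        key: publicKey(collectionDetails.mint.toBase58()),
+        verified: false,
+      },
+      creators: [
+        { address: umi.identity.publicKey, verified: false, share: 100 },
+      ],
+    },
+  }).sendAndConfirm(umi);
+
+  console.log(`Minted cNFT ${i + 1} of ${amount}`);
+}
+```
+
+With the Umi client, the remaining accounts (collection metadata, the master
+edition, the Bubblegum signer PDA, and so on) are generally resolved for you,
+so the long account list from the older helper should not be needed.
+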
Simply update `index.ts` to +call `getOrCreateCollectionNFT` then `mintCompressedNftToCollection`: ```typescript -function main() { - const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); - const wallet = await getOrCreateKeypair("Wallet_1"); - await airdropSolIfNeeded(wallet.publicKey); +const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); +const wallet = await getOrCreateKeypair("Wallet1"); +await airdropSolIfNeeded(wallet.publicKey); - const maxDepthSizePair: ValidDepthSizePair = { - maxDepth: 3, - maxBufferSize: 8, - }; +const maxDepthSizePair: ValidDepthSizePair = { + maxDepth: 3, + maxBufferSize: 8, +}; - const canopyDepth = 0; +const canopyDepth = 0; - const treeAddress = await createAndInitializeTree( - connection, - wallet, - maxDepthSizePair, - canopyDepth, - ); +const treeAddress = await createAndInitializeTree( + connection, + wallet, + maxDepthSizePair, + canopyDepth, +); - const collectionNft = await getOrCreateCollectionNFT(connection, wallet); +const collectionNft = await getOrCreateCollectionNFT(connection, wallet); - await mintCompressedNftToCollection( - connection, - wallet, - treeAddress, - collectionNft, - 2 ** maxDepthSizePair.maxDepth, - ); -} +await mintCompressedNftToCollection( + connection, + wallet, + treeAddress, + collectionNft, + 2 ** maxDepthSizePair.maxDepth, +); ``` Again, to run, in your terminal type: `npm run start` @@ -1187,10 +931,10 @@ accurate but is useless in conveying what the information is. Let’s start by declaring a function `logNftDetails` that takes two parameters `treeAddress` and `nftsMinted`. -Since theres no direct identifier of any kind that points -to our cNFT, we need to know the leaf index that was used when -we minted our cNFT. We can then use that to derive the asset ID used by the Read -API and then use the Read API to fetch our cNFT data. +Since theres no direct identifier of any kind that points to our cNFT, we need +to know the leaf index that was used when we minted our cNFT. We can then use +that to derive the asset ID used by the Read API and then use the Read API to +fetch our cNFT data. In our case, we created a non-public tree and minted 8 cNFTs, so we know that the leaf indexes used were 0-7. With this, we can use the `getLeafAssetId` @@ -1235,15 +979,15 @@ function logNftDetails(treeAddress: PublicKey, nftsMinted: number) { } ``` -Helius monitors transaction logs in real time and stores the NFT -metadata that was hashed and stored in the Merkle tree. This enables them to -display that data when requested. +Helius monitors transaction logs in real time and stores the NFT metadata that +was hashed and stored in the Merkle tree. This enables them to display that data +when requested. If we add a call to this function at the end of `main` and re-run your script, -the data we get back in the console is very detailed. It includes all of -the data you’d expect in both the onchain and offchain portion of a traditional -NFT. You can find the cNFT’s attributes, files, ownership and creator -information, and more. +the data we get back in the console is very detailed. It includes all of the +data you’d expect in both the onchain and offchain portion of a traditional NFT. +You can find the cNFT’s attributes, files, ownership and creator information, +and more. ```json { @@ -1344,14 +1088,14 @@ to see what’s available. #### 6. Transfer a cNFT The last thing we’re going to add to our script is a cNFT transfer. Just as with -a standard SPL token transfer, security is important. 
Unlike regular standard SPL -token transfer, however, to build a secure transfer with state compression of -any kind, the program performing the transfer needs the entire asset data. +a standard SPL token transfer, security is important. Unlike regular standard +SPL token transfer, however, to build a secure transfer with state compression +of any kind, the program performing the transfer needs the entire asset data. The program, Bubblegum in this case, needs to be provided with the entire data that was hashed and stored on the corresponding leaf _and_ needs to be given the -“proof path” for the leaf in question. That makes cNFT transfers a bit more complicated -than SPL token transfers. +“proof path” for the leaf in question. That makes cNFT transfers a bit more +complicated than SPL token transfers. Remember, the general steps are: @@ -1372,7 +1116,7 @@ Inside that function, let’s fetch the asset data again then also fetch the ass proof. For good measure, let’s wrap everything in a `try catch`. ```typescript - function transferNft( +function transferNft( connection: Connection, assetId: PublicKey, sender: Keypair, @@ -1490,72 +1234,8 @@ function transferNft( }), }); const assetProof = (await assetProofResponse.json()).result; - - const treePublicKey = new PublicKey(assetData.compression.tree); - - const treeAccount = await ConcurrentMerkleTreeAccount.fromAccountAddress( - connection, - treePublicKey, - ); - - const canopyDepth = treeAccount.getCanopyDepth() || 0; - - const proofPath: AccountMeta[] = assetProof.proof - .map((node: string) => ({ - pubkey: new PublicKey(node), - isSigner: false, - isWritable: false, - })) - .slice(0, assetProof.proof.length - canopyDepth); - - const treeAuthority = treeAccount.getAuthority(); - const leafOwner = new PublicKey(assetData.ownership.owner); - const leafDelegate = assetData.ownership.delegate - ? new PublicKey(assetData.ownership.delegate) - : leafOwner; - - const transferInstruction = createTransferInstruction( - { - merkleTree: treePublicKey, - treeAuthority, - leafOwner, - leafDelegate, - newLeafOwner: receiver, - logWrapper: SPL_NOOP_PROGRAM_ID, - compressionProgram: SPL_ACCOUNT_COMPRESSION_PROGRAM_ID, - anchorRemainingAccounts: proofPath, - }, - { - root: [...new PublicKey(assetProof.root.trim()).toBytes()], - dataHash: [ - ...new PublicKey(assetData.compression.data_hash.trim()).toBytes(), - ], - creatorHash: [ - ...new PublicKey(assetData.compression.creator_hash.trim()).toBytes(), - ], - nonce: assetData.compression.leaf_id, - index: assetData.compression.leaf_id, - }, - ); - - const transaction = new Transaction().add(transferInstruction); - transaction.feePayer = sender.publicKey; - const txSignature = await sendAndConfirmTransaction( - connection, - transaction, - [sender], - { - commitment: "confirmed", - skipPreflight: true, - }, - ); - const explorerLink = getExplorerLink("transaction",transactionId, "devnet") - - console.log(`Transaction submitted: ${explorerLink}`) - } catch (err: any) { - console.error("\nFailed to transfer nft:", err); - throw err; } + } ``` @@ -1566,47 +1246,45 @@ entire collection using our function `logNftDetails`. You’ll note that the NFT at index zero will now belong to our new wallet in the `ownership` field. 
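If it helps to see the "proof path" idea on its own, here is a minimal sketch of fetching a proof from a DAS-compatible RPC and trimming it to account for the canopy. The `getAssetProof` method is part of the DAS API that Helius implements; the `rpcUrl` parameter and the `fetchProofPath` helper name are illustrative assumptions, not part of the lesson's final code.

```typescript
import { AccountMeta, PublicKey } from "@solana/web3.js";

// Fetch the Merkle proof for a cNFT and convert it into the list of read-only
// accounts (the "proof path") that the Bubblegum transfer instruction expects.
async function fetchProofPath(
  rpcUrl: string,
  assetId: PublicKey,
  canopyDepth: number,
): Promise<AccountMeta[]> {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: "get-asset-proof",
      method: "getAssetProof",
      params: { id: assetId.toBase58() },
    }),
  });
  const assetProof = (await response.json()).result;

  // Proof nodes already cached onchain in the canopy sit at the end of the
  // list and can be dropped, shrinking the transaction size.
  return assetProof.proof
    .slice(0, assetProof.proof.length - canopyDepth)
    .map((node: string) => ({
      pubkey: new PublicKey(node),
      isSigner: false,
      isWritable: false,
    }));
}
```

With that context, here is the full updated script that ties minting, reading, and transferring together: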
```typescript -function main() { - const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); - const wallet = await getOrCreateKeypair("Wallet_1"); - await airdropSolIfNeeded(wallet.publicKey); +const connection = new Connection(clusterApiUrl("devnet"), "confirmed"); +const wallet = await getOrCreateKeypair("Wallet1"); +await airdropSolIfNeeded(wallet.publicKey); - const maxDepthSizePair: ValidDepthSizePair = { - maxDepth: 3, - maxBufferSize: 8, - }; +const maxDepthSizePair: ValidDepthSizePair = { + maxDepth: 3, + maxBufferSize: 8, +}; - const canopyDepth = 0; +const canopyDepth = 0; - const treeAddress = await createAndInitializeTree( - connection, - wallet, - maxDepthSizePair, - canopyDepth, - ); +const treeAddress = await createAndInitializeTree( + connection, + wallet, + maxDepthSizePair, + canopyDepth, +); - const collectionNft = await getOrCreateCollectionNFT(connection, wallet); +const collectionNft = await getOrCreateCollectionNFT(connection, wallet); - await mintCompressedNftToCollection( - connection, - wallet, - treeAddress, - collectionNft, - 2 ** maxDepthSizePair.maxDepth, - ); +await mintCompressedNftToCollection( + connection, + wallet, + treeAddress, + collectionNft, + 2 ** maxDepthSizePair.maxDepth, +); - const recieverWallet = await getOrCreateKeypair("Wallet_2"); - const assetId = await getLeafAssetId(treeAddress, new BN(0)); - await airdropSolIfNeeded(recieverWallet.publicKey); +const recieverWallet = await getOrCreateKeypair("Wallet2"); +const assetId = await getLeafAssetId(treeAddress, new BN(0)); +await airdropSolIfNeeded(recieverWallet.publicKey); - console.log( - `Transfering ${assetId.toString()} from ${wallet.publicKey.toString()} to ${recieverWallet.publicKey.toString()}`, - ); +console.log( + `Transfering ${assetId.toString()} from ${wallet.publicKey.toString()} to ${recieverWallet.publicKey.toString()}`, +); - await transferNft(connection, assetId, wallet, recieverWallet.publicKey); +await transferNft(connection, assetId, wallet, recieverWallet.publicKey); - await logNftDetails(treeAddress, 8); -} +await logNftDetails(treeAddress, 8); ``` Go ahead and run your script. The whole thing should execute without failing, @@ -1627,8 +1305,8 @@ take a look at the solution code on the `solution` branch of the ### Challenge -It’s your turn to apply these concepts on your own! We’re not going to -gove you detailed instructions at this point, but here are some ideas: +It’s your turn to apply these concepts on your own! We’re not going to give you +detailed instructions at this point, but here are some ideas: 1. Create your own production cNFT collection 2. Build a UI for this lesson’s lab that will let you mint a cNFT and display it