
A Starknet transactions batcher

Dec 31, 2024, 05:54 PM

Abstract

This article presents the transactions batcher used in Metacube to send NFTs earned by players instantly. It explains the batcher's scalable actor-based architecture and provides a detailed implementation in Go.

All the code snippets are available in the associated GitHub repository.

Architecture

(Architecture diagram of the Batcher: the Builder and Sender actors)

The Batcher is composed of two main actors:

  • The Builder receives the transactions, batches them into a single multicall transaction, and sends it to the Sender actor.
  • The Sender finalizes the transaction with appropriate fields (nonce, max fee, etc.), signs it, sends it to the Starknet network, and monitors its status.

This actor separation makes the batcher scalable and efficient: the Builder prepares the next batch while the Sender submits the current one, maintaining a continuous flow of transactions.
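As a toy illustration of this pipeline, the following self-contained sketch batches payloads produced by one goroutine and drains the batches in another. The names and types are invented for the example and are far simpler than the real Batcher's:

```go
package main

import "fmt"

// batch groups individual payloads into slices of at most maxSize,
// mimicking how the Builder hands full batches to the Sender.
func batch(in <-chan string, maxSize int) <-chan []string {
	out := make(chan []string)
	go func() {
		defer close(out)
		buf := make([]string, 0, maxSize)
		for payload := range in {
			buf = append(buf, payload)
			if len(buf) >= maxSize {
				out <- buf
				buf = make([]string, 0, maxSize)
			}
		}
		if len(buf) > 0 {
			out <- buf // flush the last partial batch
		}
	}()
	return out
}

func main() {
	in := make(chan string)
	go func() {
		for i := 0; i < 5; i++ {
			in <- fmt.Sprintf("tx-%d", i)
		}
		close(in)
	}()

	// The "Sender" side simply drains full batches here.
	for b := range batch(in, 2) {
		fmt.Println(b)
	}
}
```

Because the two sides run concurrently, the producer never waits for a whole batch to be consumed before queuing the next transaction, which is exactly the property the Builder/Sender split provides.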

Implementation

The following implementation is specific to Go, but the concepts can easily be adapted to other languages, as the functionalities remain the same.

Moreover, note that this implementation is specific to sending NFTs from the same contract. However, a more generic approach is mentioned later in the article.

Lastly, the code is based on the starknet.go library developed by Nethermind.

Batcher

Let's start with the Batcher itself:

type Batcher struct {
    accnt           *account.Account
    contractAddress *felt.Felt
    maxSize         int
    inChan          <-chan []string
    failChan        chan<- []string
}

The account (accnt) holds the NFTs and is used to sign the transactions that transfer them. These NFTs all belong to the same contract, hence the contractAddress field. The maxSize field is the maximum size of a batch, and inChan is the channel through which transactions are sent to the Batcher. The failChan is used to send back transactions that could not be sent.

Note that, in this implementation, the transaction data ([]string) used throughout is a two-element slice: the recipient address and the NFT ID.

The Batcher runs both the Builder and the Sender actors concurrently:

type TxnDataPair struct {
    Txn  rpc.BroadcastInvokev1Txn
    Data [][]string
}

func (b *Batcher) Run() {
    txnDataPairChan := make(chan TxnDataPair)

    go b.runBuildActor(txnDataPairChan)
    go b.runSendActor(txnDataPairChan)
}

The txnDataPairChan channel carries transaction data pairs from the Builder to the Sender. Each pair comprises the batch transaction and the raw data of every transaction embedded in it. Keeping the raw data alongside the batch allows failed transactions to be sent back to the entity that instantiated the Batcher.
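Here is a toy illustration (with invented, simplified types) of why the raw data travels with the batch: if a batch fails, each transaction's data can be pushed back individually on a failure channel so the caller can retry or log the transfers.

```go
package main

import "fmt"

// txnDataPair mirrors the article's TxnDataPair with toy types:
// a batch plus the raw data of each transaction it contains.
type txnDataPair struct {
	Txn  string     // stand-in for the batch transaction
	Data [][]string // raw data of each transaction in the batch
}

// reportFailure sends each transaction's data back on failChan so the
// caller can retry or log the individual transfers.
func reportFailure(pair txnDataPair, failChan chan<- []string) {
	for _, d := range pair.Data {
		failChan <- d
	}
}

func main() {
	failChan := make(chan []string, 2)
	pair := txnDataPair{
		Txn:  "batched-multicall",
		Data: [][]string{{"0xRecipientA", "1"}, {"0xRecipientB", "2"}},
	}
	reportFailure(pair, failChan)
	close(failChan)
	for d := range failChan {
		fmt.Println("failed transfer:", d)
	}
}
```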

Builder

Let's analyze the Builder actor. Note that the code is simplified for better readability (full code):

// This function builds a function call from the transaction data.
func (b *Batcher) buildFunctionCall(data []string) (*rpc.FunctionCall, error) {
    // Parse the recipient address
    toAddressInFelt, err := utils.HexToFelt(data[0])
    if err != nil {
        ...
    }

    // Parse the NFT ID
    nftID, err := strconv.Atoi(data[1])
    if err != nil {
        ...
    }

    // The entry point is a standard ERC721 function
    // https://docs.openzeppelin.com/contracts-cairo/0.20.0/erc721
    return &rpc.FunctionCall{
        ContractAddress: b.contractAddress,
        EntryPointSelector: utils.GetSelectorFromNameFelt(
            "safe_transfer_from",
        ),
        Calldata: []*felt.Felt{
            b.accnt.AccountAddress,                  // from
            toAddressInFelt,                         // to
            new(felt.Felt).SetUint64(uint64(nftID)), // NFT ID
            new(felt.Felt).SetUint64(0),             // data -> None
            new(felt.Felt).SetUint64(0),             // extra data -> None
        },
    }, nil
}

// This function builds the batch transaction from the function calls.
func (b *Batcher) buildBatchTransaction(functionCalls []rpc.FunctionCall) (rpc.BroadcastInvokev1Txn, error) {
    // Format the calldata (i.e., the function calls)
    calldata, err := b.accnt.FmtCalldata(functionCalls)
    if err != nil {
        ...
    }

    return rpc.BroadcastInvokev1Txn{
        InvokeTxnV1: rpc.InvokeTxnV1{
            MaxFee:        new(felt.Felt).SetUint64(MAX_FEE),
            Version:       rpc.TransactionV1,
            Nonce:         new(felt.Felt).SetUint64(0), // Will be set by the Sender actor
            Type:          rpc.TransactionType_Invoke,
            SenderAddress: b.accnt.AccountAddress,
            Calldata:      calldata,
        },
    }, nil
}

// The Builder actor's event loop
func (b *Batcher) runBuildActor(txnDataPairChan chan<- TxnDataPair) {
    size := 0
    functionCalls := make([]rpc.FunctionCall, 0, b.maxSize)
    currentData := make([][]string, 0, b.maxSize)

    for {
        // Boolean that triggers the batch building
        trigger := false

        select {
        // Receive new transaction data
        case data, ok := <-b.inChan:
            if !ok {
                ...
            }

            functionCall, err := b.buildFunctionCall(data)
            if err != nil {
                ...
            }

            functionCalls = append(functionCalls, *functionCall)
            size++
            currentData = append(currentData, data)

            if size >= b.maxSize {
                // The batch is full; trigger the building
                trigger = true
            }

        // A smaller batch should not wait indefinitely to be full,
        // so a timeout triggers the building even if the batch is not full
        case <-time.After(WAITING_TIME):
            if size > 0 {
                trigger = true
            }
        }

        if trigger {
            builtTxn, err := b.buildBatchTransaction(functionCalls)
            if err != nil {
                ...
            } else {
                // Send the batch transaction to the Sender
                txnDataPairChan <- TxnDataPair{
                    Txn:  builtTxn,
                    Data: currentData,
                }
            }

            // Reset the batch state
            size = 0
            functionCalls = make([]rpc.FunctionCall, 0, b.maxSize)
            currentData = make([][]string, 0, b.maxSize)
        }
    }
}

The runBuildActor function is the Builder actor's event loop. It waits for transactions to be sent to the Batcher and builds a batch transaction when the batch is full or a timeout is reached. The batch transaction is then sent to the Sender actor.

Sender

Let's now analyze the Sender actor (its full code is in the repository):

The runSendActor function is the Sender actor's event loop. It waits for the Builder to send batch transactions, signs them, sends them to the Starknet network, and monitors their status.

A note on fee estimation: one could estimate the fee cost of the batch transaction after signing it and before sending it.

This might be useful to ensure the fee is not too high before sending the transaction. If the estimated fee is higher than expected, the max fee field of the transaction might need to be re-adjusted. But note that whenever the transaction is modified, it must be signed again!

However, fee estimation can run into issues when transaction throughput is high. Right after a transaction is accepted, there is a short delay before the account's nonce is updated on the node, so estimating the fee for the next transaction during that window can fail because the node still sees the previous nonce. If you still want to estimate fees under high throughput, you may need to add a short sleep between transactions to avoid this problem.
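Putting the Sender's responsibilities together, its event loop might look roughly like the sketch below. The starknetClient interface and all of its methods are hypothetical stand-ins for the actual starknet.go calls used in the repository's full code; the sketch only shows the control flow: fetch the nonce, sign and send, wait for acceptance, and report failures.

```go
package main

import (
	"errors"
	"fmt"
)

// starknetClient is a hypothetical stand-in for the starknet.go
// provider/account calls the real Sender uses.
type starknetClient interface {
	Nonce() (uint64, error)
	SignAndSend(nonce uint64, txn string) (txHash string, err error)
	WaitForAccepted(txHash string) error
}

// runSendActor sketches the Sender's event loop: set the nonce, sign
// and send each batch, wait for acceptance, and report failures.
func runSendActor(client starknetClient, batches <-chan string, failed chan<- string) {
	for txn := range batches {
		nonce, err := client.Nonce()
		if err != nil {
			failed <- txn
			continue
		}
		// Optional fee-estimation step: estimate with the signed
		// transaction, compare against the max fee, and sign again
		// if the transaction has to be modified.
		hash, err := client.SignAndSend(nonce, txn)
		if err != nil {
			failed <- txn
			continue
		}
		if err := client.WaitForAccepted(hash); err != nil {
			failed <- txn
		}
	}
	close(failed)
}

// fakeClient simulates a network that rejects one specific batch.
type fakeClient struct{ nonce uint64 }

func (f *fakeClient) Nonce() (uint64, error) { f.nonce++; return f.nonce, nil }

func (f *fakeClient) SignAndSend(nonce uint64, txn string) (string, error) {
	if txn == "bad-batch" {
		return "", errors.New("rejected")
	}
	return fmt.Sprintf("hash-%d", nonce), nil
}

func (f *fakeClient) WaitForAccepted(string) error { return nil }

func main() {
	batches := make(chan string, 2)
	batches <- "good-batch"
	batches <- "bad-batch"
	close(batches)

	failed := make(chan string, 2)
	runSendActor(&fakeClient{}, batches, failed)

	for txn := range failed {
		fmt.Println("failed:", txn)
	}
}
```

The real Sender also has to map a failed batch back to its per-transaction data (the TxnDataPair from earlier) before reporting it on failChan; string payloads are used here only to keep the sketch short.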

Towards a generic batcher

The batcher presented is specific to sending NFTs from the same contract. However, the architecture can easily be adapted to send any type of transaction.

First, the transaction data sent to the Batcher must be more generic and, therefore, contain more information. They must contain the contract address, the entry point selector, and the call data. The buildFunctionCall function must then be adapted to parse this information.
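A generic payload could then look something like the sketch below; the shape is hypothetical and not taken from the repository:

```go
package main

import "fmt"

// genericCallData sketches a contract-agnostic payload: enough
// information to build any function call, not just an NFT transfer.
type genericCallData struct {
	ContractAddress string   // target contract
	EntryPoint      string   // e.g. "safe_transfer_from" or "transfer"
	Calldata        []string // raw calldata, already ordered
}

// describe renders the call for logging; a real buildFunctionCall
// would turn this into an rpc.FunctionCall instead.
func describe(c genericCallData) string {
	return fmt.Sprintf("%s.%s(%v)", c.ContractAddress, c.EntryPoint, c.Calldata)
}

func main() {
	call := genericCallData{
		ContractAddress: "0xabc",
		EntryPoint:      "transfer",
		Calldata:        []string{"0xrecipient", "100", "0"},
	}
	fmt.Println(describe(call))
}
```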

One could also go one step further by making the sender account generic. This would require more refactoring, as the transactions must be batched per sender account. However, it is feasible and would allow for a more versatile batcher.

However, remember that premature optimization is the root of all evil. Therefore, if you just need to send NFTs or a specific token such as ETH or STRK, the batcher presented is more than enough.

CLI tool

The repository code can be used as a CLI tool to send a bunch of NFTs by batch. The tool is easy to use, and you should be able to adapt it to your needs after reading this article. Please refer to the README for more information.

Conclusion

I hope that this article helped you to better understand how Metacube sends NFTs to its players. The batcher is a key infrastructure component, and we are happy to share it with the community. If you have any questions or feedback, feel free to comment or reach out to me. Thank you for reading!


