Overview
The current approach to miner mempool processing is flawed: miners consider the raw fee paid for each transaction, rather than the fee rate of the transaction. This creates blocks which are not filled in an optimal or fair way.
In theory, Clarity smart contracts enable miners to come up with a static prediction of runtime costs for contracts (technically, a static worst-case analysis). With these predictions, miners would be able to pack a block so that the maximum total fee is assessed. This would create a fairer fee market: transactions that occupy more of the block limit would need to pay more. However, implementing this static analysis is a large undertaking and requires a fair amount of research: Clarity's finite-ness (no unbounded loops or recursion) makes such an analysis possible in principle, but how exactly to build the analyzer is still an open problem. Because blocking other progress on such a large undertaking is less than ideal, in this proposal I try to outline a path forward that allows incremental progress on estimation while unblocking the other work required for the rest of the ecosystem.
The basic idea is to implement a code-blind estimator for the cost of contract-call transactions (and other kinds of transactions). This estimator would observe the costs computed during block validation and block building, and store the results, so that a transaction like (contract-call? .contract-name func <args>) could be looked up and associated with an estimated runtime cost, which the miner can then weigh against the transaction fee when deciding whether or not to try to include it in the block.
Note: This proposal is not about alterations to the block limit or increasing the accuracy of runtime cost assessment. Those changes are consensus-breaking, and are in consideration for Stacks 2.1. Instead, this proposal is about a non-consensus-breaking project to implement some helpful transaction fee market tools for miners and transaction generators (libraries, wallets, etc.).
Work Components
Given this approach, several components are needed to implement an end-to-end system for better fee estimation and miner block assembly.
Cost Estimator: this component would be updated with (StacksTransaction, ExecutionCost) pairs, and should answer queries of StacksTransaction with an estimated ExecutionCost (see the interface sketch after this list).
Block assembly changes: the block builder (and microblock builder) would need to be changed to use this estimated ExecutionCost during block assembly. Approaches here can vary in sophistication, from a heuristic fee rate to a 5-dimensional knapsack algorithm.
Fee estimation endpoint changes: the current fee estimation endpoint can be updated to provide a best estimate for the fee rate of a transfer transaction. However, clients who are doing contract-calls would need to be updated to obtain an estimate of the ExecutionCost for their transaction. A new endpoint could provide both the fee rate estimate and execution cost estimate.
Mempool admission changes: nodes can use the estimation scheme to reject transactions whose fees are far too low for their estimated cost.
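To make the first component concrete, here is a minimal sketch of what such an estimator interface might look like. The trait name, method signatures, and placeholder types below are illustrative assumptions, not the node's actual API:

```rust
// Placeholder stand-ins for the node's existing structs (fields elided here;
// ExecutionCost is the 5-dimensional runtime cost, TransactionPayload is the
// body of a contract-call, contract-publish, or transfer transaction).
pub struct TransactionPayload;
pub struct ExecutionCost;

/// Hypothetical code-blind estimator interface: it only ever sees payloads
/// and observed costs, never the contract source code itself.
pub trait CostEstimator {
    /// Record an observed (transaction, cost) pair, e.g. while validating a
    /// block or after the miner includes the transaction in a block.
    fn notify_cost(&mut self, tx: &TransactionPayload, observed: &ExecutionCost);

    /// Answer a query for a not-yet-executed transaction, if the estimator
    /// has seen enough similar transactions to produce an estimate.
    fn estimate_cost(&self, tx: &TransactionPayload) -> Option<ExecutionCost>;
}
```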
Note that these components do not need to be released at the same time. However, if not released at the same time, they should be released in the order above: clients should not update their fee estimation until block assembly changes have been released. Otherwise, they’re just wasting transaction fees, which could be upsetting to users.
Cost Estimator
The cost estimator component is the most open-ended part of this project, but it is also the most separable -- the estimator does not depend on much else in the stacks-node, just the ExecutionCost struct, the TransactionPayload struct, and (perhaps) the result value of the transaction (an estimator should probably ignore transactions that return an error -- even runtime errors -- because those code paths can have very different costs from the usual case).
For many transactions, the arguments to the function are fixed-size, which means the worst-case cost of the transaction is fixed, so the estimator shouldn't need to consider the input arguments when estimating the execution cost. For transactions with list, buffer, or string arguments, the ExecutionCost may depend on the input size, possibly non-linearly, so for these kinds of transactions estimators will probably need to take input size into account.
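One simple way an estimator could account for this (a sketch under the assumption that estimates are cached per contract function) is to bucket calls by total argument size, so that small and large inputs do not share one estimate:

```rust
/// Hypothetical cache key for contract-call estimates: the contract, the
/// function, and a coarse bucket of the serialized argument length, so calls
/// whose costs scale with input size are tracked separately.
#[derive(Hash, PartialEq, Eq, Debug)]
struct EstimateKey {
    contract_id: String,
    function_name: String,
    /// log2 bucket of the total argument length in bytes.
    arg_size_bucket: u32,
}

fn estimate_key(contract_id: &str, function_name: &str, arg_bytes: usize) -> EstimateKey {
    EstimateKey {
        contract_id: contract_id.to_string(),
        function_name: function_name.to_string(),
        // Power-of-two buckets: 1, 2-3, 4-7, 8-15, ... bytes of arguments.
        arg_size_bucket: (arg_bytes.max(1) as f64).log2().floor() as u32,
    }
}
```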
Integrating the cost estimator into the stacks-node will have two primary touch points: block assembly and block validation. During block validation, the cost estimator will receive new information about transactions and their observed runtimes. Similarly, for block assembly, the cost estimator will not only be consulted for transaction inclusion, but also updated with new information about transactions and their observed runtimes when the miner includes a transaction.
Block Assembly
The current BlockBuilder and microblock assembly structures will need to be updated to use the estimated ExecutionCost during block assembly. Depending on the approach taken, this could be either a very large or relatively minor change to the block assembly logic. The least invasive approaches involve assigning a single scalar score to each transaction indicating the fee rate of the transaction, and continuing to assemble blocks using an otherwise similar logic to today (just sorting by the scalar score when considering transactions for inclusion).
Approaches to determining a scalar score for a transaction can vary in sophistication, but a naive strategy would be something like the absolute fee associated with the transaction divided by the transaction “volume”. The transaction volume would be equal to:
volume = (runtime / runtime_limit) * (read_count / read_count_limit) * (read_length / read_length_limit) * (write_count / write_count_limit) * (write_length / write_length_limit)
That is, the transaction “volume” is the product of each cost dimension, scaled by that dimension’s limit. An alternative would be to just take the maximum scaled dimension (this is a more pessimistic block-packing strategy -- it punishes transactions that use most of the block’s limit in any single dimension).
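As a sketch of these two scoring strategies (the ExecutionCost fields are reproduced here for self-containment, and the functions are illustrative, not the miner's actual logic):

```rust
/// Mirror of the node's 5-dimensional execution cost, reproduced for this sketch.
#[derive(Clone, Copy)]
pub struct ExecutionCost {
    pub runtime: u64,
    pub read_count: u64,
    pub read_length: u64,
    pub write_count: u64,
    pub write_length: u64,
}

/// Fraction of the block limit consumed in each dimension, multiplied together.
/// Each dimension is floored at 1 unit so a zero dimension does not zero out
/// the whole product.
fn tx_volume(cost: &ExecutionCost, limit: &ExecutionCost) -> f64 {
    let frac = |used: u64, cap: u64| (used.max(1) as f64) / (cap as f64);
    frac(cost.runtime, limit.runtime)
        * frac(cost.read_count, limit.read_count)
        * frac(cost.read_length, limit.read_length)
        * frac(cost.write_count, limit.write_count)
        * frac(cost.write_length, limit.write_length)
}

/// More pessimistic alternative: score by the single most-consumed dimension.
fn tx_max_dimension(cost: &ExecutionCost, limit: &ExecutionCost) -> f64 {
    let frac = |used: u64, cap: u64| (used.max(1) as f64) / (cap as f64);
    [
        frac(cost.runtime, limit.runtime),
        frac(cost.read_count, limit.read_count),
        frac(cost.read_length, limit.read_length),
        frac(cost.write_count, limit.write_count),
        frac(cost.write_length, limit.write_length),
    ]
    .into_iter()
    .fold(0.0_f64, f64::max)
}

/// Scalar score used to sort candidate transactions: fee divided by volume.
fn scalar_score(fee_ustx: u64, cost: &ExecutionCost, limit: &ExecutionCost) -> f64 {
    fee_ustx as f64 / tx_volume(cost, limit)
}
```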
Fee Estimation Endpoint
The fee estimation endpoint needs to be updated to provide a true fee rate estimate for transfer operations. Fee rate estimation differs from ExecutionCost estimation and the kind of estimation done during block assembly in that fee rate estimation looks at the observed fee rates in the last block produced (and potentially the estimated fee rates in the mempool) and produces an estimated fee rate for inclusion in the next block.
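For instance, a deliberately naive fee rate estimator (a sketch; the percentile choices are arbitrary) might just report low/medium/high percentiles of the fee rates observed in the last block:

```rust
/// Naive sketch of a fee rate estimator: report percentiles of the fee rates
/// (fee divided by transaction "volume") observed in the most recently
/// produced block. A real estimator would likely smooth over several blocks
/// and account for mempool pressure.
fn estimate_fee_rates(mut observed_rates: Vec<f64>) -> Option<(f64, f64, f64)> {
    if observed_rates.is_empty() {
        return None;
    }
    observed_rates.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let pick = |p: f64| observed_rates[((observed_rates.len() - 1) as f64 * p) as usize];
    // (low, medium, high) fee rate suggestions for the next block.
    Some((pick(0.05), pick(0.5), pick(0.95)))
}
```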
Once a fee rate estimate is obtained, communicating this to clients also requires some changes:
Strategy one: the current fee rate endpoint is really only suitable for transfer transactions. It could be updated to provide a genuine estimate for transfers, and a somewhat better estimate for other transactions than it currently does, but it should probably be marked deprecated.
Strategy two: a new endpoint could accept a partial transaction and use it to return both the estimated execution cost and the estimated fee for the transaction.
Strategy two would require updating dependent libraries: stacks.js first, and then any integrators would need to update their scripts, integrations, etc. Wallets like the web wallet and desktop wallet would also need to be updated (though they may just need to update their stacks.js version).
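For illustration, the new endpoint's request and response might look something like the following; the field names and shapes here are hypothetical assumptions, not a finalized API:

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical request body for the new estimation endpoint: the client
/// submits an unsigned/partial transaction payload and gets back estimates.
#[derive(Serialize, Deserialize)]
struct FeeEstimateRequest {
    /// Hex-encoded serialized transaction payload (no signature required).
    transaction_payload: String,
}

/// Hypothetical response: the estimated 5-dimensional execution cost plus a
/// few fee options (in microSTX) at different confidence levels.
#[derive(Serialize, Deserialize)]
struct FeeEstimateResponse {
    estimated_cost: EstimatedCost,
    low_fee: u64,
    medium_fee: u64,
    high_fee: u64,
}

#[derive(Serialize, Deserialize)]
struct EstimatedCost {
    runtime: u64,
    read_count: u64,
    read_length: u64,
    write_count: u64,
    write_length: u64,
}
```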
Mempool Admission
The final element of this project would be to alleviate some of the pressure on the network and mempool by incorporating this estimation into mempool admission. Given a fee estimator and execution cost estimation, this component would be fairly simple -- if a transaction pushed to the mempool pays a fee rate far below the expected rate, the mempool admitter would simply reject it. This component is lower priority than most of the rest of the project because it has lower impact, and it would also need to be implemented only after downstream clients of the fee estimation endpoint have had a sufficiently long period of time to update.
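A minimal sketch of that admission check, assuming the node can estimate both the transaction's scalar "volume" and the going fee rate (the names and the configuration knob are illustrative):

```rust
/// Illustrative mempool admission check: reject a transaction whose fee rate
/// is far below the estimated going rate. `tx_volume` is the scalar described
/// in the block assembly section; `minimum_fraction` is a hypothetical node
/// configuration knob (e.g. 0.1 to reject anything paying under 10% of the
/// estimated rate).
fn accept_into_mempool(
    tx_fee_ustx: u64,
    tx_volume: f64,
    estimated_fee_rate: f64,
    minimum_fraction: f64,
) -> bool {
    let tx_fee_rate = tx_fee_ustx as f64 / tx_volume;
    tx_fee_rate >= estimated_fee_rate * minimum_fraction
}
```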
Testing
There are two ways to test the cost estimator and block assembly changes. First, integration testing will need to construct a test mempool with multiple contract instantiations, with an expected ordering.
Second, a mock miner should be instantiated with these changes. The mock miner should be updated to output the block it would have assembled -- basically the txids it would have included in each block. These blocks can then be measured to ensure they achieve higher fee rates than blocks assembled by the current mining logic. Importantly, we’d want to ensure that the assembly isn’t discarding any transactions that should still be considered (all mempool transactions should be considered as long as the mempool isn’t “full”).
Task Breakdown and Time Estimation
FeeEstimator interface and default naive implementation #2835: medium-large task. Estimated: 1-2 weeks