Fee Markets for Parallel Transaction Execution #1816
aaronbuchwald started this conversation in Chat · 1 comment
- One simple idea is to force the builder to split the block into N segments that are "conflict free" and can be run in parallel, presumably to reduce the total execution time of blocks (on average). This means some blocks end up emptier when most of the transactions conflict. This approach only works when state keys are known up front.
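A minimal sketch of one way to read this, assuming "conflict free" means no two transactions within a segment share a declared state key (so each segment's transactions can run in parallel and the segments execute back to back); the Tx type and partition helper below are hypothetical, not part of the hypersdk API:

```go
package main

import "fmt"

// Tx is a hypothetical transaction that declares the state keys it touches.
// Any shared key is treated as a conflict here (read-read overlap would
// actually be safe, but that is ignored to keep the sketch short).
type Tx struct {
	ID   int
	Keys []string
}

// partition greedily places each tx into the first of n segments whose key
// set it does not overlap. Transactions within a segment are pairwise
// conflict-free and can execute in parallel; the segments execute one after
// another. A tx that conflicts with every segment is skipped, which is why
// blocks end up emptier when most transactions conflict.
func partition(txs []Tx, n int) (segments [][]Tx, skipped []Tx) {
	segments = make([][]Tx, n)
	used := make([]map[string]struct{}, n)
	for i := range used {
		used[i] = make(map[string]struct{})
	}
	for _, tx := range txs {
		placed := false
		for i := 0; i < n && !placed; i++ {
			if overlaps(used[i], tx.Keys) {
				continue
			}
			for _, k := range tx.Keys {
				used[i][k] = struct{}{}
			}
			segments[i] = append(segments[i], tx)
			placed = true
		}
		if !placed {
			skipped = append(skipped, tx)
		}
	}
	return segments, skipped
}

func overlaps(set map[string]struct{}, keys []string) bool {
	for _, k := range keys {
		if _, ok := set[k]; ok {
			return true
		}
	}
	return false
}

func main() {
	// Three transfers that all touch "alice" conflict with each other;
	// with 2 segments, one of them is skipped.
	txs := []Tx{
		{ID: 1, Keys: []string{"alice", "bob"}},
		{ID: 2, Keys: []string{"alice", "carol"}},
		{ID: 3, Keys: []string{"alice", "dave"}},
	}
	segs, skipped := partition(txs, 2)
	fmt.Println("segments:", segs, "skipped:", skipped)
}
```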
Parallel Transaction Execution
Introducing parallel transaction execution while maintaining an upper bound on worst-case performance adds some complexity. If we remove every bottleneck other than transaction execution, we'll bottleneck on single-core performance unless we can introduce some level of parallelism.
Single Core Performance Limits
For a simple native transfer action benchmark, we can see that it takes ~1µs per transfer.
A naive EVM transfer action benchmark is about 30x slower (note: there's likely room to optimize the shim layer here).
This implies an upper bound of ~1M TPS (native action) or ~30k TPS (general purpose/EVM) for single-core execution (on my laptop).
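For context, the shape of a measurement like this could be a standard Go micro-benchmark along the following lines (the in-memory transfer below is a stand-in, not the actual native transfer action or EVM shim being benchmarked): at ~1µs/op a single core tops out near 1M transfers/s, and an operation 30x slower lands near ~33k/s.

```go
// transfer_bench_test.go
package transfer

import "testing"

// balances is a toy in-memory state standing in for real key-value access;
// the numbers quoted above come from benchmarking the actual actions.
var balances = map[string]uint64{"alice": 1_000_000_000, "bob": 0}

// transfer debits `from` and credits `to`, failing on insufficient balance.
func transfer(from, to string, amount uint64) bool {
	if balances[from] < amount {
		return false
	}
	balances[from] -= amount
	balances[to] += amount
	return true
}

// Run with: go test -bench=Transfer
// At ~1µs/op a single core caps out around 1,000,000 transfers/s; an action
// that is 30x slower (~30µs/op) caps out around ~33,000/s.
func BenchmarkTransfer(b *testing.B) {
	for i := 0; i < b.N; i++ {
		transfer("alice", "bob", 1)
	}
}
```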
Naive Solution
A naive solution, assuming there are no other bottlenecks, is to simply allocate N cores and scale our throughput limit accordingly. But this fails rather miserably if we look at worst-case performance.
To preserve serializability (produce a result equivalent to executing the transactions serially), transactions touching the same state must fall back to executing serially, i.e. if tx1 writes a value that tx2 reads, then tx2 has to wait for tx1 to finish writing that value. In other words, our worst-case execution time has not improved at all by introducing parallel execution.
The naive solution improves performance in the average case, when the workload has some degree of parallelism, but does nothing to improve our worst-case performance.
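To make the worst case concrete, here is a small sketch (again with a hypothetical Tx type) that computes the longest dependency chain implied by declared state keys; when every transaction touches the same key, the chain length equals the block size, so extra cores buy nothing:

```go
package main

import "fmt"

// Tx is a hypothetical transaction with declared state keys. Any shared key
// between two transactions is conservatively treated as a dependency.
type Tx struct {
	ID   int
	Keys []string
}

// chainDepth returns the length of the longest dependency chain in block
// order: the number of sequential steps execution needs even with unlimited
// cores. depth == len(txs) means fully serial, depth == 1 means fully parallel.
func chainDepth(txs []Tx) int {
	deepestAtKey := map[string]int{} // longest chain ending in a tx touching this key
	longest := 0
	for _, tx := range txs {
		depth := 1
		for _, k := range tx.Keys {
			if d := deepestAtKey[k]; d+1 > depth {
				depth = d + 1
			}
		}
		for _, k := range tx.Keys {
			if depth > deepestAtKey[k] {
				deepestAtKey[k] = depth
			}
		}
		if depth > longest {
			longest = depth
		}
	}
	return longest
}

func main() {
	// Every tx touches "hot": chain depth equals block size, so the block
	// executes serially no matter how many cores are available.
	hot := []Tx{{1, []string{"hot"}}, {2, []string{"hot"}}, {3, []string{"hot"}}}
	// Disjoint keys: depth 1, all three can run at once.
	cold := []Tx{{1, []string{"a"}}, {2, []string{"b"}}, {3, []string{"c"}}}
	fmt.Println(chainDepth(hot), chainDepth(cold)) // prints: 3 1
}
```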
Solution: Scheduler / Fee Market
To leverage parallel transaction execution, we need to introduce a scheduler and/or fee market that supports parallel execution and limits the maximum depth of dependent computation within a block. Our ideal solution should maintain an upper bound on worst case execution latency, while leveraging as much parallelism as the workload allows.
The simplest solution is probably to leave scheduling to the block builder and require that the schedule maintain an upper bound on the expected computation time given a set number of cores.
Transactions declare their exact CPU units (which could be expected microseconds of computation), and the block builder must produce a schedule that limits the expected execution time to 100ms.
When peers verify the block, they can check the schedule either in advance or as they go, exiting early and marking the block as invalid if the schedule fails that requirement.
This design has the added advantage of leaving it to economically incentivized block builders to produce an efficient schedule, and it leaves room for scheduling improvements that do not necessarily require a hard fork.
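One possible shape for that verification, sketched under the assumption that the builder encodes the schedule as sequential "waves" of mutually conflict-free transactions and that every transaction declares its cost in CPU microseconds (the ScheduledTx/Wave types and verifySchedule function are illustrative, not an existing interface):

```go
package main

import "fmt"

// ScheduledTx is a hypothetical transaction entry in a block schedule: it
// declares its cost in CPU microseconds up front.
type ScheduledTx struct {
	ID       int
	CPUMicro uint64
}

// Wave is a group of transactions the builder claims are conflict-free with
// each other, so they may execute in parallel; waves execute sequentially.
type Wave []ScheduledTx

const budgetMicro = 100_000 // 100ms target execution time per block

// verifySchedule estimates the block's execution time on `cores` cores using
// a simple per-wave model: a wave takes at least its longest transaction and
// at least its total work spread across the cores. It returns an error if the
// estimate exceeds the 100ms budget. (Checking that each wave really is
// conflict-free against declared state keys is omitted here.)
func verifySchedule(waves []Wave, cores uint64) error {
	var total uint64
	for _, w := range waves {
		var sum, longest uint64
		for _, tx := range w {
			sum += tx.CPUMicro
			if tx.CPUMicro > longest {
				longest = tx.CPUMicro
			}
		}
		waveTime := (sum + cores - 1) / cores // work spread evenly across cores
		if longest > waveTime {
			waveTime = longest // can never beat the single longest tx
		}
		total += waveTime
	}
	if total > budgetMicro {
		return fmt.Errorf("schedule estimate %dµs exceeds %dµs budget", total, budgetMicro)
	}
	return nil
}

func main() {
	waves := []Wave{
		{{ID: 1, CPUMicro: 40_000}, {ID: 2, CPUMicro: 35_000}}, // parallel wave: ~40ms
		{{ID: 3, CPUMicro: 70_000}},                            // ~70ms
	}
	if err := verifySchedule(waves, 8); err != nil {
		fmt.Println("block invalid:", err) // ~110ms estimate exceeds the 100ms budget
	}
}
```

Because this check needs only the declared CPU units and the wave structure, a verifier could run it before executing anything or incrementally while executing, as described above.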
Note: Solana does something similar, but their exact design, and their assumptions about whether transactions have already been replicated when they arrive at the scheduler, are not fully clear to me from their design doc here: https://github.com/anza-xyz/agave/blob/master/docs/src/proposals/fee_transaction_priority.md.