Nakamoto Milestone 1: Producing a node implementation "first" #3855
kantai started this conversation in Blockchain
Here is just a slightly more fleshed-out post for the block producer mockamoto release vs. follow-up milestones.

Block Producer Requirements: Initial (Mockamoto) Release

Assumptions:

Requirements:

Follow-up Release Requirements:
The goal of milestone 1 ("mockamoto") is to produce an implementation of the high-level interfaces required by the Stacks Nakamoto proposal -- a working `stacks-node` that operates with Nakamoto rules but excludes as much of the actual implementation as possible.
This milestone replaces the actual implementation of the bitcoin transaction processing system with a mocked interface. sBTC is not included in this milestone, nor are the required changes to the p2p stack, nor is the actual FROST signing and validation. Producer set selection is not implemented (it is instead mocked), and neither is stacker set selection.
The idea behind this milestone is that it produces a testable and demoable artifact while also making the interfaces required of the eventual components clearer. Once in place, implementation of the p2p stack, sBTC integrations, signing integrations, and bitcoin transaction processing can all proceed with confidence in how they must be integrated into a `stacks-node`.

Required Features
Event interface for StackerDB notifications

To avoid any need to perform a polling loop, binaries like the block producer and stacker signer need to be able to receive push notifications when relevant StackerDB data arrives in their paired `stacks-node`.

Unless it becomes an issue for stalling (which I don't expect), this should just use the existing event observer interface. This means adding new events for StackerDB notifications.
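As a rough sketch of what push delivery looks like, here is a minimal channel-based version. The event payload and helper below are hypothetical stand-ins for illustration, not the node's actual event observer API:

```rust
use std::sync::mpsc;

// Hypothetical event payload; the actual StackerDB event schema is not
// given in this post.
#[derive(Debug, Clone, PartialEq)]
pub struct StackerDBChunkEvent {
    pub contract_id: String,
    pub slot_id: u32,
    pub data: Vec<u8>,
}

/// Push-style delivery: the node sends each new chunk event to every
/// registered observer channel, so observers block on recv() instead of
/// polling.
pub fn notify_observers(
    observers: &[mpsc::Sender<StackerDBChunkEvent>],
    event: &StackerDBChunkEvent,
) {
    for tx in observers {
        // Ignore observers that have hung up; the node must not stall.
        let _ = tx.send(event.clone());
    }
}
```

The real version would go over the existing HTTP event observer interface rather than an in-process channel, but the blocking-consumer shape is the same.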
Mocked implementation of Nakamoto ProcessedBurnDB

This should be a struct with the necessary methods for returning mocked data to its consumers. Initially, this should be relatively few methods. It will become clearer exactly which methods are necessary here once implementation of the other milestone 1 components starts, but an educated guess of those methods is:
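The method list itself did not survive in this copy of the post, but the mocked-struct pattern would look roughly like this. The method names below are illustrative guesses only, not the actual interface:

```rust
// Hypothetical hash type standing in for the real burnchain header hash.
type BurnchainHeaderHash = [u8; 32];

// Illustrative trait: the real ProcessedBurnDB method set is the "educated
// guess" list from the post, which is not reproduced here.
trait ProcessedBurnDB {
    fn get_canonical_burn_tip(&self) -> BurnchainHeaderHash;
    fn has_processed_block(&self, hash: &BurnchainHeaderHash) -> bool;
}

/// M1 mock: returns fixed data instead of reading real bitcoin state.
struct MockedBurnDB {
    tip: BurnchainHeaderHash,
}

impl ProcessedBurnDB for MockedBurnDB {
    fn get_canonical_burn_tip(&self) -> BurnchainHeaderHash {
        self.tip
    }
    fn has_processed_block(&self, hash: &BurnchainHeaderHash) -> bool {
        *hash == self.tip
    }
}
```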
Note: after M1, when the real implementation of `ProcessedBurnDB` happens, processing in this DB should occur somewhat independently of processing in the `NakamotoChainState`. It will just need to stop processing if it gets to a reward cycle boundary.
NakamotoStacksBlock and NakamotoStacksBlockHeader structs

NakamotoStacksBlockHeader should contain:

The NakamotoStacksBlock will be a struct containing the block header and a Vec of the transactions (similar to the existing `StacksBlock` struct).

Chain State Table for Nakamoto Headers
This is a new table in the same SQLite db as the current `StacksChainState` block headers DB. It should be accessed through a new struct `NakamotoChainState`. Each row of this table should store `NakamotoStacksBlockHeader` data, keyed by `StacksBlockId`. This should be a MARFed data store to assist with ancestor queries.
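An in-memory sketch of the keyed header store and the kind of ancestor query a MARFed index accelerates. The header fields were listed in the original post but did not survive extraction, so only a parent pointer (needed for the ancestor walk) is shown, and the linear walk below stands in for the MARF lookup:

```rust
use std::collections::HashMap;

type StacksBlockId = u64;

// Placeholder header: real fields omitted, keeping just the parent link.
struct NakamotoStacksBlockHeader {
    parent: Option<StacksBlockId>,
}

// In-memory stand-in for the headers table, keyed by StacksBlockId.
#[derive(Default)]
struct NakamotoChainState {
    headers: HashMap<StacksBlockId, NakamotoStacksBlockHeader>,
}

impl NakamotoChainState {
    fn insert_header(&mut self, id: StacksBlockId, header: NakamotoStacksBlockHeader) {
        self.headers.insert(id, header);
    }

    /// The query shape a MARFed store speeds up: is `ancestor` an ancestor
    /// of `tip`? (Here: a simple walk up parent pointers.)
    fn is_ancestor(&self, tip: StacksBlockId, ancestor: StacksBlockId) -> bool {
        let mut cur = Some(tip);
        while let Some(id) = cur {
            if id == ancestor {
                return true;
            }
            cur = self.headers.get(&id).and_then(|h| h.parent);
        }
        false
    }
}
```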
New DB for Nakamoto Staging Blocks

This should be a new SQLite db which contains a table `StagingBlocks`. This will be used to store new StacksBlocks as they arrive without needing to block on other block processing.
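A possible shape for that table; the column names below are guesses for illustration, not the actual migration:

```sql
-- Hypothetical schema sketch for the staging-blocks table.
CREATE TABLE StagingBlocks (
    block_id   TEXT PRIMARY KEY,          -- StacksBlockId
    block_data BLOB NOT NULL,             -- serialized NakamotoStacksBlock
    processed  INTEGER NOT NULL DEFAULT 0 -- set once append_block succeeds
);
```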
Chain State Interface for Nakamoto Block Processing

- `store_block()` - stores a block that is signed by stackers and producers in the staging db.
- `append_block()` - validates and applies the transactions in a staged block to the clarity chainstate. Marks the block as processed. This should validate that the block has no siblings.
- `get_staging_block(StacksBlockId)` - returns a stored block
- `get_next_processable_block()` - returns a stored block that is processable
- `is_processed(StacksBlockId)` - returns whether or not a given block has been processed
Query method for canonical Stacks block

This is a method in the nakamoto chain state struct, `get_canonical_stacks_block()`, which accepts a read-only connection to the sortition database and uses it to help in returning the canonical Stacks block by figuring out:

- the canonical bitcoin fork (using the sortition database connection), and
- the latest bitcoin block in that fork that a stacks block has witnessed. (Each stacks block entry in the db stores the bitcoin block hash of the latest bitcoin block data that it witnessed, so this can be queried just using the nakamoto db.) Note that this implicitly relies on a block processing guarantee: a stacks block is only processed if the bitcoin block hash in its header was processed by the node.

The method then returns that stacks block.
RPC method for push block

The HTTP interface should have a new `/v3/` endpoint for receiving a producer and stacker-signed block. This RPC endpoint should essentially just invoke the `store_block()` method of the nakamoto chain state.

Comms channels for RPC handler and block template/validation
RPC methods for assembling a block template and validating a block must
be asynchronous: block assembly and validation are slow operations, and
the network stack should not stall while processing them. In order to
facilitate this, the RPC handler will send a request for template
generation or block validation over a comms channel. A different
thread will handle the work and then pass the result via the event
interface.
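The handler-enqueues/worker-executes split can be sketched with a standard channel. The request and result types below are illustrative, not the node's actual channel types:

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative work items the RPC handler would enqueue.
enum RpcWork {
    GenerateTemplate { parent: u64 },
    ValidateBlock { block_id: u64 },
}

enum WorkResult {
    Template { parent: u64 },
    Valid { block_id: u64 },
}

/// The RPC handler only pushes onto the channel; this worker thread does the
/// slow assembly/validation, so the network stack never blocks on it.
fn spawn_worker(
    rx: mpsc::Receiver<RpcWork>,
    results: mpsc::Sender<WorkResult>,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for job in rx {
            let result = match job {
                // Real work (build_anchored_block, block validation) elided.
                RpcWork::GenerateTemplate { parent } => WorkResult::Template { parent },
                RpcWork::ValidateBlock { block_id } => WorkResult::Valid { block_id },
            };
            // In the real design this would go out via the event interface.
            let _ = results.send(result);
        }
    })
}
```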
RPC method for block template request and fetch

This will be two new RPC methods. The first submits a request to generate a block template. The handler should simply push a request onto the RPC handler comms channel.

The actual template assembly should be handled by a thread spawned in the main `stacks-node` entry point. Template assembly is identical to block assembly today (i.e., invocation of `build_anchored_block` in the miner) except that it should produce a `NakamotoStacksBlock`. Once assembled, the `NakamotoStacksBlock` should be relayed to the event dispatcher on a new event interface.
RPC method for block validation
This will be two new RPC methods. The first submits a request to
validate a proposed block. The handler should simply push a
request onto the RPC handler comms channel.
Block validation should accept a stacks block, execute it against
the chain tip, and ensure that it would have executed correctly.
Once validated, the result should be relayed to the event dispatcher
on a new event interface.
Block producer binary

This is a new binary that performs:

Note: it is important that the event observer endpoints not stall the `stacks-node`!

New stacks-node entry point
A new entry point for `stacks-node` (switched into via `main.rs`) for nakamoto operation. This entry point is responsible for spawning threads and initializing communication channels.

This entry point should:
NakamotoCoordinator

This thread consumes event notifications and triggers block processing. Basically, it should have a comms channel similar to the existing coordinator thread, but since there is no bitcoin block processing in M1, this thread would just signal on a notification that a new Stacks block has been stored. It would then wake up and attempt to process the new block.
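The M1 coordinator loop described above reduces to blocking on a channel and reacting to each notification. The notice type and stop signal below are illustrative assumptions:

```rust
use std::sync::mpsc;

// Illustrative notification type: in M1 the coordinator only needs to hear
// that a new Stacks block has been stored (Stop is for clean shutdown).
enum CoordinatorNotice {
    NewStacksBlock,
    Stop,
}

/// Sketch of the M1 coordinator loop: block on the channel, and on each
/// NewStacksBlock notice attempt to process newly staged blocks. Returns
/// how many processing attempts were triggered, for demonstration.
fn run_coordinator(rx: mpsc::Receiver<CoordinatorNotice>) -> u32 {
    let mut attempts = 0;
    for notice in rx {
        match notice {
            CoordinatorNotice::NewStacksBlock => {
                // Real impl: pull the next processable block from staging
                // and append it to the chainstate.
                attempts += 1;
            }
            CoordinatorNotice::Stop => break,
        }
    }
    attempts
}
```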