Proposal for block header lookup via block number in the history network #334
Conversation
history/history-network.md
Outdated
> **_Note:_** The `BlockHeaderProof` allows to provide headers without a proof (`None`).
For pre-merge headers, clients SHOULD NOT accept headers without a proof
as there is the `HistoricalHashesAccumulatorProof` solution available.
For post-merge headers, there is currently no proof solution and clients MAY |
What is our plan here? Post-merge, do we need to prove against the beacon chain root? And we have the issue of dealing with mutable proofs for blocks while they are within the latest 8192 blocks?
This seems like a can that keeps getting kicked down the road and needs to be dealt with soon.
Seems like we could at least define everything pre-Capella since those proofs are all stable. Is there something stopping us from doing this?
I think that is in @kdeme's and @ogenev's area of research. It does seem like an important issue:
- Add HistoricalRootsBlockProof for merge till capella blocks #287
- Add HistoricalSummariesBlockProof for headers from Capella onwards #291
I think these two PRs made by Kim cover it. It is a little out of context for this PR; I do see why it is being brought up, but I believe those two PRs are the better place for it, as they were made to address exactly that.
I think this note can be removed and replaced with a reference to the other one, instead of repeating it.
But this is not something new for this specific PR, so a discussion around it here is probably not the best place. I'll summarize it anyhow:
There is a plan for this; it is laid out in the following 3 PRs:
- Add HistoricalRootsBlockProof for merge till capella blocks #287
- Add HistoricalSummariesBlockProof for headers from Capella onwards #291
- Add a suggestion on how to verify recent headers #292
The pre-Capella one could be activated already; it mostly depends on how ready clients are for this and whether we want clients to first have a better system to easily prune headers without proofs.
The post-Capella ones mostly rely on the beacon network being fully active (and there is the recent issue of the proof size increase).
> I think this note can be removed and replaced with a reference to the other one, instead of repeating it.
I moved things around a bit; let me know if there is still an issue or if you think it could be done in a cleaner way.
For post-Capella, couldn't we do something similar to what we do with pre-merge and prove against a fixed state root for each fork? This would allow us to fill in most of history with a few frozen hashes and would eliminate the need for clients to actively access beacon network data for older forks.
> For post-Capella, couldn't we do something similar to what we do with pre-merge and prove against a fixed state root for each fork? This would allow us to fill in most of history with a few frozen hashes and would eliminate the need for clients to actively access beacon network data for older forks.
Sure, the specifications do not say how one must get the `HistoricalSummaries`.
Clients could already bake these in at a certain timestamp. Of course, they would have to regularly update this in their binary to allow newer periods to be covered. Or a user could perhaps manually load them via the CLI through some json/yaml/ssz file that holds them. If I were to implement this, I'd probably go for the latter.
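As a rough sketch of that file-based option (assumptions: the beacon chain spec's `HistoricalSummary` is a fixed-size container of two 32-byte roots, so an SSZ `List[HistoricalSummary]` serializes as consecutive 64-byte records; the function name and file layout are hypothetical):

```python
from typing import List, Tuple

def parse_historical_summaries(data: bytes) -> List[Tuple[bytes, bytes]]:
    """Parse an SSZ-encoded List[HistoricalSummary] blob.

    HistoricalSummary is a fixed-size container of two 32-byte roots
    (block_summary_root, state_summary_root), so the list serializes
    as a plain concatenation of 64-byte records.
    """
    if len(data) % 64 != 0:
        raise ValueError("truncated historical summaries file")
    return [(data[i:i + 32], data[i + 32:i + 64])
            for i in range(0, len(data), 64)]
```

A client could load such a file at startup (e.g. via a CLI flag) and swap it out as newer periods get finalized.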
history/history-network.md
Outdated
@@ -20,7 +20,7 @@ In addition, the chain history network provides individual epoch records for the
- Ommers
- Withdrawals
- Receipts
- Header epoch records (pre-merge only)
- Block header indexes
I would remove this, as it's actually the block headers that are provided, just accessed by number.
history/history-network.md
Outdated
@@ -261,17 +259,27 @@ content_key = selector + SSZ.serialize(receipt_key)
Note: The type-specific receipts encoding might be different for future receipt types, but this content encoding is agnostic to the underlying receipt encodings.
#### Epoch Record
#### Block Header Indexes
I would also call it Block Header by Number here (and perhaps refer to the other one as by hash).
history/history-network.md
Outdated
@@ -8,7 +8,7 @@ The chain history network is a [Kademlia](https://pdos.csail.mit.edu/~petar/pape
Execution chain history data consists of historical block headers, block bodies (transactions, ommers and withdrawals) and block receipts.
In addition, the chain history network provides individual epoch records for the full range of pre-merge blocks mined before the transition to proof of stake.
In addition, the chain history network provides block number to historical block header lookups
nitpick: period at end of sentence
history/history-network.md
Outdated
# Content and content key
block_number_key = Container(block_number: uint64)
selector = 0x04
What's the reasoning not to reuse the 0x03 value of Epoch Records?
Avoiding some clashing of older versions that still decode this differently and it being more clear when something sends the old 0x03 data?
> Avoiding some clashing of older versions that still decode this differently and it being more clear when something sends the old 0x03 data?
Initially
> What's the reasoning not to reuse the 0x03 value of Epoch Records?
I am fine reusing it; I wasn't sure if that was OK. I will update it now.
I'm not sure myself / have no strong preference.
With no reuse, the transition period can be somewhat cleaner in terms of error logs, and there are enough numbers available.
But I don't think it's really an issue to re-use either, and the end result is cleaner :)
I'm inclined towards re-using rather than leaving a gap. Not a strong feeling.
Works for me.
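For reference, the resulting encoding is tiny: SSZ serializes a container with a single uint64 field as 8 little-endian bytes, so the content key is just a selector byte followed by the little-endian block number. A minimal sketch, assuming the re-used 0x03 selector settled on above and the SHA-256 content id used by the history network's other content types (function names are hypothetical):

```python
import hashlib

def block_number_content_key(block_number: int, selector: int = 0x03) -> bytes:
    # content_key = selector + SSZ.serialize(Container(block_number: uint64))
    # A single-field uint64 container serializes as 8 little-endian bytes.
    return bytes([selector]) + block_number.to_bytes(8, "little")

def content_id(content_key: bytes) -> bytes:
    # History network content ids are the SHA-256 hash of the content key.
    return hashlib.sha256(content_key).digest()

key = block_number_content_key(15_537_394)  # the merge block, for illustration
assert key.hex() == "03f214ed0000000000"
```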
Great idea here. Currently in Fluffy, our state endpoints implementation (eth_getBalance, eth_getCode, etc.) does two lookups in order to get the state root, which is required to return any state data: the first lookup gets the block hash for the given block number via the epoch record, and the second looks up the block header by block hash to get the state root. With this change we will only need one history network lookup instead of two.
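That saving can be illustrated with a toy model (all names here are hypothetical; an in-memory dict stands in for network lookups, and the selector bytes only mirror the shape of the real content keys):

```python
import hashlib

network: dict = {}  # toy stand-in for history network content lookups

def header_by_hash_key(block_hash: bytes) -> bytes:
    return bytes([0x00]) + block_hash  # toy "header by hash" key

def header_by_number_key(number: int) -> bytes:
    return bytes([0x03]) + number.to_bytes(8, "little")  # toy "by number" key

EPOCH_RECORD_KEY = bytes([0x03]) + b"epoch-0"  # toy old-style epoch record key

# Seed a toy header for block 100, reachable by hash, by number,
# and via the epoch record that maps number -> hash.
header = {"number": 100, "state_root": "0xabc"}
block_hash = hashlib.sha256(b"block-100").digest()
network[header_by_hash_key(block_hash)] = header
network[header_by_number_key(100)] = header
network[EPOCH_RECORD_KEY] = {100: block_hash}

# Old flow: two lookups (epoch record -> block hash, then header by hash).
hash_from_record = network[EPOCH_RECORD_KEY][100]
state_root_old = network[header_by_hash_key(hash_from_record)]["state_root"]

# New flow: one lookup, straight from block number to header.
state_root_new = network[header_by_number_key(100)]["state_root"]
assert state_root_old == state_root_new
```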
⛵
This is an offshoot of PR #333, which you can read for deeper background and reasoning.
In the other PR I list the pros and cons of certain block index models and in the end conclude that it would probably make more sense to just store these values in the history network, as the total storage size for these indexes will be roughly 22 GB with proofs. That is extremely small compared with the requirement to store block bodies and receipts, which is around 749.94 GB, so I think it is well justified.
On the same note, we previously discussed removing the epoch accumulator history network content type in #268, but decided against it because no better solution was available at the time. With this proposal we add `block number -> block header` lookup capabilities. Here is a copy-paste from my other PR on why we want these indexes:
> In short, this proposal offers a much better UX and enables us to properly implement the JSON-RPC API.