Add ephemeral block headers to the history network spec #341

**Open** · wants to merge 3 commits into base: `master`

Changes from 1 commit
49 changes: 45 additions & 4 deletions history/history-network.md
@@ -64,10 +64,13 @@ The history network supports the following protocol messages:
In the history network the `custom_payload` field of the `Ping` and `Pong` messages is the serialization of an SSZ Container specified as `custom_data`:

```python
custom_data = Container(data_radius: uint256, ephemeral_header_count: uint16)
custom_payload = SSZ.serialize(custom_data)
```

* The `data_radius` value defines the *distance* from the node's node-id for which other clients may assume the node would be interested in content.
* The `ephemeral_header_count` value defines the number of *recent* headers that this node stores. The maximum effective value for this is 8192.
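As an illustration of the layout, the serialized `custom_data` container is a fixed 34 bytes: `data_radius` as a 32-byte little-endian `uint256` followed by `ephemeral_header_count` as a 2-byte little-endian `uint16`. A minimal sketch (the helper names are illustrative, not part of the spec):

```python
def encode_custom_payload(data_radius: int, ephemeral_header_count: int) -> bytes:
    # SSZ serializes this fixed-size container as the concatenation of its
    # fields, each little-endian: uint256 (32 bytes) ++ uint16 (2 bytes).
    assert 0 <= data_radius < 2**256
    assert 0 <= ephemeral_header_count <= 8192  # maximum effective value
    return data_radius.to_bytes(32, "little") + ephemeral_header_count.to_bytes(2, "little")

def decode_custom_payload(payload: bytes) -> tuple[int, int]:
    # Inverse of the above; the payload is always exactly 34 bytes.
    assert len(payload) == 34
    return (
        int.from_bytes(payload[:32], "little"),
        int.from_bytes(payload[32:], "little"),
    )
```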

### Routing Table

The history network uses the standard routing table structure from the Portal Wire Protocol.
@@ -79,7 +82,7 @@ The history network uses the standard routing table structure from the Portal Wire Protocol.
The history network includes one additional piece of node state that should be tracked. Nodes must track the `data_radius` from the Ping and Pong messages for other nodes in the network. This value is a 256-bit integer and represents the data that a node is "interested" in. We define the following function to determine whether a node in the network should be interested in a piece of content.

```python
interested(node, content) = distance(node.id, content.id) <= node.data_radius
```

A node is expected to maintain `data_radius` information for each node in its local node table. A node's `data_radius` value may fluctuate as the contents of its local key-value store change.
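For illustration, assuming the XOR distance metric defined by the Portal Wire Protocol over 256-bit identifiers, the check can be sketched as:

```python
def distance(node_id: int, content_id: int) -> int:
    # XOR metric over 256-bit identifiers, per the Portal Wire Protocol
    return node_id ^ content_id

def interested(node_id: int, data_radius: int, content_id: int) -> bool:
    # A node is interested in content whose id lies within its data_radius
    return distance(node_id, content_id) <= data_radius
```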
@@ -135,6 +138,10 @@

```python
WITHDRAWAL_LENGTH = 64

SHANGHAI_TIMESTAMP = 1681338455
# Number sourced from EIP-4895

MAX_EPHEMERAL_HEADER_PAYLOAD = 256
# The maximum number of ephemeral headers that can be requested or transferred
# in a single request.
```

#### Encoding Content Values for Validation
@@ -157,7 +164,7 @@ each receipt/transaction and re-rlp-encode it, but only if it is a legacy transaction.

```python
HistoricalHashesAccumulatorProof = Vector[Bytes32, 15]

BlockHeaderProof = Union[HistoricalHashesAccumulatorProof]
# Review comment (Collaborator): We could for now also leave this None in
# there, allowing this PR to get merged, before clients get ready to migrate
# data. And that would allow us to decide on how to deal with it best
# (future wise).

BlockHeaderWithProof = Container(
    header: ByteList[MAX_HEADER_LENGTH], # RLP encoded header in SSZ ByteList
    proof: BlockHeaderProof
)
```

> **_Note:_** The `BlockHeaderProof` allows providing headers without a proof (`None`).
For pre-merge headers, clients **SHOULD NOT** accept headers without a proof
as there is the `HistoricalHashesAccumulatorProof` solution available.
as there is the `HistoricalHashesAccumulatorProof` solution available.
For post-merge headers, there is currently no proof solution and clients MAY
accept headers without a proof.
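The `HistoricalHashesAccumulatorProof` is a Merkle branch of 15 hashes. Generic SSZ Merkle branch verification can be sketched as follows (an illustrative sketch, not the spec's normative algorithm; the real proof is checked with SHA-256 against the accumulator root at the appropriate generalized index):

```python
from hashlib import sha256

def verify_merkle_branch(leaf: bytes, branch: list, gindex: int, root: bytes) -> bool:
    # Walk from the leaf up to the root, hashing with each sibling on the
    # side indicated by the low bit of the generalized index.
    node = leaf
    for sibling in branch:
        if gindex & 1:
            node = sha256(sibling + node).digest()
        else:
            node = sha256(node + sibling).digest()
        gindex >>= 1
    return node == root
```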
@@ -200,6 +207,40 @@

```python
content = SSZ.serialize(block_header_with_proof)
content_key = selector + SSZ.serialize(block_number_key)
```

##### Ephemeral Block Headers

This content type represents block headers *near* the HEAD of the chain. They are provable by tracing through the chain of `header.parent_hash` values. All nodes in the network are assumed to store some amount of this content. The `Ping.custom_data` and `Pong.custom_data` fields can be used to learn the number of recent headers that a client makes available.

> Note: The history network does not provide a mechanism for knowing the HEAD of the chain. Clients of this network **MUST** have an external oracle for this information. The Portal Beacon Network is able to provide this information.

> Note: The content-id for this data type is not meaningful.

> Note: This message is not valid for Gossip. Clients **SHOULD NOT** send or accept gossip messages for this content type.
> **Review comment (Contributor, marked resolved):** Instead of "Gossip", should we say "OFFER/ACCEPT"? We can clarify that this includes gossip as well, but that would be implied.

> Note: Clients **SHOULD** implement a mechanism to purge headers older than 8192 blocks from their content databases.
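One possible purge strategy, sketched against a hypothetical header store keyed by block number (the store interface and window handling are illustrative, not mandated by the spec):

```python
EPHEMERAL_WINDOW = 8192  # headers older than this many blocks are purged

def purge_old_headers(store: dict, head_number: int) -> None:
    # `store` is a hypothetical mapping of block number -> stored header.
    # Drop every entry more than EPHEMERAL_WINDOW blocks behind HEAD.
    cutoff = head_number - EPHEMERAL_WINDOW
    for number in list(store):
        if number < cutoff:
            del store[number]
```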

```python
# Content and content key

ephemeral_headers_key = Container(block_hash: Bytes32, ancestor_count: uint8)
# Review comment (Contributor): nit: Should we explicitly declare that
# ancestor_count can't be bigger than MAX_EPHEMERAL_HEADER_PAYLOAD? Or is it
# enough that it's implied by the content type? Or is it ok to request more
# than 256, but the answer shouldn't have more than that?
#
# Reply (Author): I'll make these things explicitly stated. They are enforced
# by the encoding types but worth making them stated limits too.

selector = 0x04

BlockHeader = ByteList[MAX_HEADER_LENGTH]
ephemeral_header_payload = List(BlockHeader, limit=MAX_EPHEMERAL_HEADER_PAYLOAD)

content = SSZ.serialize(ephemeral_header_payload)
content_key = selector + SSZ.serialize(ephemeral_headers_key)
```
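Since both fields of `ephemeral_headers_key` are fixed-size, the resulting content key is always 34 bytes: the selector, the block hash, and the ancestor count. A sketch (the helper name is illustrative):

```python
def ephemeral_headers_content_key(block_hash: bytes, ancestor_count: int) -> bytes:
    # selector (0x04) ++ block_hash (32 bytes) ++ ancestor_count (uint8, 1 byte)
    assert len(block_hash) == 32
    assert 0 <= ancestor_count <= 255
    return b"\x04" + block_hash + bytes([ancestor_count])
```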

The `ephemeral_header_payload` is an SSZ list of RLP encoded block header
objects. This object is subject to the following validity conditions.

* The list **MAY** be empty which signals that the responding node was unable to fulfill the request.
* The first element in the list **MUST** be the RLP encoded block header indicated by the `ephemeral_headers_key.block_hash` field from the content key.
* Each element after the first element in the list **MUST** be the RLP encoded block header indicated by the `header.parent_hash` of the previous item from the list.
* The list **SHOULD** contain no more than `ephemeral_headers_key.ancestor_count` items.
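The validity conditions above can be checked mechanically. A sketch using a simplified decoded-header stand-in (a real implementation would RLP decode each entry and compute its keccak hash):

```python
from dataclasses import dataclass

@dataclass
class Header:
    # Simplified stand-in for a decoded RLP block header
    hash: bytes
    parent_hash: bytes

def validate_ephemeral_headers(headers: list, requested_hash: bytes, ancestor_count: int) -> bool:
    if not headers:
        return True  # an empty list signals the request could not be fulfilled
    if len(headers) > ancestor_count:
        return False  # SHOULD contain no more than ancestor_count items
    expected = requested_hash
    for header in headers:
        if header.hash != expected:  # each entry must chain via parent_hash
            return False
        expected = header.parent_hash
    return True
```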


#### Block Body

After the addition of `withdrawals` to the block body in [EIP-4895](https://eips.ethereum.org/EIPS/eip-4895),