Proofreading the Project's Documentation (#111)
joaolago1113 authored Dec 7, 2023
1 parent 227ccd7 commit 195a784
Showing 9 changed files with 15 additions and 15 deletions.
2 changes: 1 addition & 1 deletion api/docs/README.md
@@ -1,6 +1,6 @@
This folder contains the API documentation for the gRPC services included in the EigenDA platform. Each markdown file contains the protobuf definitions for each respective service including:
- Churner: a hosted service responsible for maintaining the active set of Operators in the EigenDA network based on their delegated TVL.
- Disperser: the hosted service and primary point of interaction for Rollup users.
-- Node: individual EigenDA nodes ran on the network by EigenLayer Operators.
+- Node: individual EigenDA nodes run on the network by EigenLayer Operators.
- Retriever: a service that users can run on their own infrastructure, which exposes a gRPC endpoint for retrieval of blobs from EigenDA nodes.

2 changes: 1 addition & 1 deletion api/docs/node.md
@@ -51,7 +51,7 @@ BatchHeader (see core/data.go#BatchHeader)

### Blob
In EigenDA, the original blob to disperse is encoded as a polynomial via taking
-taking different point evaluations (i.e. erasure coding). These points are split
+different point evaluations (i.e. erasure coding). These points are split
into disjoint subsets which are assigned to different operator nodes in the EigenDA
network.
The data in this message is a subset of these points that are assigned to a
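The erasure-coding idea in the hunk above may be easier to see in code. A minimal sketch of splitting a blob's point evaluations into disjoint per-operator subsets — hypothetical, with invented names; the real EigenDA assignment also weights subsets by operator stake:

```go
package main

import "fmt"

// assignPoints splits the indices of a blob's point evaluations into
// disjoint, contiguous subsets, one per operator. Purely illustrative:
// the actual chunk-assignment logic differs in detail.
func assignPoints(numPoints, numOperators int) [][]int {
	subsets := make([][]int, numOperators)
	per := (numPoints + numOperators - 1) / numOperators // ceil division
	for p := 0; p < numPoints; p++ {
		op := p / per
		subsets[op] = append(subsets[op], p)
	}
	return subsets
}

func main() {
	// 10 evaluation points spread across 3 operators.
	for op, pts := range assignPoints(10, 3) {
		fmt.Printf("operator %d stores points %v\n", op, pts)
	}
}
```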
4 changes: 2 additions & 2 deletions docs/design/assignment.md
@@ -49,7 +49,7 @@ Note that if blob data were distributed exactly in accordance with stake, an ope

$$\gamma \tilde{B}_i = B\frac{S_i}{\sum_j S_j}.$$

-We require that portion of the blob stored by n operator $i$ will exceed its proportional allocation by no more than $B/n\gamma$. That is
+We require that portion of the blob stored by an operator $i$ will exceed its proportional allocation by no more than $B/n\gamma$. That is

$$\max_{\{S_j:j\in O\}} \gamma\frac{B_i - \tilde{B}_i}{B} \le 1/n.$$
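A quick numeric reading of this bound (an illustrative example, not from the original spec): with $B = 100$ chunks, $\gamma = 0.5$, and $n = 10$, the requirement caps each operator's excess over its stake-proportional share at

$$B_i - \tilde{B}_i \le \frac{B}{n\gamma} = \frac{100}{10 \cdot 0.5} = 20 \text{ chunks}.$$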

@@ -118,7 +118,7 @@ We can therefore satisfy requirement 2 by letting $\rho=1$.

It turns out that to meet the desired requirements, we do not need to increase the encoding complexity (i.e. decrease chunk size) compared to the default case. An increase in the total number of chunks due to the `ceil()` function can be handled by increasing the number of parity symbols.

-Moreover, the optimization routing described for finding $m$ will serve only to improve beyond the baseline (lower bound), which already achieves desired performance.
+Moreover, the optimization routine described for finding $m$ will serve only to improve beyond the baseline (lower bound), which already achieves desired performance.

## FAQs

2 changes: 1 addition & 1 deletion docs/spec/components/indexer.md
@@ -66,7 +66,7 @@ func (i Indexer) Index(){
myLatestHeader := i.HeaderService.GetLatestHeader(true)

// TODO: Also if there are no headers synced
-// Fast forward it it's too many blocks to catch up
+// Fast forward if it's too many blocks to catch up
if syncFromBlock - myLatestHeader.Number > maxSyncBlocks {

// This probably just wipes the HeaderStore clean
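// Illustrative reading of the guard above, with made-up numbers:
// syncFromBlock = 1_000_000, myLatestHeader.Number = 900_000, and
// maxSyncBlocks = 10_000 gives a gap of 100_000 > 10_000, so the
// indexer fast-forwards rather than replaying 100k blocks of headers.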
2 changes: 1 addition & 1 deletion docs/spec/integrations/disperser.md
@@ -32,7 +32,7 @@ func BlobStoreRequestsToPoly(blobStoreRequests []BlobStoreRequest) ([]fr.Element
blobDataStartDegrees = append(blobDataStartDegrees, len(overallPoly))
overallPoly = append(overallPoly, poly...)
}
-return overallPoly, bobIDs, blobDataStartDegrees
+return overallPoly, blobIDs, blobDataStartDegrees
}
```
The disperser returns to each requester the KZG commitment to the `overallPoly` that their data was included in, its start and end degrees, and the corresponding [DataStoreHeader](../spec/types/node-types.md#datastoreheader) that the blob was included in.
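As an aside on the hunk above: a requester can recover its blob's coefficient range in `overallPoly` from `blobDataStartDegrees`. A hypothetical helper sketching this (name and signature invented for illustration, not from the codebase):

```go
// blobRange returns the half-open coefficient range [start, end) that
// blob i occupies in overallPoly, given the blobDataStartDegrees slice
// built by BlobStoreRequestsToPoly. The last blob runs to the end.
func blobRange(startDegrees []int, overallLen, i int) (start, end int) {
	start = startDegrees[i]
	end = overallLen
	if i+1 < len(startDegrees) {
		end = startDegrees[i+1]
	}
	return start, end
}
```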
2 changes: 1 addition & 1 deletion docs/spec/protocol-modules/attestation/attestation.md
@@ -21,7 +21,7 @@ This document discusses how these properties are achieved by the attestation pro

### Sufficient stake checking

-The [BLSRegistry.sol](../contracts-registry.md) maintains the `pubkeyToStakeHistory` and `pubKeyToIndexHistory` storage variables, which allow for the the current stake and index of each operator to be retrieved for an arbitrary block number. These variables are updated whenever DA nodes register or deregister.
+The [BLSRegistry.sol](../contracts-registry.md) maintains the `pubkeyToStakeHistory` and `pubKeyToIndexHistory` storage variables, which allow for the current stake and index of each operator to be retrieved for an arbitrary block number. These variables are updated whenever DA nodes register or deregister.

TODO: Describe quorum storage variables.
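These history variables follow a checkpoint pattern: each registration or deregistration appends a (block number, value) entry, and a historical query returns the last entry at or before the requested block. A hedged Go sketch of that lookup (the real logic lives in the Solidity contracts and differs in detail):

```go
import "sort"

// StakeUpdate is a hypothetical checkpoint: an operator's stake as of
// a given block number. Illustrative only; the on-chain types differ.
type StakeUpdate struct {
	BlockNumber uint64
	Stake       uint64
}

// stakeAt returns the stake recorded by the last update whose
// BlockNumber is <= block, mirroring an arbitrary-block-number query
// against pubkeyToStakeHistory.
func stakeAt(history []StakeUpdate, block uint64) (uint64, bool) {
	// Index of the first update strictly after `block`.
	i := sort.Search(len(history), func(j int) bool {
		return history[j].BlockNumber > block
	})
	if i == 0 {
		return 0, false // no update at or before this block
	}
	return history[i-1].Stake, true
}
```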

4 changes: 2 additions & 2 deletions docs/spec/protocol-modules/overview.md
@@ -7,7 +7,7 @@ The overall security guarantee provided by EigenDA is actually a composite of ma
The main guarantee supported by the attestation module concerns the on-chain conditions under which a batch is able to be confirmed by the EigenDA smart contracts. In particular, the attestation module is responsible for upholding the following guarantee:
- Sufficient stake checking: A blob is only accepted on-chain when signatures from operators having sufficient stake on each quorum are presented.

-The Attestation module is largely implemented by the EigenDA smart contracts via bookkeeping of stake and associated checks performed at the batch confirmation phase of the [Disperal Flow](../flows/dispersal.md). For more details, see the [Attestation module documentation](./attestation/attestation.md.md)
+The Attestation module is largely implemented by the EigenDA smart contracts via bookkeeping of stake and associated checks performed at the batch confirmation phase of the [Disperal Flow](../flows/dispersal.md). For more details, see the [Attestation module documentation](./attestation/attestation.md)

## Storage
The main guarantee supported by the storage module concerns the off-chain conditions which mirror the on-chain conditions of the storage module. In particular, the storage module is responsible for upholding the following guarantee:
@@ -20,4 +20,4 @@ The Storage module is largely implemented by the DA nodes, with an untrusted sup
The main guarantee supported by the retrieval module concerns the retrievability of stored blob data by honest consumers of that data. In particular, the retrieval module is responsible for upholding the following guarantee:
- TODO: Articulate the retrieval guarantee that we support.

-For more details, see the [Retrieval module documentation](.retrieval/retrieval.md)
+For more details, see the [Retrieval module documentation](./retrieval/retrieval.md)
10 changes: 5 additions & 5 deletions inabox/README.md
@@ -32,7 +32,7 @@ make run-e2e
## Manually deploy the experiment and interact with the services
-### Preliminiary setup steps
+### Preliminary setup steps
Ensure that all submodules (e.g. EigenLayer smart contracts) are checked out to the correct branch, and then build the binaries.
```
@@ -62,7 +62,7 @@ This will
- Start all test infrastructure (localstack, graph node, anvil chain)
- Create the necessary AWS resources on localstack
- Deploy the smart contracts to anvil
-- Deploy subgraphs to the grpah node
+- Deploy subgraphs to the graph node
- Create configurations for the eigenda services (located in `inabox/testdata/DATETIME/envs`)
To view the logs generated by the graph node, run the following command
@@ -116,7 +116,7 @@ Running experiment in ./testdata/12D-07M-2023Y-14H-41M-19S/
2023/07/12 14:41:24 Deploying experiment...
Deploying EigenDA
Generating variables
-Test environment has succesfully deployed!
+Test environment has successfully deployed!
```
If there are any deployment errors, look at `inabox/testdata/DATETIME/deploy.log` for a detailed log.
@@ -130,9 +130,9 @@ Run the binaries:
cd inabox
./bin.sh start
```
-This will print all logs from the EigenDA services to the screen; `Crtl+C` will stop all services. Inspect the logs to make sure all binaries started without any errors.
+This will print all logs from the EigenDA services to the screen; `Ctrl+C` will stop all services. Inspect the logs to make sure all binaries started without any errors.
-Alternatively, you can start and stop the EigenDA services in detacbed mode by running `./bin.sh start-detached` and `./bin.sh stop-detached`, respectively. In this case, the logs are saved to `inabox/testdata/DATETIME/logs`.
+Alternatively, you can start and stop the EigenDA services in detached mode by running `./bin.sh start-detached` and `./bin.sh stop-detached`, respectively. In this case, the logs are saved to `inabox/testdata/DATETIME/logs`.
Disperse a blob:
```
2 changes: 1 addition & 1 deletion pkg/encoding/README.md
@@ -5,4 +5,4 @@

- is built upon crypto primitive from https://pkg.go.dev/github.com/protolambda/go-kzg

-- accepts arbitrary number of systematic nodes, parity nodes and data size, free of restricton on power of 2
+- accepts arbitrary number of systematic nodes, parity nodes and data size, free of restriction on power of 2
