diff --git a/.env.server.example b/.env.server.example
index 8d2680a8..e46cc73f 100644
--- a/.env.server.example
+++ b/.env.server.example
@@ -4,12 +4,6 @@
 L1_BEACON_RPC=
 L2_RPC=
 L2_NODE_RPC=
-# Note: These are used because of how the OP Proposer loads the txMgr config in the Optimism monorepo.
-L1_ETH_RPC=
-BEACON_RPC=
-ROLLUP_RPC=
-
-
 # op-proposer configuration
 POLL_INTERVAL=
 L2OO_ADDRESS=
diff --git a/book/SUMMARY.md b/book/SUMMARY.md
index 4dba4bd5..6b36040c 100644
--- a/book/SUMMARY.md
+++ b/book/SUMMARY.md
@@ -7,4 +7,5 @@
 - [L2 Output Oracle](./getting-started/l2-output-oracle.md)
 - [Proposer](./getting-started/proposer.md)
 - [Configuration](./getting-started/configuration.md)
-- [Cost Estimator CLI Tool](./cost-estimator.md)
\ No newline at end of file
+- [Cost Estimator CLI Tool](./cost-estimator.md)
+- [L2 Node Setup](./node-setup.md)
diff --git a/book/cost-estimator.md b/book/cost-estimator.md
index 4c2a4889..eafd6090 100644
--- a/book/cost-estimator.md
+++ b/book/cost-estimator.md
@@ -1,4 +1,4 @@
-# Cycle Counts [Cost Estimator]
+# Cost Estimator
 
 We provide a convenient CLI tool to estimate the RISC-V cycle counts (and cost) for generating ZKPs for a range of blocks for a given rollup.
 
@@ -19,44 +19,52 @@ It is required that the L2 RPC is an archival node for your OP stack rollup, wit
 
 Then run the following command:
 
 ```shell
-RUST_LOG=info just run-multi <start> <end>
+RUST_LOG=info just cost-estimator <start> <end>
 ```
 
-This will fetch the required data for generating a ZKP for the given block range, "execute" the
-corresponding SP1 program, and return the cycle count. Then, you can extrapolate the cycle count
-to a cost based on the cost per billion cycles.
+This command will execute `op-succinct` as if it were running in production. First, it will divide the entire block range
+into smaller ranges optimized along the span batch boundaries. Then it will fetch the required data for generating the ZKP for each of these ranges, and execute the SP1 `span` program. Once each program finishes, it will collect the statistics and output the aggregate statistics
+for the entire block range. From this data, you can extrapolate the cycle count to a cost based on the cost per billion cycles.
 
-Proofs over a span batch are split into several "span proofs", which prove the validity of a section of the span batch. Then, these span proofs are aggregated into a single proof, which is submitted to the L1.
+## Example
 
-## Example Block Range
-
-On OP Sepolia, generating a proof from 15840000 to 15840050 (50 blocks) takes ~1.5B cycles and takes
+On Optimism Sepolia, proving the block range 16240000 to 16240050 (50 blocks) generates 4 span proofs, takes ~1.8B cycles and
 ~2 minutes to execute.
 
 ```bash
-RUST_LOG=info just run-multi 15840000 15840050
+RUST_LOG=info just cost-estimator 16240000 16240050
 
 ...Execution Logs...
 
 +--------------------------------+---------------------------+
 | Metric                         | Value                     |
 +--------------------------------+---------------------------+
-| Total Cycles                   | 1,502,329,547             |
-| Block Execution Cycles         | 1,009,112,508             |
-| Total Blocks                   | 51                        |
-| Total Transactions             | 202                       |
-| Cycles per Block               | 19,786,519                |
-| Cycles per Transaction         | 4,995,606                 |
-| Transactions per Block         | 3                         |
-| Total Gas Used                 | 52,647,751                |
-| Gas Used per Block             | 1,032,308                 |
-| Gas Used per Transaction       | 260,632                   |
+| Batch Start                    | 16,240,000                |
+| Batch End                      | 16,240,050                |
+| Execution Duration (seconds)   | 130                       |
+| Total Instruction Count        | 1,776,092,063             |
+| Oracle Verify Cycles           | 237,150,812               |
+| Derivation Cycles              | 493,177,851               |
+| Block Execution Cycles         | 987,885,587               |
+| Blob Verification Cycles       | 84,995,660                |
+| Total SP1 Gas                  | 2,203,604,618             |
+| Number of Blocks               | 51                        |
+| Number of Transactions         | 160                       |
+| Ethereum Gas Used              | 43,859,242                |
+| Cycles per Block               | 74,736,691                |
+| Cycles per Transaction         | 23,422,603                |
+| Transactions per Block         | 11                        |
+| Gas Used per Block             | 3,509,360                 |
+| Gas Used per Transaction       | 1,105,066                 |
+| BN Pair Cycles                 | 0                         |
+| BN Add Cycles                  | 0                         |
+| BN Mul Cycles                  | 0                         |
+| KZG Eval Cycles                | 0                         |
+| EC Recover Cycles              | 9,407,847                 |
 +--------------------------------+---------------------------+
 ```
 
 ## Misc
 
 - For large enough block ranges, the RISC-V SP1 program will surpass the SP1 memory limit. Recommended limit is 20-30 blocks.
 - Your L2 node must have been synced for the blocks in the range you are proving.
-    - OP Sepolia Node: Synced from block 15800000 onwards.
-    - OP Mainnet Node: Synced from block 122940000 onwards.
diff --git a/book/getting-started/l2-output-oracle.md b/book/getting-started/l2-output-oracle.md
index a6041123..718cf117 100644
--- a/book/getting-started/l2-output-oracle.md
+++ b/book/getting-started/l2-output-oracle.md
@@ -1,4 +1,4 @@
-# L2 Output Oracle
+# Deploy L2 Output Oracle
 
 The first step in deploying OP Succinct is to deploy a Solidity smart contract that will verify ZKPs of OP derivation (OP's name for their state transition function) and contain the latest state root of your rollup.
 
@@ -25,7 +25,7 @@ Inside the `contracts` folder there is a file called `zkconfig.json` that contai
 
 | Parameter | Description |
 |-----------|-------------|
-| `startingBlockNumber` | The L2 block number at which the rollup starts. Default should be 0. |
+| `startingBlockNumber` | The L2 block number at which to start generating validity proofs. This should be set to the current L2 block number. You can fetch this with `cast bn --rpc-url `. |
 | `l2RollupNode` | The URL of the L2 rollup node. (After the tutorial, this is `http://localhost:8545`) |
 | `submissionInterval` | The number of L2 blocks between each L1 output submission. |
 | `l2BlockTime` | The time in seconds between each L2 block. |
@@ -57,29 +57,69 @@ and then run the following command to deploy the contract:
 
 ```
 forge script script/ZKDeployer.s.sol:ZKDeployer \
     --rpc-url $L1_RPC \
     --private-key $PRIVATE_KEY \
+    --ffi \
     --verify \
     --verifier etherscan \
     --etherscan-api-key $ETHERSCAN_API_KEY \
-    --broadcast \
-    --ffi
+    --broadcast
 ```
 
 If successful, you should see the following output:
 
 ```
-Submitting verification for [src/ZKL2OutputOracle.sol:ZKL2OutputOracle] 0xfe6BcbCD9c067d937431b54AfF107D4F8f2aC653.
-Submitted contract for verification:
-        Response: `OK`
-        GUID: `qyc71u8whqpuf3cylumh3bdcf39a4nzv6ffpubfqef2jrzserr`
-        URL: https://sepolia.etherscan.io/address/0xfe6bcbcd9c067d937431b54aff107d4f8f2ac653
-Contract verification status:
-Response: `NOTOK`
-Details: `Pending in queue`
-Contract verification status:
-Response: `NOTOK`
-Details: `Already Verified`
-Contract source code already verified
-All (2) contracts were verified!
+Script ran successfully.
+
+== Return ==
+0: address 0x9b520F7d8031d45Eb8A1D9fE911038576931ab95
+
+## Setting up 1 EVM.
+
+==========================
+
+Chain 11155111
+
+Estimated gas price: 11.826818849 gwei
+
+Estimated total gas used for script: 3012823
+
+Estimated amount required: 0.035632111845100727 ETH
+
+==========================
+
+##### sepolia
+✅ [Success]Hash: 0xc57d97ac588563406183969e8ea15bc06496915547114b1df4e024c142df07b4
+Contract Address: 0x2e4a7Dc6F19BdE1edF1040f855909afF7CcBeDeC
+Block: 6633852
+Paid: 0.00858210364707003 ETH (1503205 gas * 5.709203766 gwei)
+
+
+##### sepolia
+✅ [Success]Hash: 0x1343094b0be4e89594aedb57fb795d920e7cc1a76288485e8cf248fa206321ed
+Block: 6633852
+Paid: 0.001907479233443196 ETH (334106 gas * 5.709203766 gwei)
+
+
+##### sepolia
+✅ [Success]Hash: 0x708ce24c69c2637cadd6cffc654cbe2114e9ea4ec1e69838cd45c1fa27981713
+Contract Address: 0x9b520F7d8031d45Eb8A1D9fE911038576931ab95
+Block: 6633852
+Paid: 0.00250654027540581 ETH (439035 gas * 5.709203766 gwei)
+
+✅ Sequence #1 on sepolia | Total Paid: 0.012996123155919036 ETH (2276346 gas * avg 5.709203766 gwei)
+
+
+==========================
+
+ONCHAIN EXECUTION COMPLETE & SUCCESSFUL.
+##
+Start verification for (2) contracts
+Start verifying contract `0x9b520F7d8031d45Eb8A1D9fE911038576931ab95` deployed on sepolia
+
+Submitting verification for [lib/optimism/packages/contracts-bedrock/src/universal/Proxy.sol:Proxy] 0x9b520F7d8031d45Eb8A1D9fE911038576931ab95.
+
+...
 ```
 
-Keep note of the address of the `ZKL2OutputOracle` contract that was deployed. You will need it in the next few sections.
\ No newline at end of file
+Keep note of the address of the `Proxy` contract that was deployed, which in this case is `0x9b520F7d8031d45Eb8A1D9fE911038576931ab95`.
+
+It is also returned by the script as `0: address 0x9b520F7d8031d45Eb8A1D9fE911038576931ab95`.
\ No newline at end of file
diff --git a/book/getting-started/prerequisites.md b/book/getting-started/prerequisites.md
index 314d5353..ed56e147 100644
--- a/book/getting-started/prerequisites.md
+++ b/book/getting-started/prerequisites.md
@@ -7,6 +7,15 @@ You must have the following installed:
 - [Foundry](https://book.getfoundry.sh/getting-started/installation)
 - [Docker](https://docs.docker.com/get-started/)
 
+You must have the following RPCs available:
+- L1 Archive Node
+- L1 Consensus (Beacon) Node
+- L2 Archive Node
+- L2 Rollup Node
+
+If you do not have an L2 OP Geth node + rollup node running for your rollup, you can follow the [node setup instructions](../node-setup.md) to get started.
+
+
 ## OP Stack Chain
 
 The rest of this section will assume you have an existing OP Stack Chain running. If you do not have one, there are two ways you can get started:
diff --git a/book/getting-started/proposer.md b/book/getting-started/proposer.md
index ef4ee3d9..cf156422 100644
--- a/book/getting-started/proposer.md
+++ b/book/getting-started/proposer.md
@@ -6,36 +6,60 @@ The `op-succinct-proposer` service will call to [Succinct's Prover Network](http
 
 The modified proposer performs the following tasks:
 1. Monitors L1 state to determine when to request a proof.
-2. Requests proofs from the OP Succinct server.
-3. Once proofs have been generated for a sufficiently large range, aggregates batch proofs and submits them on-chain.
+2. Requests proofs from the OP Succinct server. The server sends requests to the Succinct Prover Network.
+3. Once proofs have been generated for a sufficiently large range, aggregates span proofs and submits them on-chain.
 
 We've packaged the `op-succinct-proposer` service in a docker-compose file to make it easier to run.
 
-## 1) Build the Proposer
+## 1) Set Proposer Parameters
+
+In the root directory, create a file called `.env` (mirroring `.env.example`) and set the following environment variables:
+
+| Parameter | Description |
+|-----------|-------------|
+| `L1_RPC` | The RPC URL for the L1 Ethereum node. |
+| `L1_BEACON_RPC` | The RPC URL for the L1 Ethereum consensus node. |
+| `L2_RPC` | The RPC URL for the L2 archive node (OP-Geth). |
+| `L2_NODE_RPC` | The RPC URL for the L2 node. |
+| `POLL_INTERVAL` | The interval at which to poll for new L2 blocks. |
+| `L2OO_ADDRESS` | The address of the L2OutputOracle contract. |
+| `PRIVATE_KEY` | The private key for the `op-proposer` account. |
+| `L2_CHAIN_ID` | The chain ID of the L2 network. |
+| `MAX_CONCURRENT_PROOF_REQUESTS` | The maximum number of concurrent proof requests (default is 20). |
+| `MAX_BLOCK_RANGE_PER_SPAN_PROOF` | The maximum block range per span proof (default is 30). |
+| `OP_SUCCINCT_SERVER_URL` | The URL of the OP Succinct server (default is http://op-succinct-server:3000). |
+| `PROVER_NETWORK_RPC` | The RPC URL for the Succinct Prover Network. |
+| `SP1_PRIVATE_KEY` | The private key for the SP1 account. |
+| `SP1_PROVER` | The type of prover to use (set to "network"). |
+| `SKIP_SIMULATION` | Whether to skip simulation of the proof before sending to the SP1 server (default is true). |
+
+
+## 2) Build the Proposer
 
 Build the docker images for the `op-succinct-proposer` service.
 
 ```bash
-cd proposer
-sudo docker-compose build
+docker-compose build
 ```
 
-## 2) Run the Proposer
+## 3) Run the Proposer
 
 This command launches the `op-succinct-proposer` service in the background. It launches two containers:
 one container that manages proof generation and another container that is a small fork of the original `op-proposer` service.
 
+After a few minutes, you should see the `op-succinct-proposer` service start to generate span proofs.
+Once enough span proofs have been generated, they will be verified in an aggregate proof and submitted to the L1.
+
 ```bash
-sudo docker-compose up
+docker-compose up
 ```
 
 To see the logs of the `op-succinct-proposer` service, run:
 
 ```bash
-sudo docker-compose logs -f
+docker-compose logs -f
 ```
 
 and to stop the `op-succinct-proposer` service, run:
 
 ```bash
-sudo docker-compose down
+docker-compose down
 ```
\ No newline at end of file
diff --git a/book/node-setup.md b/book/node-setup.md
new file mode 100644
index 00000000..5af9bf00
--- /dev/null
+++ b/book/node-setup.md
@@ -0,0 +1,33 @@
+# L2 Node Setup
+
+## Setup Instructions
+
+1. Clone [ops-anton](https://github.com/anton-rs/ops-anton) and follow the instructions in the README to set up your rollup.
+2. Go to [op-node.sh](https://github.com/anton-rs/ops-anton/blob/main/L2/op-mainnet/op-node/op-node.sh#L4-L6) and set the `L2_RPC` to your rollup RPC. Modify the `l1` and `l1.beacon` to your L1 and L1 Beacon RPCs. Note: Your L1 node should be an archive node.
+3. If you are starting a node for a different chain, you will need to modify `op-network` in `op-geth.sh` [here](https://github.com/anton-rs/ops-anton/blob/main/L2/op-mainnet/op-geth/op-geth.sh#L18) and `network` in `op-node.sh` [here](https://github.com/anton-rs/ops-anton/blob/main/L2/op-mainnet/op-node/op-node.sh#L10).
+4. In `/L2/op-mainnet` (or the directory you chose):
+   1. Generate a JWT secret: `./generate_jwt.sh`
+   2. `docker network create anton-net` (Creates a Docker network for the nodes to communicate on).
+   3. `just up` (Starts all the services).
+
+Your `op-geth` endpoint will be available at the RPC port chosen [here](https://github.com/anton-rs/ops-anton/blob/main/L2/op-mainnet/op-geth/op-geth.sh#L7), which in this case is `8547` (e.g. `http://localhost:8547`).
+
+Your `op-node` endpoint (rollup node) will be available at the RPC port chosen [here](https://github.com/anton-rs/ops-anton/blob/main/L2/op-mainnet/op-node/op-node.sh#L13), which in this case is `5058` (e.g. `http://localhost:5058`).
+
+## Checking Sync Status
+
+After a few hours, your node should be fully synced and you can use it to begin generating ZKPs.
+
+To check your node's sync status, you can run the following commands:
+
+**op-geth:**
+
+```bash
+curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://localhost:8547
+```
+
+**op-node:**
+
+```bash
+curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' http://localhost:5058
+```
\ No newline at end of file
diff --git a/contracts/.env.example b/contracts/.env.example
index 51017a05..91118ddb 100644
--- a/contracts/.env.example
+++ b/contracts/.env.example
@@ -1,2 +1,5 @@
+L1_RPC=
 # Rollup RPC URL
-L2_NODE_RPC=
\ No newline at end of file
+L2_NODE_RPC=
+PRIVATE_KEY=
+ETHERSCAN_API_KEY=
\ No newline at end of file
diff --git a/contracts/script/ZKDeployer.s.sol b/contracts/script/ZKDeployer.s.sol
index 1217bd89..2f720b5a 100644
--- a/contracts/script/ZKDeployer.s.sol
+++ b/contracts/script/ZKDeployer.s.sol
@@ -7,7 +7,7 @@ import {Utils} from "../test/helpers/Utils.sol";
 import {Proxy} from "@optimism/src/universal/Proxy.sol";
 
 contract ZKDeployer is Script, Utils {
-    function run() public {
+    function run() public returns (address) {
         vm.startBroadcast();
 
         Config memory config = readJsonWithRPCFromEnv("zkconfig.json");
@@ -19,5 +19,7 @@ contract ZKDeployer is Script, Utils {
         upgradeAndInitialize(zkL2OutputOracleImpl, config, address(0), bytes32(0), 0);
 
         vm.stopBroadcast();
+
+        return config.l2OutputOracleProxy;
     }
 }
diff --git a/contracts/zkconfig.json b/contracts/zkconfig.json
index 9bd08921..85fa04ec 100644
--- a/contracts/zkconfig.json
+++ b/contracts/zkconfig.json
@@ -1,5 +1,5 @@
 {
-    "startingBlockNumber": 16795981,
+    "startingBlockNumber": 16837928,
     "l2RollupNode": "",
     "submissionInterval": 150,
     "l2BlockTime": 2,
diff --git a/justfile b/justfile
index d2369c8a..79cfef61 100644
--- a/justfile
+++ b/justfile
@@ -27,6 +27,11 @@ run-multi start end use-cache="false" prove="false":
     cargo run --bin multi --release -- --start {{start}} --end {{end}} $CACHE_FLAG $PROVE_FLAG
 
+# Runs the cost estimator for a given block range.
+cost-estimator start end:
+  #!/usr/bin/env bash
+  cargo run --bin cost_estimator --release -- --start {{start}} --end {{end}}
+
 # Runs the client program in native execution mode. Modified version of Kona Native Client execution:
 # https://github.com/ethereum-optimism/kona/blob/ae71b9df103c941c06b0dc5400223c4f13fe5717/bin/client/justfile#L65-L108
 run-client-native l2_block_num l1_rpc='${L1_RPC}' l1_beacon_rpc='${L1_BEACON_RPC}' l2_rpc='${L2_RPC}' verbosity="-vvvv":
diff --git a/proposer/op/proposer/db/db.go b/proposer/op/proposer/db/db.go
index 59fef1fb..8c1e139b 100644
--- a/proposer/op/proposer/db/db.go
+++ b/proposer/op/proposer/db/db.go
@@ -211,6 +211,26 @@ func (db *ProofDB) GetLatestEndBlock() (uint64, error) {
 	return uint64(maxEnd.EndBlock), nil
 }
 
+// If a proof failed to be sent to the prover network, its status will be set to FAILED, but the prover request ID will be empty.
+// This function returns all such proofs.
+func (db *ProofDB) GetProofsFailedOnServer() ([]*ent.ProofRequest, error) {
+	proofs, err := db.client.ProofRequest.Query().
+		Where(
+			proofrequest.StatusEQ(proofrequest.StatusFAILED),
+			proofrequest.ProverRequestIDEQ(""),
+		).
+		All(context.Background())
+
+	if err != nil {
+		if ent.IsNotFound(err) {
+			return nil, nil
+		}
+		return nil, fmt.Errorf("failed to query failed proofs: %w", err)
+	}
+
+	return proofs, nil
+}
+
 // Get all pending proofs with a status of requested and a prover ID that is not empty.
 func (db *ProofDB) GetAllPendingProofs() ([]*ent.ProofRequest, error) {
 	proofs, err := db.client.ProofRequest.Query().
diff --git a/proposer/op/proposer/prove.go b/proposer/op/proposer/prove.go
index 122ebda1..6cabf2b7 100644
--- a/proposer/op/proposer/prove.go
+++ b/proposer/op/proposer/prove.go
@@ -16,9 +16,10 @@ import (
 	"github.com/succinctlabs/op-succinct-go/proposer/db/ent/proofrequest"
 )
 
-// 1) Retry all failed proofs
+// Process all of the pending proofs.
 func (l *L2OutputSubmitter) ProcessPendingProofs() error {
-	failedReqs, err := l.db.GetAllProofsWithStatus(proofrequest.StatusFAILED)
+	// Retrieve all proofs that failed without reaching the prover network (specifically, proofs that failed with no proof ID).
+	failedReqs, err := l.db.GetProofsFailedOnServer()
 	if err != nil {
 		return fmt.Errorf("failed to get proofs failed on server: %w", err)
 	}
@@ -30,7 +31,8 @@ func (l *L2OutputSubmitter) ProcessPendingProofs() error {
 	}
 
 	// Get all pending proofs with a status of requested and a prover ID that is not empty.
-	// TODO: There should be a proofrequest status where the prover ID is not empty.
+	// TODO: There should be a separate proofrequest status for proofs that failed before reaching the prover network,
+	// and those that failed after reaching the prover network.
 	reqs, err := l.db.GetAllPendingProofs()
 	if err != nil {
 		return err
@@ -83,22 +85,22 @@ func (l *L2OutputSubmitter) RetryRequest(req *ent.ProofRequest) error {
 			l.Log.Error("failed to add new proof request", "err")
 			return err
 		}
-	}
+	} else {
+		// If a SPAN proof failed, assume it was too big and the SP1 runtime OOM'd.
+		// Therefore, create two new entries for the original proof split in half.
+		l.Log.Info("span proof failed, splitting in half to retry", "req", req)
+		tmpStart := req.StartBlock
+		tmpEnd := tmpStart + ((req.EndBlock - tmpStart) / 2)
+		for i := 0; i < 2; i++ {
+			err := l.db.NewEntryWithReqAddedTimestamp("SPAN", tmpStart, tmpEnd, 0)
+			if err != nil {
+				l.Log.Error("failed to add new proof request", "err", err)
+				return err
+			}
 
-	// If a SPAN proof failed, assume it was too big and the SP1 runtime OOM'd.
-	// Therefore, create two new entries for the original proof split in half.
-	l.Log.Info("span proof failed, splitting in half to retry", "req", req)
-	tmpStart := req.StartBlock
-	tmpEnd := tmpStart + ((req.EndBlock - tmpStart) / 2)
-	for i := 0; i < 2; i++ {
-		err := l.db.NewEntryWithReqAddedTimestamp("SPAN", tmpStart, tmpEnd, 0)
-		if err != nil {
-			l.Log.Error("failed to add new proof request", "err", err)
-			return err
+			tmpStart = tmpEnd + 1
+			tmpEnd = req.EndBlock
 		}
-
-		tmpStart = tmpEnd + 1
-		tmpEnd = req.EndBlock
 	}
 
 	return nil
diff --git a/proposer/succinct/bin/cost_estimator.rs b/proposer/succinct/bin/cost_estimator.rs
index 8af4c709..d91f2e30 100644
--- a/proposer/succinct/bin/cost_estimator.rs
+++ b/proposer/succinct/bin/cost_estimator.rs
@@ -14,7 +14,15 @@ use rayon::iter::{IntoParallelRefIterator, ParallelIterator};
 use reqwest::Client;
 use serde::{Deserialize, Serialize};
 use sp1_sdk::{utils, ProverClient};
-use std::{cmp::min, env, fs, future::Future, path::PathBuf, time::Instant};
+use std::{
+    cmp::{max, min},
+    env, fs,
+    future::Future,
+    net::TcpListener,
+    path::PathBuf,
+    process::{Command, Stdio},
+    time::Instant,
+};
 use tokio::task::block_in_place;
 
 pub const MULTI_BLOCK_ELF: &[u8] = include_bytes!("../../../elf/range-elf");
@@ -53,7 +61,7 @@ struct SpanBatchRequest {
 
 #[derive(Deserialize, Debug, Clone)]
 struct SpanBatchResponse {
-    ranges: Vec<SpanBatchRange>,
+    ranges: Option<Vec<SpanBatchRange>>,
 }
 
 #[derive(Debug, Clone, Serialize, Deserialize)]
@@ -90,8 +98,13 @@ async fn get_span_batch_ranges_from_server(
     let response: SpanBatchResponse =
         client.post(&query_url).json(&request).send().await?.json().await?;
 
+    // If the response contains no ranges, return a single range covering the start and end blocks.
+    if response.ranges.is_none() {
+        return Ok(vec![SpanBatchRange { start, end }]);
+    }
+
     // Return the ranges.
-    Ok(response.ranges)
+    Ok(response.ranges.unwrap())
 }
 
 struct BatchHostCli {
@@ -138,32 +151,39 @@ async fn run_native_data_generation(
 ) -> Vec<BatchHostCli> {
     const CONCURRENT_NATIVE_HOST_RUNNERS: usize = 5;
 
-    let futures = split_ranges.chunks(CONCURRENT_NATIVE_HOST_RUNNERS).map(|chunk| async {
+    // Split the entire range into chunks of size CONCURRENT_NATIVE_HOST_RUNNERS and process chunks
+    // serially. Generate witnesses within each chunk in parallel. This prevents the RPC from
+    // being overloaded with too many concurrent requests, while also improving witness generation
+    // throughput.
+    let batch_host_clis = split_ranges.chunks(CONCURRENT_NATIVE_HOST_RUNNERS).map(|chunk| {
        let mut witnessgen_executor = WitnessGenExecutor::default();

        let mut batch_host_clis = Vec::new();
        for range in chunk.iter() {
-            let host_cli = data_fetcher
-                .get_host_cli_args(range.start, range.end, ProgramType::Multi)
-                .await
-                .unwrap();
+            let host_cli = block_on(data_fetcher.get_host_cli_args(
+                range.start,
+                range.end,
+                ProgramType::Multi,
+            ))
+            .expect("Failed to get host CLI args.");

            batch_host_clis.push(BatchHostCli {
                host_cli: host_cli.clone(),
                start: range.start,
                end: range.end,
            });

-            witnessgen_executor
-                .spawn_witnessgen(&host_cli)
-                .await
+            block_on(witnessgen_executor.spawn_witnessgen(&host_cli))
                .expect("Failed to spawn witness generation process.");
        }

-        witnessgen_executor.flush().await.expect("Failed to flush witness generation.");
+        let res = block_on(witnessgen_executor.flush());
+        if res.is_err() {
+            panic!("Failed to generate witnesses: {:?}", res.err().unwrap());
+        }

        batch_host_clis
    });

-    futures::future::join_all(futures).await.into_iter().flatten().collect()
+    batch_host_clis.into_iter().flatten().collect()
 }
 
 /// Utility method for blocking on an async function.
@@ -224,6 +244,124 @@ fn write_execution_stats_to_csv(
     Ok(())
 }
 
+/// Aggregate the execution statistics for an array of execution stats objects.
+fn aggregate_execution_stats(execution_stats: &[ExecutionStats]) -> ExecutionStats {
+    let mut aggregate_stats = ExecutionStats::default();
+    let mut batch_start = u64::MAX;
+    let mut batch_end = u64::MIN;
+    for stats in execution_stats {
+        batch_start = min(batch_start, stats.batch_start);
+        batch_end = max(batch_end, stats.batch_end);
+
+        // Accumulate most statistics across all blocks.
+        aggregate_stats.execution_duration_sec += stats.execution_duration_sec;
+        aggregate_stats.total_instruction_count += stats.total_instruction_count;
+        aggregate_stats.oracle_verify_instruction_count += stats.oracle_verify_instruction_count;
+        aggregate_stats.derivation_instruction_count += stats.derivation_instruction_count;
+        aggregate_stats.block_execution_instruction_count +=
+            stats.block_execution_instruction_count;
+        aggregate_stats.blob_verification_instruction_count +=
+            stats.blob_verification_instruction_count;
+        aggregate_stats.total_sp1_gas += stats.total_sp1_gas;
+        aggregate_stats.nb_blocks += stats.nb_blocks;
+        aggregate_stats.nb_transactions += stats.nb_transactions;
+        aggregate_stats.eth_gas_used += stats.eth_gas_used;
+        aggregate_stats.bn_pair_cycles += stats.bn_pair_cycles;
+        aggregate_stats.bn_add_cycles += stats.bn_add_cycles;
+        aggregate_stats.bn_mul_cycles += stats.bn_mul_cycles;
+        aggregate_stats.kzg_eval_cycles += stats.kzg_eval_cycles;
+        aggregate_stats.ec_recover_cycles += stats.ec_recover_cycles;
+    }
+
+    // For statistics that are per-block or per-transaction, we take the average over the entire
+    // range.
+    aggregate_stats.cycles_per_block =
+        aggregate_stats.total_instruction_count / aggregate_stats.nb_blocks;
+    aggregate_stats.cycles_per_transaction =
+        aggregate_stats.total_instruction_count / aggregate_stats.nb_transactions;
+    aggregate_stats.transactions_per_block =
+        aggregate_stats.nb_transactions / aggregate_stats.nb_blocks;
+    aggregate_stats.gas_used_per_block = aggregate_stats.eth_gas_used / aggregate_stats.nb_blocks;
+    aggregate_stats.gas_used_per_transaction =
+        aggregate_stats.eth_gas_used / aggregate_stats.nb_transactions;
+
+    // Use the earliest start and latest end across all blocks.
+    aggregate_stats.batch_start = batch_start;
+    aggregate_stats.batch_end = batch_end;
+
+    aggregate_stats
+}
+
+/// Build and manage the Docker container for the span batch server. Note: All logs are piped to
+/// /dev/null, so the user doesn't see them.
+fn manage_span_batch_server_container() -> Result<()> {
+    // Check if port 8080 is already in use.
+    if TcpListener::bind("0.0.0.0:8080").is_err() {
+        info!("Port 8080 is already in use. Assuming span_batch_server is running.");
+        return Ok(());
+    }
+
+    // Build the Docker container if it doesn't exist.
+    let build_status = Command::new("docker")
+        .args([
+            "build",
+            "-t",
+            "span_batch_server",
+            "-f",
+            "proposer/op/Dockerfile.span_batch_server",
+            ".",
+        ])
+        .stdout(Stdio::null())
+        .stderr(Stdio::null())
+        .status()?;
+    if !build_status.success() {
+        return Err(anyhow::anyhow!("Failed to build Docker container"));
+    }
+
+    // Start the Docker container.
+    let run_status = Command::new("docker")
+        .args(["run", "-p", "8080:8080", "-d", "span_batch_server"])
+        .stdout(Stdio::null())
+        .stderr(Stdio::null())
+        .status()?;
+    if !run_status.success() {
+        return Err(anyhow::anyhow!("Failed to start Docker container"));
+    }
+
+    // Sleep for 5 seconds to allow the server to start.
+    block_on(tokio::time::sleep(std::time::Duration::from_secs(5)));
+    Ok(())
+}
+
+/// Shut down the Docker container. Note: All logs are piped to /dev/null, so the user doesn't see them.
+fn shutdown_span_batch_server_container() -> Result<()> {
+    // Get the container ID associated with the span_batch_server image.
+    let container_id = String::from_utf8(
+        Command::new("docker")
+            .args(["ps", "-q", "-f", "ancestor=span_batch_server"])
+            .stdout(Stdio::piped())
+            .output()?
+            .stdout,
+    )?
+    .trim()
+    .to_string();
+
+    if container_id.is_empty() {
+        return Ok(()); // Container not running, nothing to stop.
+    }
+
+    // Stop the container.
+    let stop_status = Command::new("docker")
+        .args(["stop", &container_id])
+        .stdout(Stdio::null())
+        .stderr(Stdio::null())
+        .status()?;
+    if !stop_status.success() {
+        return Err(anyhow::anyhow!("Failed to stop Docker container"));
+    }
+    Ok(())
+}
+
 #[tokio::main]
 async fn main() -> Result<()> {
@@ -231,10 +369,13 @@ async fn main() -> Result<()> {
     dotenv::dotenv().ok();
 
     let args = HostArgs::parse();
 
     let data_fetcher = OPSuccinctDataFetcher::new();
+
     let l2_chain_id = data_fetcher.get_chain_id(ChainMode::L2).await?;
     let rollup_config = RollupConfig::from_l2_chain_id(l2_chain_id).unwrap();
 
-    // TODO: Modify fetch_span_batch_ranges to start up the Docker container.
+    // Start the Docker container if it doesn't exist.
+    manage_span_batch_server_container()?;
+
     let span_batch_ranges = get_span_batch_ranges_from_server(
         &data_fetcher,
         args.start,
@@ -253,5 +394,11 @@ async fn main() -> Result<()> {
     let execution_stats = execute_blocks_parallel(&host_clis, &prover, &data_fetcher).await;
 
     write_execution_stats_to_csv(&execution_stats, l2_chain_id, &args)?;
+    let aggregate_execution_stats = aggregate_execution_stats(&execution_stats);
+    println!("Aggregate Execution Stats: \n {}", aggregate_execution_stats);
+
+    // Shutdown the Docker container for fetching span batches.
+    shutdown_span_batch_server_container()?;
+
     Ok(())
 }
diff --git a/scripts/prove/CYCLE_COUNT.md b/scripts/prove/CYCLE_COUNT.md
deleted file mode 100644
index e69de29b..00000000
diff --git a/utils/host/src/stats.rs b/utils/host/src/stats.rs
index 37efb365..136fc5ad 100644
--- a/utils/host/src/stats.rs
+++ b/utils/host/src/stats.rs
@@ -6,7 +6,7 @@ use serde::{Deserialize, Serialize};
 use sp1_sdk::{CostEstimator, ExecutionReport};
 
 /// Statistics for the multi-block execution.
-#[derive(Debug, Clone, Serialize, Deserialize)]
+#[derive(Debug, Clone, Serialize, Deserialize, Default)]
 pub struct ExecutionStats {
     pub batch_start: u64,
     pub batch_end: u64,
@@ -103,8 +103,12 @@ pub async fn get_execution_stats(
     let kzg_eval_cycles: u64 = *report.cycle_tracker.get("precompile-kzg-eval").unwrap_or(&0);
     let ec_recover_cycles: u64 = *report.cycle_tracker.get("precompile-ec-recover").unwrap_or(&0);
 
-    let cycles_per_block = block_execution_instruction_count / nb_blocks;
-    let cycles_per_transaction = block_execution_instruction_count / nb_transactions;
+    let total_instruction_count = report.total_instruction_count();
+
+    // Cycles per block, transaction are computed with respect to the total instruction count.
+    let cycles_per_block = total_instruction_count / nb_blocks;
+    let cycles_per_transaction = total_instruction_count / nb_transactions;
+
     let transactions_per_block = nb_transactions / nb_blocks;
     let gas_used_per_block = total_gas_used / nb_blocks;
     let gas_used_per_transaction = total_gas_used / nb_transactions;
@@ -113,7 +117,7 @@ pub async fn get_execution_stats(
         batch_start: start,
         batch_end: end,
         execution_duration_sec: execution_duration.as_secs(),
-        total_instruction_count: report.total_instruction_count(),
+        total_instruction_count,
         derivation_instruction_count,
         oracle_verify_instruction_count,
         block_execution_instruction_count,