In this section, we explain how the powdr compiler reduces a program made of virtual and constrained machines to a set of AIRs.

The first step is to reduce virtual machines to constrained machines. This step is run on all machines and does not affect constrained machines. As a result of this step, each machine's functions are reduced to operations, and its registers and instructions are reduced to constraints.

Block enforcement applies to constrained machines. It makes sure that the `operation_id` is constant within each machine block.
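Concretely, this amounts to a constraint of the following shape (a sketch, assuming a `latch` column that marks the last row of each block, as in the constrained machine examples later in this book):

```rust
// The operation id may only change when the latch is active,
// i.e. it stays constant within each block.
(1 - latch) * (operation_id' - operation_id) = 0;
```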
At this point, all machines contain only a degree, constraints, operations, and links to other machines. Let's define AIR as a data structure with only these elements.

Starting from the main machine's type, we create a tree of AIR objects by traversing its submachines, recursively instantiating each machine as an AIR. Let's define the AIR tree as the resulting tree.
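For example, a main machine with two submachines, one of which itself has a submachine, yields an AIR tree of the following shape (the machine names are purely illustrative):

```
main
├── memory
└── arith
    └── byte_check
```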
powdr applies a number of steps in order to reduce a powdr-asm program into PIL. We provide a high-level overview of these steps.

```
                ┌────────────┐                ┌──────────┐
                │            │                │          │
 powdr-asm      │            │    AIR tree   │          │   PIL
───────────────►│  compiler  ├──────────────►│  linker  ├──────►
                │            │                │          │
                │            │                │          │
                └────────────┘                └──────────┘
```
A linker is used to turn an AIR tree into a single PIL file. The linking process operates in the following way:

1. Start from the main machine's AIR. Let `main_degree_range` be its degree.
2. For each AIR in the tree, create a namespace in the PIL file and set its degree:
   - If `degrees-mode` is `vadcop`, set the namespace degree to that of the AIR.
   - If `degrees-mode` is `monolithic`, set the namespace degree to `main_degree_range`.

The result is a single PIL file in which each machine instance lives in its own namespace.
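As a rough sketch, the linked PIL for a main machine with one submachine might look as follows (names and degrees are illustrative; the identities connecting the namespaces are produced from the links):

```rust
namespace main(8);
    // constraints and operations of the main machine,
    // plus identities realizing its links into main_sub
namespace main_sub(16);
    // constraints and operations of the submachine
```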
Symbols can be defined via `let <name> = <value>;`, or via `let <name>: <type> = <value>;` if you want to specify the type explicitly. The value is an arbitrary PIL-expression. For details, see the Declarations section in the PIL part.

Other symbols available in the current module can be accessed by name, but it is also possible to specify full relative paths in the form of e.g. `super::super::module_name::symbol`.

Here are some examples of how to define and use symbols:
```rust
mod utils {
    // This defines a function by means of a lambda expression that
    // computes the sum of an array of values. We fully specify its type.
    let sum: int, int[] -> int = |len, arr| match len {
        0 => 0,
        _ => arr[len - 1] + sum(len - 1, arr)
    };
    // A simple function that returns the input incremented by one,
    // as an expression.
    let incremented: expr -> expr = |x| x + 1;
    // This is a function that takes an expression as input and returns
    // a constraint enforcing this expression increments by a certain value
    // between rows.
    // The type will be inferred here because `'` is only valid on `expr`.
    let constrain_incremented_by = |x, inc| x' = x + inc;
}

machine Main with degree: 4 {
    // Machines create local scopes in the way functions create local scopes:
    // - all symbols in the machine's module are available without prefix,
    // - new symbols can be defined but are only available inside the machine.
    reg A;
    reg pc[@pc];

    // This defines a witness column,
    let x;
    // and now we force it to stay unchanged.
    utils::constrain_incremented_by(x, 0);

    // We define an instruction that uses a complicated way to increment a register.
    instr incr_a { A' = utils::incremented(A) }

    function main {
        incr_a;
        return;
    }
}
```
Field element literals are signed elements of the prime field.

```rust
    CNT <=X= 3;
```
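Since literals are signed, negative values are also allowed; they wrap around the field modulus. A minimal sketch (the register names are taken from the surrounding examples):

```rust
    A <=X= -1; // assigns p - 1, where p is the field modulus
```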
Registers can be used as expressions, with the exception of assignment registers.

```rust
    CNT <=X= CNT - 1;
```

Instructions which return outputs can be used as expressions.

```rust
    A, B <== square_and_double(A);
```
Functions are the entry points to a virtual machine. They can be called from another machine or from the outside.

In this section, we describe functions with this simple virtual machine:
```rust
machine Machine with degree: 16 {
    reg pc[@pc];
    reg X[<=];
    reg Y[<=];
    reg Z[<=];
    reg CNT;
    reg A;
    reg B;

    // an instruction to assert that a number is zero
    instr assert_zero X {
        X = 0
    }

    // an instruction to jump to a label
    instr jmp l: label {
        pc' = l
    }

    // an instruction to jump to a label iff `X` is `0`, otherwise continue
    instr jmpz X, l: label {
        pc' = XIsZero * l + (1 - XIsZero) * (pc + 1)
    }

    // an instruction to return the square of an input as well as its double
    instr square_and_double X -> Y, Z {
        Y = X * X,
        Z = 2 * X
    }

    function main {
        // initialise `A` to 2
        A <=X= 2;
        // initialise `CNT` to `3`
        CNT <=X= 3;
        start:
        // if `CNT` is `0`, jump to `end`
        jmpz CNT, end;
        // decrement `CNT`
        CNT <=X= CNT - 1;
        // get the square and the double of `A`
        A, B <== square_and_double(A);
        // jump back to `start`
        jmp start;
        end:
        // check that `A == ((2**2)**2)**2`
        assert_zero A - ((2**2)**2)**2;
        // check that `B == ((2**2)**2)*2`
        assert_zero B - ((2**2)**2)*2;
        return;
    }

    // some superpowers on `X` to allow us to check if it's 0
    col witness XInv;
    col witness XIsZero;
    XIsZero = 1 - X * XInv;
    XIsZero * X = 0;
    XIsZero * (1 - XIsZero) = 0;
}
```
> Function inputs and outputs are not supported yet.
Labels allow referring to a location in a function by name.

```rust
    start:
```

Assignments allow setting the values of some write registers to the values of some expressions, using assignment registers.

```rust
    CNT <=X= 3;
```

If the right-hand side of the assignment is an instruction, assignment registers can be inferred and are optional:

```rust
    A, B <== square_and_double(A);
```

This will be inferred to be the same as `A, B <=Y, Z= square_and_double(A);` from the definition of the instruction:

```rust
    instr square_and_double X -> Y, Z {
        Y = X * X,
        Z = 2 * X
    }
```

Instructions which do not return outputs can be used as statements.

```rust
    assert_zero A - ((2**2)**2)**2;
```
powdr-asm is the highest level of abstraction in powdr. It allows defining Instruction Set Architectures (ISAs) using virtual and constrained machines.
Instructions are declared as part of a powdr virtual machine. Once defined, they can be called by any function in this machine. An instruction is composed of a name, optional inputs and outputs, optional links calling into operations of submachines, and an optional set of constraints.

A local instruction is the simplest type of instruction. It is called local because its behavior is defined by a set of constraints over registers and columns of the machine it is defined in.
```rust
instr add X, Y -> Z {
    X + Y = Z
}
```
Instructions may also delegate all or part of their implementation to functions/operations in submachines. Each `link` in an instruction defines the inputs and outputs of a call to a specific function/operation in a submachine.

Assume we have a submachine with a single operation `add`:
```rust
machine SubMachine with
    degree: 32,
    latch: latch,
    operation_id: operation_id
{
    col witness operation_id;
    col fixed latch = [1]*;

    operation add<0> x, y -> z;

    col witness x;
    col witness y;
    col witness z;
    z = y + x;
}
```
An instruction calling into this operation can be declared as follows:

```rust
    SubMachine submachine;

    instr add X, Y -> Z link => Z = submachine.add(X, Y); // trivial usage: only instruction inputs/outputs used in the call
```

In the previous example, only assignment registers (instruction inputs and outputs) were used to call the submachine. The following example shows more complex usage of `link` calls:
```rust
machine Main with degree: 32 {
    reg pc[@pc];
    reg X[<=];
    reg Y[<=];
    reg Z[<=];
    reg A;
    reg B;
    reg C;

    SubMachine submachine;

    instr add X, Y -> Z link => Z = submachine.add(X, Y);  // trivial usage: only instruction inputs/outputs used in the call
    instr add_to_A X, Y link => A' = submachine.add(X, Y); // output to a regular register
    instr addAB -> X link => X = submachine.add(A, B);     // inputs from regular registers
    instr addAB_to_C link => C' = submachine.add(A, B);    // inputs and output from regular registers
    instr addAB_to_A link => A' = submachine.add(A, B);    // reusing an input register as output
    instr sub X, Y -> Z link => X = submachine.add(Y, Z);  // swapping input/output
    // expressions can also be used as call parameters
    instr add5 X -> Z link => Z = submachine.add(X, 3+2);  // literal expression as argument
    col fixed STEP(i) { i };
    instr add_current_time_step X -> Z link => Z = submachine.add(X, STEP); // machine columns can be referenced
    let arr = [1,2,3,4,5]; // functions can be used
    instr add_arr_sum X -> Z link => Z = submachine.add(X, std::array::sum(arr));

    instr assert_eq X, Y { X = Y }

    function main {
        A <== add(2, 3);
        assert_eq A, 5;
        add_to_A 6, 7;
        assert_eq A, 13;

        A <== sub(6, 5);
        assert_eq A, 1;
        B <=X= 20;
        C <== addAB();
        assert_eq C, 21;

        A <=X= 2;
        B <=X= 3;
        addAB_to_C;
        assert_eq C, 5;

        A <=X= 33;
        B <=X= 44;
        addAB_to_A;
        assert_eq A, 77;

        A <== add5(2);
        assert_eq A, 7;
        A <== add_arr_sum(3);
        assert_eq A, 18;

        // Note that the result of this operation depends on when it is executed (STEP column)
        A <== add_current_time_step(42);
        B <== add_current_time_step(42);
        assert_eq B - A, 1;

        return;
    }
}
```

A single instruction can activate multiple links, and may also include a set of constraints. Furthermore, each link can be activated conditionally, based on a given boolean flag:
```rust
machine Main with degree: 16 {
    reg pc[@pc];
    reg X[<=];
    reg Y[<=];
    reg Z[<=];
    reg W[<=];
    reg A;
    col witness B;
    col witness C;

    SubMachine submachine;

    // multiple links can be activated by a single instruction,
    // witness columns can be used for temporary values,
    // and additional constraints can be used
    instr double_then_mul X, Y -> Z
        link => B = submachine.add(X, X)
        link => C = submachine.add(Y, Y)
    {
        Z = B * C
    }

    // links activated conditionally on a boolean flag
    instr add_or_sub W, X, Y -> Z
        link if W => Z = submachine.add(X, Y)
        link if (1 - W) => Z = submachine.sub(X, Y);

    instr assert_eq X, Y { X = Y }

    function main {
        A <== double_then_mul(3, 2);
        assert_eq A, 24;

        A <== add_or_sub(1, 3, 2);
        assert_eq A, 5;
        A <== add_or_sub(0, 3, 2);
        assert_eq A, 1;

        return;
    }
}
```

> Note that links cannot currently call functions from the same machine: they delegate computation to a submachine.

Links enable a constrained machine to call into another machine. They are defined by a call to an operation, where inputs and outputs are expressions. An optional boolean flag restricts the rows in which the link is active. Links without a boolean flag are active in every row.

```rust
machine Add4 with
    degree: 32,
    latch: latch,
    operation_id: operation_id
{
    Add adder;

    operation add4<0> x, y, z, w -> r;

    // Links without a flag are active on every row.
    // - constrain the values of `x`, `y`, and `n` so that `n = adder.add(x, y)`
    link => n = adder.add(x, y);
    // - constrain the values of `z`, `w`, and `m` so that `m = adder.add(z, w)`
    link => m = adder.add(z, w);
    // - constrain the values of `m`, `n` and `r` so that `r = adder.add(m, n)`
    link => r = adder.add(m, n);

    col fixed operation_id = [0]*;
    col fixed latch = [1]*;

    col witness x;
    col witness y;
    col witness z;
    col witness w;
    col witness r;
    col witness m;
    col witness n;
}

machine Add with
    degree: 32,
    latch: latch
{
    // operation name, with column names as inputs and outputs
    operation add a, b -> c;

    col fixed latch = [1]*;

    col witness a;
    col witness b;
    col witness c;

    // constraint "implementing" the operation
    c = a + b;
}
```

If a boolean flag is given, the link is only active in rows where the flag evaluates to `1`. Whenever a link is active, the columns mapped as inputs and outputs are constrained by the operation implementation. The following example demonstrates how to use links with flags.

```rust
col fixed odd_row = [0, 1]*;

link if odd_row => z = submachine.foo(x, y);       // active on odd rows only
link if (1 - odd_row) => z = submachine.bar(x, y); // active on even rows only
```

Machines are the first main concept in powdr-asm. They can currently be of two types: virtual or constrained.

Virtual machines are defined by a set of registers, a set of instructions, and a set of functions.

An example of a simple virtual machine is the following:

```rust
machine HelloWorld with degree: 8 {
    reg pc[@pc];
    reg X[<=];
    reg Y[<=];
    reg A;

    instr incr X -> Y {
        Y = X + 1
    }

    instr decr X -> Y {
        Y = X - 1
    }

    instr assert_zero X {
        X = 0
    }

    function main {
        // assign the first prover input to A
        A <=X= ${ std::prelude::Query::Input(0, 1) };

        // increment A
        A <== incr(A);

        // decrement A
        A <== decr(A);

        // assert that A is zero
        assert_zero A;

        return;
    }
}
```
Constrained machines are a lower-level type of machine. They do not have registers, and instead rely on simple committed and fixed columns. They are used to implement hand-optimized computation.

They are defined by:

- an `operation_identifier` column, used to make constraints conditional over which function is called. It can be omitted with `_` if the machine has at most one operation.
- a `latch` column, used to identify rows at which the machine can be accessed from the outside (where the inputs and outputs are passed). It can be omitted if the machine has no operations.

An example of a simple constrained machine is the following:

```rust
machine SimpleStatic with
    degree: 8,
    latch: latch,
    operation_id: operation_id
{
    operation power_4<0> x -> y;

    col fixed operation_id = [0]*;
    col fixed latch = [0, 0, 0, 1]*;
    col witness x;
    col witness y;

    // initialise y to x at the beginning of each block
    latch * (y' - x') = 0;
    // x is unconstrained at the beginning of the block

    // x is constant within a block
    (1 - latch) * (x' - x) = 0;
    // y is multiplied by x at each row
    (1 - latch) * (y' - x * y) = 0;
}
```

For more details on the powdr-pil statements, check out the pil section of this book. Note that the parameters of the operation are columns defined in powdr-pil statements.

Machines can have submachines which they access by defining external instructions or links. They are declared as follows:

```rust
machine MySubmachine {
    ...
}

machine MyMachine {
    MySubmachine my_submachine;
}
```

Machines can also receive submachines as construction parameters. A machine passed in as an argument can be accessed in the same way as locally declared submachines:

```rust
machine MachineWithParam(subm: MySubmachine) {
    // `subm` can be accessed as a submachine
    ...
}

machine MyMachine {
    MySubmachine my_submachine;
    // `my_submachine` is passed to `another_submachine` as a construction argument
    MachineWithParam another_submachine(my_submachine);
}
```
powdr exposes a module system to help organise and reuse code.
```rust
use my_module::Other as LocalOther;

// we can define a module at `./submodule.asm`
mod submodule;

// we can define a module at `./submodule_in_folder/mod.asm`
mod submodule_in_folder;

use submodule::Other as SubmoduleOther;
use submodule_in_folder::Other as FolderSubmoduleOther;

let zero: int = 0;

// we can also define modules inline
mod utils {
    // Each module has a fresh symbol list. Every external symbol needs to be imported,
    // even from the parent module.
    use super::zero;

    let one = zero + 1;
}

machine Main with degree: 8 {
    // use a machine from another module by relative path
    my_module::Other a;

    // use a machine from another module using a local binding
    LocalOther b;

    // use a machine from another module defined in a different file
    SubmoduleOther c;

    // use a machine from another module defined in a different directory
    FolderSubmoduleOther d;

    reg pc[@pc];

    instr nothing link => a.nothing();
    instr also_nothing link => b.nothing();
    instr still_nothing link => c.nothing();
    instr nothing_again link => d.nothing();

    function main {
        nothing;
        also_nothing;
        still_nothing;
        nothing_again;
        return;
    }
}

mod my_module {
    machine Other with
        degree: 8,
        latch: latch,
        operation_id: operation_id
    {
        operation nothing<0>;

        col fixed latch = [1]*;
        col fixed operation_id = [0]*;
    }
}
```

Note that a module can't be called `std`, as this name is reserved for the powdr standard library.

Similar to Rust, any reference that cannot be resolved is looked up once more in `std::prelude`. This module exposes basic types and values such as `Option`, `true` and `false`. This means that you can use `Option` anywhere without prefix.
+machine Add with
+ degree: 32,
+ latch: latch
+{
+ // operation name, with column names as inputs and outputs
+ operation add a, b -> c;
+
+ col fixed latch = [1]*;
+
+ col witness a;
+ col witness b;
+ col witness c;
+
+ // constraint "implementing" the operation
+ c = a + b;
+}
+The parameters of the operation (inputs and outputs) must be columns declared in the machine.
+If a machine exposes more than one operation, the machine itself needs an operation id column (op_id
in the following).
+Then, each operation needs to be declared with its own unique operation id:
// machine declaration must include an operation id column name
+machine AddSub with
+ degree: 32,
+ latch: latch,
+ operation_id: op_id
+{
+ // each operation has its own unique operation id
+ operation add<0> a, b -> c;
+ operation sub<1> a, b -> c;
+
+ col fixed latch = [1]*;
+ // it also needs to be declared as a column
+ col witness op_id;
+
+ col witness a;
+ col witness b;
+ col witness c;
+
+ // constraint "implementing" both operations, depending on `op_id`
+ c = (1 - op_id) * (a + b) + op_id * (a - b);
+}
+The actual behavior of an operation is defined by the machine constraints on the columns used as inputs and outputs.
+ +Registers are central to a machine. powdr supports a few types of registers:
+Each machine can have at most one program counter. In the absence of a program counter, the machine is considered static, and no other register can be declared. The program counter is defined as follows:
+reg pc[@pc]
+At each step execution step, the program counter points to the function line to execute. +The program counter behaves like a write register, with the exception that its value is incremented by default after each step.
+Write registers are the default type for registers. They are declared as follows:
+reg A;
+They hold a field element, are initialized as 0 at the beginning of a function and keep their value by default. They can be read from and written to.
+// write to A
+A <=X= 1;
+// A is 1
+
+// read from A
+B <=X= A;
+// A is still 1
+Assignment registers are transient to an execution step: their value is not persisted across steps. They are required in order to pass inputs and receive outputs from instructions, as well as in assignments.
+For example, if we want to assert that write register A
is 0
, we can use the following instruction:
reg pc[@pc];
+reg A;
+
+instr assert_A_is_zero {
+ A = 0
+}
+
+function main {
+ assert_A_is_zero;
+ return;
+}
+However, if we want the instruction to accept any write register as input, we use an assignment register.
+reg pc[@pc];
+reg X[<=];
+reg A;
+
+instr assert_zero X {
+ X = 0
+}
+
+function main {
+ assert_zero A;
+ return;
+}
+Read-only registers are used for function inputs. However, powdr creates them automatically based on functions arguments, so that they do not need to be declared explicitly.
+++ +Read-only registers are only mentioned for completeness here and are currently only used inside the compiler. We advise against using them.
+
powdr supports the eSTARK proof system with the Goldilocks field, +implemented by the starky library from eigen-zkvm.
+ +powdr supports the PSE fork of halo2 with the bn254 field.
+ +powdr aims to have full flexibility when it comes to generating proofs and comes with a few built-in backends to get started with zkVMs.
+ +powdr
This document contains the help content for the powdr
command-line program.
Command Overview:
+powdr
↴powdr pil
↴powdr prove
↴powdr verify
↴powdr verification-key
↴powdr export-verifier
↴powdr setup
↴powdr reformat
↴powdr optimize-pil
↴powdr test
↴powdr
powdr CLI to compile powdr-asm and powdr-pil programs
+Usage: powdr [OPTIONS] [COMMAND]
pil
— Runs compilation and witness generation for .pil and .asm files. First converts .asm files to .pil, if needed. Then converts the .pil file to json and generates fixed and witness column data filesprove
—verify
—verification-key
—export-verifier
—setup
—reformat
— Parses and prints the PIL file on stdoutoptimize-pil
— Optimizes the PIL file and outputs it on stdouttest
— Executes all functions starting with test_
in every module called test
(or sub-module thereof) starting from the given module--log-level <LOG_LEVEL>
— Set log filter value [ off, error, warn, info, debug, trace ]
Default value: INFO
powdr pil
Runs compilation and witness generation for .pil and .asm files. First converts .asm files to .pil, if needed. Then converts the .pil file to json and generates fixed and witness column data files
+Usage: powdr pil [OPTIONS] <FILE>
<FILE>
— Input file--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
-o
, --output-directory <OUTPUT_DIRECTORY>
— Output directory for the PIL file, json file and fixed and witness column data
Default value: .
-w
, --witness-values <WITNESS_VALUES>
— Path to a CSV file containing externally computed witness values
-i
, --inputs <INPUTS>
— Comma-separated list of free inputs (numbers)
Default value: ``
+-f
, --force
— Force overwriting of PIL output file
Default value: false
--pilo
— Whether to output the pilo PIL object
Default value: false
-p
, --prove-with <PROVE_WITH>
— Generate a proof with a given backend
Possible values: mock
, halo2
, halo2-composite
, halo2-mock
, halo2-mock-composite
, plonky3
, plonky3-composite
--params <PARAMS>
— File containing previously generated setup parameters
--backend-options <BACKEND_OPTIONS>
— Backend options. Halo2: "poseidon", "snark_single" or "snark_aggr". EStark and PilStarkCLI: "stark_gl", "stark_bn" or "snark_bn"
--linker-mode <LINKER_MODE>
— Linker mode, deciding how to reduce links to constraints
Possible values: native
, bus
--degree-mode <DEGREE_MODE>
— Degree mode, deciding whether to use a single monolithic table or a set of dynamically sized tables
--export-witness-csv
— Generate a CSV file containing the witness column values
Default value: false
--export-all-columns-csv
— Generate a CSV file containing all fixed and witness column values. Useful for debugging purposes
Default value: false
--csv-mode <CSV_MODE>
— How to render field elements in the csv file
Default value: hex
Possible values: i
, ui
, hex
powdr prove
Usage: powdr prove [OPTIONS] --backend <BACKEND> <FILE>
<FILE>
— Input PIL file-d
, --dir <DIR>
— Directory to find the committed and fixed values
Default value: .
--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
-b
, --backend <BACKEND>
— Generate a proof with a given backend
Possible values: mock
, halo2
, halo2-composite
, halo2-mock
, halo2-mock-composite
, plonky3
, plonky3-composite
--backend-options <BACKEND_OPTIONS>
— Backend options. Halo2: "poseidon", "snark_single" or "snark_aggr". EStark and PilStarkCLI: "stark_gl", "stark_bn" or "snark_bn"
--proof <PROOF>
— File containing previously generated proof for aggregation
--vkey <VKEY>
— File containing previously generated verification key
--vkey-app <VKEY_APP>
— File containing the verification key of a proof to be verified recursively
--params <PARAMS>
— File containing previously generated setup parameters
powdr verify
Usage: powdr verify [OPTIONS] --backend <BACKEND> --proof <PROOF> --vkey <VKEY> <FILE>
<FILE>
— Input PIL file-d
, --dir <DIR>
— Directory to find the fixed values
Default value: .
--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
-b
, --backend <BACKEND>
— Generate a proof with a given backend
Possible values: mock
, halo2
, halo2-composite
, halo2-mock
, halo2-mock-composite
, plonky3
, plonky3-composite
--backend-options <BACKEND_OPTIONS>
— Backend options. Halo2: "poseidon", "snark_single" or "snark_aggr". EStark and PilStarkCLI: "stark_gl", "stark_bn" or "snark_bn"
--proof <PROOF>
— File containing the proof
--publics <PUBLICS>
— Comma-separated list of public inputs (numbers)
Default value: ``
+--vkey <VKEY>
— File containing the verification key
--params <PARAMS>
— File containing the params
powdr verification-key
Usage: powdr verification-key [OPTIONS] --backend <BACKEND> <FILE>
<FILE>
— Input PIL file-d
, --dir <DIR>
— Directory to find the fixed values
Default value: .
--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
-b
, --backend <BACKEND>
— Chosen backend
Possible values: mock
, halo2
, halo2-composite
, halo2-mock
, halo2-mock-composite
, plonky3
, plonky3-composite
--backend-options <BACKEND_OPTIONS>
— Backend options. Halo2: "poseidon", "snark_single" or "snark_aggr". EStark and PilStarkCLI: "stark_gl", "stark_bn" or "snark_bn"
--params <PARAMS>
— File containing previously generated setup parameters. This will be needed for SNARK verification keys but not for STARK
--vkey-app <VKEY_APP>
— File containing the verification key of a proof to be verified recursively
powdr export-verifier
Usage: powdr export-verifier [OPTIONS] --backend <BACKEND> <FILE>
<FILE>
— Input PIL file-d
, --dir <DIR>
— Directory to find the fixed values
Default value: .
--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
-b
, --backend <BACKEND>
— Chosen backend
Possible values: mock
, halo2
, halo2-composite
, halo2-mock
, halo2-mock-composite
, plonky3
, plonky3-composite
--backend-options <BACKEND_OPTIONS>
— Backend options. Halo2: "poseidon", "snark_single" or "snark_aggr". EStark and PilStarkCLI: "stark_gl", "stark_bn" or "snark_bn"
--params <PARAMS>
— File containing previously generated setup parameters. This will be needed for SNARK verification keys but not for STARK
--vkey <VKEY>
— File containing previously generated verification key
powdr setup
Usage: powdr setup [OPTIONS] --backend <BACKEND> <SIZE>
<SIZE>
— Size of the parameters-d
, --dir <DIR>
— Directory to output the generated parameters
Default value: .
--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
-b
, --backend <BACKEND>
— Generate a proof with a given backend
Possible values: mock
, halo2
, halo2-composite
, halo2-mock
, halo2-mock-composite
, plonky3
, plonky3-composite
powdr reformat
Parses and prints the PIL file on stdout
+Usage: powdr reformat <FILE>
<FILE>
— Input filepowdr optimize-pil
Optimizes the PIL file and outputs it on stdout
+Usage: powdr optimize-pil [OPTIONS] <FILE>
<FILE>
— Input file--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
powdr test
Executes all functions starting with test_
in every module called test
(or sub-module thereof) starting from the given module
Usage: powdr test [OPTIONS] <FILE>
<FILE>
— Input file--field <FIELD>
— The field to use
Default value: gl
Possible values: bb
, kb
, m31
, gl
, bn254
+This document was generated automatically by
+clap-markdown
.
+
Besides the Hello World examples, you are encouraged to check out these more +complex and real-world use cases:
+ + +While any frontend VM can be implemented in powdr-asm, powdr comes with several frontends for popular instruction set architectures.
+ +A RISCV frontend for powdr is already available.
+# Install the riscv target for the rust compiler
+rustup target add riscv32imac-unknown-none-elf
+# Run the powdr-rs compiler. It will generate files in ./output/
+powdr-rs compile riscv/tests/riscv_data/sum -o output
+# Run powdr to compile powdr-asm to powdr-PIL and generate the witness
+# -i specifies the prover witness input (see below)
+powdr pil output/sum.asm -o output -f -i 10,2,4,6
+
+The example Rust code verifies that a supplied list of integers sums up to a specified value.
+#![no_main]
+#![no_std]
+
+extern crate alloc;
+use alloc::vec::Vec;
+
+use powdr_riscv_runtime::io::read_u32;
+
+#[no_mangle]
+pub fn main() {
+ // This is the sum claimed by the prover.
+ let proposed_sum = read_u32(0);
+ // The number of integers we want to sum.
+ let len = read_u32(1) as usize;
+ // Read the numbers from the prover and store them
+ // in a vector.
+ let data: Vec<_> = (2..(len + 2)).map(|idx| read_u32(idx as u32)).collect();
+ // Compute the sum.
+ let sum: u32 = data.iter().sum();
+ // Check that our sum matches the prover's.
+ assert_eq!(sum, proposed_sum);
+}
+The function read_u32
reads a number from the list supplied with -i
.
This is just a first mechanism to provide access to the outside world.
+The plan is to be able to call arbitrary user-defined ffi
functions that will translate to prover queries,
+and can then ask for e.g. the value of a storage slot at a certain address or the root hash of a Merkle tree.
ZK-continuations can be used to make proofs for unbounded execution traces of
+Rust programs, where the trace is split into many different chunks
. A proof
+is computed for each chunk, and all proofs can be combined with
+recursion/aggregation until a single proof remains.
For the details of how memory is handled in the ZK-continuations case please +see this.
+powdr-rs
has experimental support to ZK-continuations, which can be used as follows.
Let's use as example test many chunks
from the riscv
crate:
#![no_main]
+#![no_std]
+
+extern crate alloc;
+extern crate powdr_riscv_runtime;
+use alloc::vec::Vec;
+
+#[no_mangle]
+pub fn main() {
+ let mut foo = Vec::new();
+ foo.push(1);
+
+ for _ in 0..100 {
+ foo.push(foo.iter().sum());
+ }
+
+ // Compute some fibonacci numbers
+ // -> Does not access memory but also does not get optimized out...
+ let mut a = 1;
+ let mut b = 1;
+ for _ in 0..150000 {
+ let tmp = a + b;
+ a = b;
+ b = tmp;
+ }
+ // Don't optimize me away :/
+ assert!(a > 0);
+}
+First we need to compile the Rust code to powdr-asm:
+powdr-rs compile riscv/tests/riscv_data/many_chunks
+
+Now we can use powdr's RISCV executor to estimate how many cycles are needed:
+powdr-rs execute many_chunks.asm
+
+...
+Execution trace length: 750329
+
+By default, powdr-RISCV uses chunks of length 2^18. That means we will need at +least 3 chunks.
+For the continuations case, the compiled assembly code looks different because
+of the external memory commitments, so we need to recompile using the
+--continuations
flag:
powdr-rs compile riscv/tests/riscv_data/many_chunks --continuations
+
+We can now execute the program with continuations enabled:
+powdr-rs -- execute many_chunks.asm --continuations
+
+The output now is longer:
+Running chunk 0...
+Building bootloader inputs for chunk 0...
+26 unique memory accesses over 2 accessed pages: {31, 32}
+Estimating the shutdown routine to use 1362 rows.
+Bootloader inputs length: 1285
+Simulating chunk execution...
+Initial memory root hash: 44cd91c12033ad4c6a6b19793b73f1a66d99a0e0bf63494c12ceb1f451ec9452
+Final memory root hash: e6a6fd63d864c734d6a4df4988f733738790262af9c7b014a1c036679baf1125
+Chunk trace length: 260782
+Validating chunk...
+Bootloader used 3134 rows.
+ => 257648 / 262144 (98%) of rows are used for the actual computation!
+Proved 257647 rows.
+
+Running chunk 1...
+Building bootloader inputs for chunk 1...
+0 unique memory accesses over 0 accessed pages: {}
+Estimating the shutdown routine to use 42 rows.
+Bootloader inputs length: 83
+Simulating chunk execution...
+Initial memory root hash: e6a6fd63d864c734d6a4df4988f733738790262af9c7b014a1c036679baf1125
+Final memory root hash: e6a6fd63d864c734d6a4df4988f733738790262af9c7b014a1c036679baf1125
+Chunk trace length: 262102
+Validating chunk...
+Bootloader used 62 rows.
+ => 262040 / 262144 (99%) of rows are used for the actual computation!
+Proved 262039 rows.
+
+Running chunk 2...
+Building bootloader inputs for chunk 2...
+2 unique memory accesses over 1 accessed pages: {31}
+Estimating the shutdown routine to use 702 rows.
+Bootloader inputs length: 684
+Simulating chunk execution...
+Initial memory root hash: e6a6fd63d864c734d6a4df4988f733738790262af9c7b014a1c036679baf1125
+Final memory root hash: e6a6fd63d864c734d6a4df4988f733738790262af9c7b014a1c036679baf1125
+Chunk trace length: 232057
+Validating chunk...
+Bootloader used 1610 rows.
+ => 259832 / 262144 (99%) of rows are used for the actual computation!
+Done!
+
+The step above is informational, but for the proofs we also need the full witness:
+powdr-rs execute many_chunks.asm --witness
+
+The witnesses are written in ./chunk_0/commits.bin
, ./chunk_1/commits.bin
+and ./chunk_2/commits.bin
.
Now that we have the witnesses for all chunks, we can use `powdr` (instead of
`powdr-rs`) to compute proofs for each chunk:

```console
powdr prove many_chunks.asm -d chunk_0 --backend estark-dump
powdr prove many_chunks.asm -d chunk_1 --backend estark-dump
powdr prove many_chunks.asm -d chunk_2 --backend estark-dump
```
These proofs are mock proofs for the sake of the example, but any backend
should work here. After generating real proofs, each specific proof system can
be used for the recursion/aggregation parts. A follow-up tutorial on that is
coming soon.
Let's write a minimal VM and generate proofs!
```rust
machine HelloWorld with degree: 8 {
    reg pc[@pc];
    reg X[<=];
    reg Y[<=];
    reg A;

    instr incr X -> Y {
        Y = X + 1
    }

    instr decr X -> Y {
        Y = X - 1
    }

    instr assert_zero X {
        X = 0
    }

    function main {
        // assign the first prover input to A
        A <=X= ${ std::prelude::Query::Input(0, 1) };

        // increment A
        A <== incr(A);

        // decrement A
        A <== decr(A);

        // assert that A is zero
        assert_zero A;

        return;
    }
}
```
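To see concretely why only input `0` is accepted, here is how register `A` evolves during `main` (a hand-written illustration, not executor output):

| Step | Operation | `A` afterwards |
|------|-----------|----------------|
| 1 | read prover input `0` | `0` |
| 2 | `incr(A)` | `1` |
| 3 | `decr(A)` | `0` |
| 4 | `assert_zero A` | passes, since `A = 0` |

For any other input `x`, the final value is `x + 1 - 1 = x != 0`, so the assertion has no satisfying witness.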
Let's go through different possible usages of powdr, starting with the powdr CLI.
Let's generate a proof of execution for the valid prover input `0` (since
`0 + 1 - 1 == 0`):

```console
powdr pil test_data/asm/book/hello_world.asm --field bn254 --inputs 0 --prove-with halo2
```
We observe that several artifacts are created in the current directory:

- `hello_world.pil`: the compiled PIL file.
- `hello_world_opt.pil`: the optimized PIL file.
- `hello_world_constants.bin`: the computed fixed columns, which only have to
  be computed once per PIL file.
- `hello_world_commits.bin`: the computed witness, which needs to be computed
  for each proof.
- `hello_world_proof.bin`: the ZK proof!

> Note that the output directory can be specified with option `-o|--output`,
> and `.` is used by default.
Now let's try the invalid input `1`:

```console
powdr pil test_data/asm/book/hello_world.asm --field bn254 --inputs 1 --prove-with halo2
```

In this case witness generation fails (since `1 + 1 - 1 != 0`), and no proof
is created.
The example above omits some important steps in proof generation: Setup and
verification key generation. Some proof systems, such as Halo2, require a
Setup to be performed before the proof. Such a Setup can be specific to the
program or universal, and its artifact is a binary usually called
`parameters` or `params`. STARKs do not require a Setup.
Another step required before the proof is computed is key generation. A
*proving key* and a *verification key* are generated taking into account the
constraints and potentially the Setup parameters. The proving key is used by
the prover to generate the proof, and the verification key is used by the
verifier to verify such a proof. A single verification key can be used to
verify any number of different proofs for a given program.
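Schematically, the lifecycle described above looks as follows (illustrative stub types and functions, not powdr's API):

```rust
// Illustrative stubs only; powdr's real API differs.
struct Params;          // Setup artifact, e.g. params.bin
struct ProvingKey;      // used only by the prover
struct VerificationKey; // reusable across all proofs of the same program
struct Proof;

// Keys are derived once from the constraints (and, where applicable,
// the Setup parameters).
fn keygen(_params: &Params) -> (ProvingKey, VerificationKey) {
    (ProvingKey, VerificationKey)
}

fn prove(_pk: &ProvingKey, _inputs: &[u64]) -> Proof {
    Proof // a real prover computes the proof from the witness
}

fn verify(_vk: &VerificationKey, _proof: &Proof) -> bool {
    true // a real verifier checks the proof against the key
}

fn main() {
    let params = Params; // from `powdr setup`
    let (pk, vk) = keygen(&params); // once per program
    let proof_a = prove(&pk, &[0]);
    let proof_b = prove(&pk, &[0]);
    // One verification key verifies any number of proofs.
    assert!(verify(&vk, &proof_a) && verify(&vk, &proof_b));
}
```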
Therefore, when computing a proof, it is important that the Setup parameters
and the verification key are available as artifacts.

Below we reproduce the same proof as in the example above, keeping the
artifacts needed for verification.
First we run the Setup, where the number given must match the `degree` of our
source file (degree `8`). The command below generates the file `params.bin`:

```console
powdr setup 8 --backend halo2 --field bn254
```
We can now compute the verification key, output in `vkey.bin`:

```console
powdr verification-key test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --params "params.bin"
```
> The command above can read previously generated constants from the directory
> specified via `-d|--dir`, where `.` is used by default. If the constants are
> not present, it computes them before generating the verification key.
The next command compiles and optimizes the given source, generating the file
`hello_world_opt.pil`. It also computes both the fixed data and the witness
needed for the proof, stored respectively in `hello_world_constants.bin` and
`hello_world_commits.bin`:

```console
powdr pil test_data/asm/book/hello_world.asm --field bn254 --force --inputs 0
```
We can now generate the proof:

```console
powdr prove test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --params "params.bin" --vkey "vkey.bin"
```

The proof can be verified by anyone via:

```console
powdr verify test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --vkey "vkey.bin" --params "params.bin" --proof "hello_world_proof.bin"
```

> Note that CLI proof verification works analogously for eSTARK, without the
> Setup step and using the Goldilocks field instead of Bn254.
Another aspect that was omitted in this example is the fact that this proof
uses a Poseidon transcript and cannot be verified cheaply on Ethereum, even
though we can verify it efficiently via powdr. There are two ways to enable
verification on Ethereum:

- generate the proof directly with a Keccak transcript, so it can be verified
  onchain (covered in the next example);
- keep the Poseidon transcript and wrap the proof in a recursive proof that
  can be verified onchain (covered in the example after that).
This example is a variation of the previous Hello World, targeting
verification on Ethereum. Here we cover how to generate proofs directly using
a Keccak transcript instead of the Poseidon transcript of the previous
example, which enables us to verify proofs onchain. There are almost no
differences from the CLI perspective.
Since the following command creates a proof, a Solidity verifier, and verifies
the proof on the EVM, we need to have `solc` available in the system. One easy
way to install it is by using svm.

```console
powdr pil test_data/asm/book/hello_world.asm --field bn254 --inputs 0 --prove-with halo2 --backend-options "snark_single" -f --params params.bin
```
The extra parameter `--backend-options snark_single` tells powdr to produce a
single SNARK that uses Keccak. The option `-f` forces overwriting the compiled
files that have been generated before, which is useful if you are running the
examples in the same directory.
When the proof is generated, it is verified on a simulated EVM transaction as well!

```
Verifying SNARK in the EVM...
```

We can observe again that a proof was created at `hello_world_proof.bin`.
Now we can generate a Solidity verifier, using the same setup (`params`) as in
the previous example:

```console
powdr export-verifier test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --backend-options "snark_single" --params params.bin
```
A Solidity file `verifier.sol` is generated. You can verify the same proof
that we just generated by passing it to the contract together with the public
inputs (which we have not used so far).
> Note that the more complex your VM is, the larger the verifier contract will
> be. In some cases, it might exceed Ethereum's contract size limit. You can
> mitigate that by using proof recursion on proofs that use Poseidon
> transcripts. See the next section for details.
As noted in the previous section, complex VMs can lead to large Solidity
verifiers that exceed the contract size limit on Ethereum. One solution to
that problem is to create proofs using the Poseidon transcript, as we did in
the first example, and then use proof recursion to create a proof that we know
we will be able to verify on Ethereum.
A recursive SNARK works by generating a proof for a circuit that verifies
another proof. The circuit we are using here to prove our initial proof
recursively is PSE's snark-verifier. This circuit is large enough to be able
to prove complex programs that were initially proven with the Poseidon
transcript, like our first example. Because of that, our aggregation setup
`params` and verification key are going to be larger than before and take
longer to compute. The good news is that (i) we can use a pre-computed setup
from a previous ceremony, and (ii) the verification key only has to be
computed once per program.
First, we need a setup of "size" 2^22 (that is, 4194304 rows). This is the
maximum execution trace length of the recursion circuit. You can generate it
using the command below, or download a pre-computed one here. This will take a
couple of minutes if you decide to compute it yourself:

```console
powdr setup 4194304 --backend halo2 --field bn254
```
We can re-use the new large `params.bin` for both the initial and the
recursive proofs. Let's start by re-computing our Poseidon proof as in the
first example, but using our new setup file:

```console
powdr pil test_data/asm/book/hello_world.asm --field bn254 --inputs 0 --prove-with halo2 --backend-options "poseidon" -f --params params.bin
```
+
+This generates the initial proof hello_world_proof.bin
.
+We'll also need a verification key:
powdr verification-key test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --backend-options "poseidon" --params params.bin
+
Let's verify the proof with our fresh verification key to make sure we're on
the right track:

```console
powdr verify test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --backend-options "poseidon" --params params.bin --vkey vkey.bin --proof hello_world_proof.bin
```
In order to avoid confusion between the application artifacts that we've just
generated and the recursion artifacts we're going to generate later, let's
rename them:

```console
mv vkey.bin vkey_app.bin
mv hello_world_proof.bin hello_world_proof_app.bin
```
We can now generate a verification key for the Halo2 circuit that verifies our
proofs recursively. This might take up to a minute:

```console
powdr verification-key test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --backend-options "snark_aggr" --params params.bin --vkey-app vkey_app.bin
```
> Note that this verification key can only be used to verify recursive proofs
> that verify other proofs using the application's key `vkey_app.bin`.
We can now generate the recursive proof (this typically takes a few minutes
and uses around 28 GB of RAM):

```console
powdr prove test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --backend-options "snark_aggr" --params params.bin --vkey vkey.bin --vkey-app vkey_app.bin --proof hello_world_proof_app.bin
```
We have a proof! Note that it contains two fields, `proof` and `publics`. The
`proof` object contains the binary encoding of the proof points, and the
`publics` object contains the public accumulator limbs that we need in order
to verify the recursive proof.
We can now verify the proof, using the `publics` object as input (your numbers
will be different):

```console
powdr verify test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --backend-options "snark_aggr" --params params.bin --vkey vkey.bin --proof hello_world_proof_aggr.bin --publics "269487626280642378794,9378970522278219882,62304027188881225691,811176493438944,234778270138968319485,3212529982775999134,171155758373806079356,207910400337448,188563849779606300850,155626297629081952942,194348356185923309508,433061951018270,34598221006207900280,283775241405787955338,79508596887913496910,354189825580534"
```
Since the goal of the recursive proof was to be able to verify it on Ethereum,
let's do that!

```console
powdr export-verifier test_data/asm/book/hello_world.asm --field bn254 --backend halo2 --backend-options "snark_aggr" --params params.bin --vkey vkey.bin
```
A Solidity verifier is created in `verifier.sol`. The contract expects an
array of the accumulators' limbs followed by a tightly packed proof, where
each accumulator limb uses 32 bytes, and there is no function signature. Note
that this verifier can be used to verify any recursive proof that verifies
exactly one Poseidon proof of the given circuit.
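For concreteness, here is a minimal sketch of how such calldata could be assembled (the helper below is hypothetical and based only on the layout just described):

```rust
/// Hypothetical helper: concatenate the accumulator limbs (one 32-byte
/// word each) with the tightly packed proof bytes. The contract exposes
/// no function signature, so the calldata carries no 4-byte selector.
fn build_calldata(limbs: &[[u8; 32]], proof: &[u8]) -> Vec<u8> {
    let mut calldata = Vec::with_capacity(limbs.len() * 32 + proof.len());
    for limb in limbs {
        calldata.extend_from_slice(limb);
    }
    calldata.extend_from_slice(proof);
    calldata
}
```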
powdrVM is a versatile zkVM built on the powdr stack, supporting std Rust,
multiple proof systems, and zk-continuations for unbounded execution. Designed
for flexibility, powdrVM enables efficient proof generation for complex
applications with minimal setup.

powdrVM is free and open source. You can find the source code on GitHub.
Issues and feature requests can be posted on the GitHub issue tracker.

The powdr source and documentation are released under the MIT License.