OP Succinct
Documentation for OP Succinct users and developers.
OP Succinct transforms any OP Stack rollup into a fully type-1 ZK rollup using SP1. OP Succinct provides:
- 1 hour finality secured by ZKPs, a dramatic improvement over the 7-day withdrawal window of standard OP Stack rollups.
- Unlimited customization: rollup modifications can be written in pure Rust and are easy to maintain.
- Cost-effective proving, with an average cost of only fractions of a cent per transaction (and an expected 5-10x improvement by EOY), thanks to SP1's blazing fast performance.
All of this has been possible thanks to close collaboration with the core team at OP Labs.
Reach out today if you want a Type-1 zkEVM rollup powered by SP1 (either a new rollup or a conversion from an optimistic rollup).
Architecture
Prerequisites
Every OP Stack rollup is composed of four main components.
- `op-geth`: Takes transactions from users and uses them to generate and execute blocks.
- `op-batcher`: Batches transactions from users and submits them to the L1.
- `op-node`: Reads batch data from L1 and uses it to drive `op-geth` in non-sequencer mode to perform state transitions.
- `op-proposer`: Posts an output root to L1 at regular intervals, which captures the L2 state so withdrawals can be processed.
You can read more about the components in the OP Stack Specification.
OP Succinct
OP Succinct is a lightweight upgrade to the OP Stack that allows the chain to progress only with ZK-proven blocks, while keeping the other components (`op-geth`, `op-batcher`, and `op-node`) unchanged. Deploying OP Succinct requires deploying one contract, `OPSuccinctL2OutputOracle`, and spinning up a lightweight modification to the `op-proposer` that requests proofs and submits them to the L1 contract.
Here is a high-level overview of the new components that are introduced in OP Succinct:
- Range Program. A program that derives and executes batches of blocks. The program is written in Rust and designed to be executed inside the zkVM.
- Aggregation Program. Aggregates proofs of range programs to reduce on-chain verification costs. This program is also written in Rust and designed to be executed inside the zkVM.
- OP Succinct L2 Output Oracle. A Solidity smart contract that contains an array of L2 state outputs, where each output is a commitment to the state of the L2 chain. This contract already exists in Optimism's original system, but is modified to use proof verification as the authentication mechanism.
- OP Succinct Proposer. Observes the posted batches on L1 and controls the proving of the range and aggregation programs.
Getting Started
In this section, we'll guide you through upgrading an existing OP Stack chain to a fully type-1 ZK rollup using SP1 and OP Succinct.
The steps are the following:
- Deploy the OP Succinct L2 Output Oracle Contract. This contract is a modified version of the existing `L2OutputOracle` contract that uses SP1 to verify the execution and derivation of the L2 state transitions.
- Start the OP Succinct Proposer. This service is a modified version of the existing `op-proposer` service. It posts output roots to the L2 Output Oracle contract at regular intervals by orchestrating the generation and aggregation of proofs.
- Update your OP Stack Chain Configuration. You will need to update your configuration to upgrade the `L2OutputOracle` contract to the new implementation using your `ADMIN` key.
Prerequisites
Requirements
You must have Rust, Foundry, Docker, and `just` installed, as they are used throughout this guide.
You must have the following RPCs available:
- L1 Archive Node
- L1 Consensus (Beacon) Node
- L2 Execution Node (`op-geth`)
- L2 Rollup Node (`op-node`)
The following RPC endpoints must be accessible:
- L1 Archive Node: `debug_getRawHeader`, `debug_getRawReceipts`, `debug_getRawBlock`
- L2 Execution Node (`op-geth`): must be an archive node with the hash state scheme. `debug_getRawHeader`, `debug_getRawTransaction`, `debug_getRawBlock`, `debug_getExecutionWitness`, `debug_dbGet`
- L2 Rollup Node (`op-node`): `optimism_outputAtBlock`, `optimism_rollupConfig`, `optimism_syncStatus`, `optimism_safeHeadAtL1Block`
If you do not have access to an L2 OP Geth node + rollup node for your OP Stack chain, you can follow the L2 node setup instructions to spin them up.
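As a quick sanity check that your rollup node exposes the required `optimism_*` methods, you can issue a JSON-RPC request directly (using the `L2_NODE_RPC` value you will configure later):

# Should return the current sync status if the op-node RPC is reachable and
# exposes the optimism_* namespace.
curl -H "Content-Type: application/json" -X POST \
  --data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' \
  $L2_NODE_RPC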
OP Stack Chain
The rest of this section will assume you have an existing OP Stack Chain running. If you do not have one, there are two ways you can get started:
- Self-hosted. If you want to run your own OP Stack Chain, please follow Optimism's tutorial first.
- Rollup-as-a-service providers. You can also use an existing RaaS provider, such as Conduit, Caldera, Alchemy, AltLayer or Gelato. Contact them to upgrade your rollup to use OP Succinct.
Deploy OP Succinct L2 Output Oracle
The first step in deploying OP Succinct is to deploy the `OPSuccinctL2OutputOracle` smart contract, which verifies SP1 proofs of the Optimism state transition function and stores the latest verified state roots for the OP Stack rollup.
Overview
The `OPSuccinctL2OutputOracle` contract is a modification of the `L2OutputOracle` contract that is used to verify the state roots of the OP Stack rollup.
Modifications to `L2OutputOracle`
The original `L2OutputOracle` contract can be found here.
The changes introduced in the `OPSuccinctL2OutputOracle` contract are:
- The `submissionInterval` parameter is now the minimum interval in L2 blocks at which checkpoints must be submitted. An aggregation proof can be posted after this interval has passed.
- The addition of the `aggregationVkey`, `rangeVkeyCommitment`, `verifierGateway`, `startingOutputRoot`, and `rollupConfigHash` parameters. `startingOutputRoot` is used for initializing the contract from an empty state, because `op-succinct` requires a starting output root from which to prove the next state root. The other parameters are used for verifying the proofs posted to the contract.
- The addition of `historicBlockHashes` to store the L1 block hashes to which the `op-succinct` proofs are anchored. Whenever a proof is posted, the merkle proof verification uses these L1 block hashes to verify the state of the L2, which is posted as blobs or calldata to the L1.
- The new `checkpointBlockHash` function, which checkpoints the L1 block hash at a given L1 block number using the `blockhash` function (see the example after this list).
- The `proposeL2Output` function now takes an additional `_proof` parameter, which is the proof that is posted to the contract, and removes the unnecessary `_l1BlockHash` parameter (which is redundant given the `historicBlockHashes` mapping). This function also verifies the proof using the `ISP1VerifierGateway` contract.
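For illustration, checkpointing an L1 block hash from the command line might look like the following; the exact function signature is an assumption here, so check the deployed contract's ABI before using it:

# Hypothetical call: checkpoint the hash of a recent L1 block on the oracle so
# a later aggregation proof can be anchored to it. Signature is assumed.
cast send $L2OO_ADDRESS "checkpointBlockHash(uint256)" <l1_block_number> \
  --rpc-url $L1_RPC --private-key $PRIVATE_KEY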
Deployment
1) Clone the `op-succinct` repo:
git clone https://github.com/succinctlabs/op-succinct.git
cd op-succinct
2) Set environment variables:
In the root directory, create a file called `.env` (mirroring `.env.example`) and set the following environment variables:
Parameter | Description |
---|---|
L1_RPC | L1 Archive Node. |
L1_BEACON_RPC | L1 Consensus (Beacon) Node. |
L2_RPC | L2 Execution Node (`op-geth`). |
L2_NODE_RPC | L2 Rollup Node (`op-node`). |
PROVER_NETWORK_RPC | Default: `rpc.succinct.xyz`. |
SP1_PRIVATE_KEY | Key for the Succinct Prover Network. Get access here. |
SP1_PROVER | Default: `network`. Set to `network` to use the Succinct Prover Network. |
PRIVATE_KEY | Private key for the account that will be deploying the contract and posting output roots to L1. |
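For reference, a populated `.env` might look like this (all values are placeholders; substitute your own endpoints and keys):

L1_RPC=https://your-l1-archive-node.example.com
L1_BEACON_RPC=https://your-l1-beacon-node.example.com
L2_RPC=https://your-op-geth-node.example.com
L2_NODE_RPC=https://your-op-node.example.com
PROVER_NETWORK_RPC=rpc.succinct.xyz
SP1_PROVER=network
SP1_PRIVATE_KEY=<your-succinct-prover-network-key>
PRIVATE_KEY=<your-deployer-private-key>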
3) Navigate to the contracts directory:
cd contracts
4) Set Deployment Parameters
Inside the `contracts` folder there is a file called `opsuccinctl2ooconfig.json` that contains the parameters for the deployment. The parameters are automatically set based on your RPCs, and the owner of your contract is determined by the private key you set in the `.env` file.
Optional Advanced Parameters
Advanced users can set parameters manually in `opsuccinctl2ooconfig.json`, but the defaults are recommended. Skip this section if you want to use the defaults.
Parameter | Description |
---|---|
owner | Ethereum address of the contract owner. Default: the address of the account associated with `PRIVATE_KEY`. |
proposer | Ethereum address authorized to submit proofs. Default: the address of the account associated with `PRIVATE_KEY`. |
challenger | Ethereum address authorized to dispute proofs. Default: `address(0)`, meaning no one can dispute proofs. |
finalizationPeriod | The time period (in seconds) after which a proposed output becomes finalized and withdrawals can be processed. Default: `0`. |
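If you do override any of these, it can be useful to double-check what ended up in the config before deploying. Assuming you have `jq` installed, the following prints just the advanced fields (the file also contains auto-populated fields that should be left alone):

# Inspect the advanced deployment parameters in opsuccinctl2ooconfig.json.
jq '{owner, proposer, challenger, finalizationPeriod}' opsuccinctl2ooconfig.json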
5) Deploy the `OPSuccinctL2OutputOracle` contract:
Run the following command to deploy the `OPSuccinctL2OutputOracle` contract to the L1 chain:
forge script script/OPSuccinctDeployer.s.sol:OPSuccinctDeployer \
--rpc-url $L1_RPC \
--private-key $PRIVATE_KEY \
--ffi \
--verify \
--verifier etherscan \
--etherscan-api-key $ETHERSCAN_API_KEY \
--broadcast
If successful, you should see the following output:
Script ran successfully.
== Return ==
0: address 0x9b520F7d8031d45Eb8A1D9fE911038576931ab95
...
ONCHAIN EXECUTION COMPLETE & SUCCESSFUL.
##
Start verification for (2) contracts
Start verifying contract `0x9b520F7d8031d45Eb8A1D9fE911038576931ab95` deployed on sepolia
Submitting verification for [lib/optimism/packages/contracts-bedrock/src/universal/Proxy.sol:Proxy] 0x9b520F7d8031d45Eb8A1D9fE911038576931ab95.
In these deployment logs, `0x9b520F7d8031d45Eb8A1D9fE911038576931ab95` is the address of the Proxy contract for the `OPSuccinctL2OutputOracle`. This deployed Proxy contract will keep track of the state roots of the OP Stack chain.
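As an optional sanity check on the deployment, you can query the proxy directly. This assumes the standard `L2OutputOracle` getters are retained by `OPSuccinctL2OutputOracle`, and the address below is the one from the sample logs, so substitute your own:

# Query the proxy for the latest checkpointed L2 block number.
export L2OO_ADDRESS=0x9b520F7d8031d45Eb8A1D9fE911038576931ab95
cast call $L2OO_ADDRESS "latestBlockNumber()(uint256)" --rpc-url $L1_RPC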
Configure Environment
To use a custom environment file, set the `ENV_FILE` environment variable to the path of your `.env` file. By default, this is the `.env` in your root directory.
ENV_FILE=.env.new forge script script/OPSuccinctDeployer.s.sol:OPSuccinctDeployer ...
6) Add Proxy Address to `.env`
Add the address of the `OPSuccinctL2OutputOracle` proxy contract to the `.env` file in the root directory.
Parameter | Description |
---|---|
L2OO_ADDRESS | The address of the Proxy contract for the `OPSuccinctL2OutputOracle`. |
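For example, appending the address from the sample deployment above (substitute your own proxy address):

echo "L2OO_ADDRESS=0x9b520F7d8031d45Eb8A1D9fE911038576931ab95" >> .env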
Proposer
Now that you have deployed the `OPSuccinctL2OutputOracle` contract, you can start the `op-succinct` service, which replaces the normal `op-proposer` service in the OP Stack.
The `op-succinct` service consists of two containers:
- `op-succinct-server`: Receives proof requests from the `op-succinct-proposer`, generates the witness for the proof, and submits the proof to the Succinct Prover Network. Handles the communication with the Succinct Prover Network to fetch the proof status and completed proof data.
- `op-succinct-proposer`: Monitors L1 state to determine when to request a proof. Sends proof requests to the `op-succinct-server`. Once proofs have been generated for a sufficiently large range, aggregates range proofs into an aggregation proof. Submits the aggregation proof to the `OPSuccinctL2OutputOracle` contract, which stores the L2 state outputs.
We've packaged the `op-succinct` service in a Docker Compose file to make it easier to run.
Prerequisites
RPC Requirements
Confirm that your RPCs have all of the required endpoints. More details can be found in the prerequisites section.
Hardware Requirements
We recommend the following hardware configuration for the `op-succinct` service containers:

Using the Docker Compose file:
- `op-succinct`: 16 vCPUs, 16GB RAM

Running as separate containers:
- `op-succinct-server`: 16 vCPUs, 16GB RAM
- `op-succinct-proposer`: 1 vCPU, 4GB RAM

For advanced configurations, depending on the number of concurrent requests you expect, you may need to increase the number of vCPUs and memory allocated to the `op-succinct-server` container.
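To see whether the running containers actually need more resources, you can watch their live CPU and memory usage:

# Live resource usage for all running containers, including op-succinct.
docker stats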
Environment Setup
Before starting the proposer, the following environment variables should be in your `.env` file. You should have already set up your environment when you deployed the L2 Output Oracle. If you have not done so, follow the steps in the L2 Output Oracle section.
Parameter | Description |
---|---|
L1_RPC | L1 Archive Node. |
L1_BEACON_RPC | L1 Consensus (Beacon) Node. |
L2_RPC | L2 Execution Node (`op-geth`). |
L2_NODE_RPC | L2 Rollup Node (`op-node`). |
PROVER_NETWORK_RPC | Default: `rpc.succinct.xyz`. |
SP1_PRIVATE_KEY | Key for the Succinct Prover Network. Get access here. |
SP1_PROVER | Default: `network`. Set to `network` to use the Succinct Prover Network. |
PRIVATE_KEY | Private key for the account that will be deploying the contract and posting output roots to L1. |
L2OO_ADDRESS | Address of the `OPSuccinctL2OutputOracle` contract. |
Build the Proposer Service
Build the Docker images for the `op-succinct-proposer` service.
docker compose build
Run the Proposer
The following command launches the `op-succinct-proposer` service. It starts two containers: one that manages proof generation and another that is a small fork of the original `op-proposer` service.
After a few minutes, you should see the `op-succinct-proposer` service start to generate range proofs. Once enough range proofs have been generated, they are aggregated into a single proof, which is verified and submitted to the L1.
docker compose up
To see the logs of the `op-succinct-proposer` service, run:
docker compose logs -f
and to stop the `op-succinct-proposer` service, run:
docker compose stop
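If you would rather keep the service running in the background instead of attached to your terminal, start it in detached mode:

docker compose up -d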
Configuration
Overview
The last step is to update your OP Stack configuration to use the new `OPSuccinctL2OutputOracle` contract managed by the `op-succinct-proposer` service.
⚠️ Caution: When upgrading to the `OPSuccinctL2OutputOracle` contract, maintain the existing `finalizationPeriod` for a duration equal to at least one `finalizationPeriod`. Failure to do so will result in immediate finalization of all pending output roots upon upgrade, which is unsafe. Only after this waiting period has elapsed should you set the `finalizationPeriod` to 0.
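Before lowering the value to 0, you can check the period currently configured on the proxy. This assumes the upstream `finalizationPeriodSeconds()` getter is still exposed by `OPSuccinctL2OutputOracle`; verify against the contract ABI:

# Read the configured finalization period (in seconds) from the proxy.
cast call $L2OO_ADDRESS "finalizationPeriodSeconds()(uint256)" --rpc-url $L1_RPC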
Upgrading L2OutputOracle
You will need to use your `ADMIN` key to upgrade the existing `L2OutputOracle` to the new implementation. Recall that the `L2OutputOracle` is a proxy contract that is upgradeable using the `ADMIN` key. The process differs depending on whether the `ADMIN` key is an EOA or a contract (e.g. a multisig), as covered below.
EOA `ADMIN` key
To update the `L2OutputOracle` implementation with an EOA `ADMIN` key, run the following command in `/contracts`.
forge script script/OPSuccinctUpgrader.s.sol:OPSuccinctUpgrader \
--rpc-url $L1_RPC \
--private-key $PRIVATE_KEY \
--verify \
--verifier etherscan \
--etherscan-api-key $ETHERSCAN_API_KEY \
--broadcast \
--ffi
`ADMIN` key is not an EOA
If the owner of the `L2OutputOracle` is not an EOA (e.g. a multisig or a contract), set `EXECUTE_UPGRADE_CALL` to `false`. This will output the raw calldata for the upgrade call, which can then be executed by the owner.
EXECUTE_UPGRADE_CALL=false forge script script/OPSuccinctUpgrader.s.sol:OPSuccinctUpgrader \
--rpc-url $L1_RPC \
--private-key $PRIVATE_KEY \
--verify \
--verifier etherscan \
--etherscan-api-key $ETHERSCAN_API_KEY \
--broadcast \
--ffi
...
== Logs ==
Upgrade calldata:
0x3659cfe60000000000000000000000003af9a0224e5370f31c07e6739c76b32d75b2d4af
Update contract parameter calldata:
0x7ad016520000000000000000000000000000000000000000000000000000000000003b03002d397eaa6f2bd3a873f2b996a6d486eb20774092e68a75471e287084180c133237870c3fe7a735661b52f641bd41c85a886c916a962526533c8c9d17dc08310000000000000000000000003b6041173b80e77f038f3f2c0f9744f04837185e7ca9e1e9829e0e28c934debd1adab0592b4a906d48b01d750ec46c02d09ad833
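How this calldata gets executed depends on who owns the proxy. As a rough sketch, if the owner can sign transactions directly, the printed "Upgrade calldata" could be submitted with `cast`; the calldata variable and signing key below are placeholders for your own setup, and a multisig owner would instead queue the same calldata through its own workflow:

# Submit the raw upgrade calldata to the proxy as the proxy admin (sketch).
cast send $L2OO_ADDRESS $UPGRADE_CALLDATA --rpc-url $L1_RPC --private-key $ADMIN_PRIVATE_KEY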
Advanced
This section contains advanced topics for OP Succinct.
Verify the OP Succinct binaries
When deploying OP Succinct in production, it's important to ensure that the SP1 programs used when generating proofs are reproducible.
Introduction
Recall there are two programs used in OP Succinct:
- `range`: Proves the correctness of an OP Stack derivation + STF for a range of blocks.
- `aggregation`: Aggregates multiple range proofs into a single proof. This is the proof that lands on-chain. The aggregation proof ensures that all `range` proofs in a given block range are linked and use the `rangeVkeyCommitment` from the `L2OutputOracleProxy` as the verification key.
Prerequisites
To reproduce the OP Succinct program binaries, you first need to install the cargo prove toolchain.
Ensure that you have the latest version of the toolchain by running:
sp1up
Confirm that you have the toolchain installed by running:
cargo prove --version
Verify the SP1 binaries
To build the SP1 binaries, first ensure that Docker is running.
docker ps
Then build the binaries:
cd programs/range
# Build the range-elf
cargo prove build --elf range-elf --docker
cd ../aggregation
# Build the aggregation-elf
cargo prove build --elf aggregation-elf --docker
Now, verify the binaries by confirming that the output of `vkey` matches the vkeys on the contract. The `vkey` program outputs the verification keys based on the ELFs in `/elf`.
cargo run --bin vkey --release
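You can then compare the printed verification keys against what is configured on-chain. This assumes `aggregationVkey` and `rangeVkeyCommitment` are exposed as public getters on the proxy; check the contract ABI if these calls revert:

# Read the on-chain verification key configuration from the proxy.
cast call $L2OO_ADDRESS "aggregationVkey()(bytes32)" --rpc-url $L1_RPC
cast call $L2OO_ADDRESS "rangeVkeyCommitment()(bytes32)" --rpc-url $L1_RPC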
L2 Node Setup
This guide will show you how to set up an L2 execution node (`op-geth`) and a rollup node (`op-node`) for your OP Stack chain.
Instructions
- Clone ops-anton and follow the instructions in the README to set up your rollup.
- Go to op-node.sh and set the `L2_RPC` to your rollup RPC. Modify `l1` and `l1.beacon` to point to your L1 and L1 Beacon RPCs. Note: your L1 node should be an archive node.
- If you are starting a node for a different chain, you will need to modify `op-network` in `op-geth.sh` here and `network` in `op-node.sh` here.
- In `/L2/op-mainnet` (or the directory you chose):
  - Generate a JWT secret: `./generate_jwt.sh`
  - `docker network create anton-net` (creates a Docker network for the nodes to communicate on).
  - `just up` (starts all the services).
Your `op-geth` endpoint will be available at the RPC port chosen here, which in this case is `8547` (e.g. `http://localhost:8547`).
Your `op-node` endpoint (rollup node) will be available at the RPC port chosen here, which in this case is `5058` (e.g. `http://localhost:5058`).
Check Sync Status
After a few hours, your node should be fully synced and you can use it to begin generating ZKPs.
To check your node's sync status, you can run the following commands:
op-geth:
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"eth_syncing","params":[],"id":1}' http://localhost:8547
op-node:
curl -H "Content-Type: application/json" -X POST --data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' http://localhost:5058
Cost Estimator
The cost estimator is a convenient CLI tool to fetch the RISC-V instruction counts for generating ZKPs for a range of blocks for a given rollup.
The cost estimator requires fast network connectivity (500+ Mbps) because witness generation is bandwidth-intensive. In practice, it performs better on remote machines with high-bandwidth connections than on local machines.
Overview
In the root directory, add the following RPCs to your `.env` file for your rollup:
Parameter | Description |
---|---|
L1_RPC | L1 Archive Node. |
L1_BEACON_RPC | L1 Consensus (Beacon) Node. |
L2_RPC | L2 Execution Node (`op-geth`). |
L2_NODE_RPC | L2 Rollup Node (`op-node`). |
More details on the RPC requirements can be found in the prerequisites section.
Running the Cost Estimator
You can run the cost estimator with unfinalized blocks as long as they're included in a batch posted to L1.
To run the cost estimator over a block range, run the following command:
RUST_LOG=info just cost-estimator <start_l2_block> <end_l2_block>
Overview
This command splits the block range into smaller ranges to model the workload run by `op-succinct`. It then fetches the data required to generate the ZKP for each of these ranges and executes the SP1 `range` program over them.
Once execution of all ranges is complete, the cost estimator collects the per-range statistics, outputs the aggregate statistics, and writes them to a CSV file at `execution-reports/{chain_id}/{start_block}-{end_block}.csv`.
The execution of the cost estimator can be quite slow, especially for large block ranges. We recommend first running the cost estimator over a small block range to get a sense of how long it takes.
Useful Commands
- `cast block finalized -f number --rpc-url <L2_RPC>`: Get the latest finalized block number on the L2.
- `cast bn --rpc-url <L2_RPC>`: Get the latest block number on the L2.
Advanced Usage
There are a few optional flags that can be used with the cost estimator:
Flag | Description |
---|---|
batch-size | The number of blocks to execute in a single batch. For chains with higher throughput, you may want to decrease this value to avoid SP1 programs running out of memory. By default, the cost estimator will use a batch size of 300. For higher throughput chains, we've set the following defaults: Base (5), OP Mainnet (10), OP Sepolia (30). |
env-file | The path to the environment file to use (e.g. `.env.opmainnet`). |
use-cache | Use cached witness generation. Use this if you're running the cost estimator multiple times for the same block range and want to avoid re-fetching the witness. |
To run the cost estimator with a custom batch size, environment file, and using cached witness generation:
RUST_LOG=info cargo run --bin cost-estimator --release <start_l2_block> <end_l2_block> --env-file <path_to_env_file> --batch-size <batch_size> --use-cache
Sample Output
stdout
Executing blocks 5,484,100 to 5,484,200 on World Chain Mainnet:
Aggregate Execution Stats for Chain 480:
+--------------------------------+---------------------------+
| Metric | Value |
+--------------------------------+---------------------------+
| Batch Start | 5,484,100 |
| Batch End | 5,484,200 |
| Witness Generation (seconds) | 66 |
| Execution Duration (seconds) | 458 |
| Total Instruction Count | 19,707,995,043 |
| Oracle Verify Cycles | 1,566,560,795 |
| Derivation Cycles | 2,427,683,234 |
| Block Execution Cycles | 15,442,479,993 |
| Blob Verification Cycles | 674,091,948 |
| Total SP1 Gas | 22,520,841,820 |
| Number of Blocks | 101 |
| Number of Transactions | 1,977 |
| Ethereum Gas Used | 546,370,916 |
| Cycles per Block | 195,128,663 |
| Cycles per Transaction | 9,968,636 |
| Transactions per Block | 19 |
| Gas Used per Block | 5,409,613 |
| Gas Used per Transaction | 276,363 |
| BN Pair Cycles | 7,874,860,533 |
| BN Add Cycles | 310,550,754 |
| BN Mul Cycles | 1,636,223,094 |
| KZG Eval Cycles | 0 |
| EC Recover Cycles | 96,983,901 |
+--------------------------------+---------------------------+
csv
The CSV associated with the range will have the columns from the `ExecutionStats` struct. The aggregate data for executing each "batch" within the block range will be included in the CSV.
Here, the CSV is `execution-reports/480/5484100-5484200.csv`:
batch_start,batch_end,witness_generation_time_sec,total_execution_time_sec,total_instruction_count,oracle_verify_instruction_count,derivation_instruction_count,block_execution_instruction_count,blob_verification_instruction_count,total_sp1_gas,nb_blocks,nb_transactions,eth_gas_used,l1_fees,total_tx_fees,cycles_per_block,cycles_per_transaction,transactions_per_block,gas_used_per_block,gas_used_per_transaction,bn_pair_cycles,bn_add_cycles,bn_mul_cycles,kzg_eval_cycles,ec_recover_cycles
5484184,5484200,0,0,2877585481,299740342,462008456,2066943417,134844448,3304337522,17,316,81926057,540908658541982,596950839845253,169269734,9106283,18,4819179,259259,1017572318,40106182,211873811,0,11948870
5484100,5484120,0,0,3754402331,308914395,461086557,2932162167,134779302,4287957207,21,350,106933244,710095197624994,783053876268826,178781063,10726863,16,5092059,305523,1561122615,61455811,324588766,0,14705709
5484163,5484183,0,0,4005997705,311171459,435265918,3206173584,134844448,4570091686,21,365,110883055,690170871678571,779801718014140,190761795,10975336,17,5280145,303789,1676711949,66155955,346844998,0,17337504
5484121,5484141,0,0,4152572226,316652487,486166806,3293854188,134779302,4746305028,21,440,117222955,767310117504021,846733606470274,197741534,9437664,20,5582045,266415,1584230563,62520537,329178548,0,25124445
5484142,5484162,0,0,4917437300,330082112,583155497,3943346637,134844448,5612150377,21,506,129405605,935016666707488,1031433531465147,234163680,9718255,24,6162171,255742,2035223088,80312269,423736971,0,27867373
Block Data
The `block-data` script is a convenient CLI tool to fetch the block and fee data for a given range of blocks on a rollup.
It performs better when the L2 RPC endpoint supports a high request rate (RPS).
Overview
To perform analysis on the fees collected on L2, you can use the `block-data` script. This script will fetch the block and fee data for each block in the range from the L2 and output a CSV file with the columns: `block_number`, `transaction_count`, `gas_used`, `total_l1_fees`, `total_tx_fees`.
Compared to the cost estimator, the block data script is much faster and requires fewer resources, so it's recommended if you only need the block data and want to compute quantities like average transactions per block, average gas per block, etc.
Once the script has finished execution, it will write the statistics for each block in the range to a CSV file at `block-data/{chain_id}/{start_block}-{end_block}.csv`.
Run the Block Data Script
To run the block data script, use the following command:
RUST_LOG=info cargo run --bin block-data --release -- --start <start_l2_block> --end <end_l2_block>
Optional flags
Flag | Description |
---|---|
--env-file | The path to the environment file to use (e.g. `.env.opmainnet`). |
Useful Commands
- `cast block finalized -f number --rpc-url <L2_RPC>`: Get the latest finalized block number on the L2.
- `cast bn --rpc-url <L2_RPC>`: Get the latest block number on the L2.
Sample Output
stdout
Fetching block data for blocks 5,484,100 to 5,484,200 on World Chain Mainnet:
Wrote block data to block-data/480/5484100-5484200-block-data.csv
Aggregate Block Data for blocks 5484100 to 5484200:
Total Blocks: 101
Total Transactions: 1977
Total Gas Used: 546370916
Total L1 Fees: 0.003644 ETH
Total TX Fees: 0.004038 ETH
Avg Txns/Block: 19.57
Avg Gas/Block: 5409613.03
Avg L1 Fees/Block: 0.000036 ETH
Avg TX Fees/Block: 0.000040 ETH
csv
block_number,transaction_count,gas_used,total_l1_fees,total_tx_fees
5710000,13,5920560,8004099318905,9272809586600
5710001,10,3000975,4810655220218,5353583855906
5710002,16,7269303,10842878782866,12174517937838
5710003,10,4521953,6142429255728,6967734553050
5710004,10,5558505,6534408691877,7550749486247
5710005,14,6670097,10210757953683,11431964042061
5710006,7,4725805,5863003171921,6725878188991
5710007,10,4798495,7011790976814,8252141668373
5710008,7,3805639,4428577556414,5121866899850
5710009,9,4348521,6732491769192,7697076627632
5710010,17,8728999,12317431205317,14022097163617
5710011,9,5882229,6229718004537,7411261930653
5710012,8,2053460,3062348125908,3938363190799
5710013,11,4829936,7756633163332,8721805365111
5710014,5,798837,1684650453190,2292476216082
5710015,10,4123290,7122749550608,7874581655693
5710016,8,1416529,3110958598569,3414612717107
...
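As a quick sanity check, you can recompute the aggregate averages reported above directly from the generated CSV with `awk` (the path below is the hypothetical output location for this example range):

# Columns: 2 = transaction_count, 3 = gas_used (see the CSV header above).
awk -F',' 'NR > 1 { txs += $2; gas += $3; blocks++ }
           END { printf "Avg Txns/Block: %.2f\nAvg Gas/Block: %.2f\n", txs/blocks, gas/blocks }' \
  block-data/480/5484100-5484200-block-data.csv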
FAQ
How is data availability proven?
The `range` program proves the correctness of an OP Stack derivation + STF for a range of blocks. The `BlobProvider` verifies that the raw data (compressed L2 transaction calldata) matches the blob hash, and the `ChainProvider` verifies that the blob hashes belong to a certain L1 block hash. At this point, we've verified that the compressed L2 transaction calldata is available against a specific L1 block.
Because the `range` program can include an arbitrary number of blocks with blobs, we supply an `l1BlockHash` to the verifier. Within the `range` program, we verify that the blocks from which the blobs are extracted chain up to the `l1BlockHash`. This `l1BlockHash` is made accessible when verifying a proof via the `checkpointBlockHash` function.