Permissioned components
These clients and services work together to enable block production on the L2 network. Sequencer nodes (op-geth + op-node) gather proposed transactions from users. The batcher submits batch data to L1, which determines the safe blocks and ultimately controls the canonical chain. The proposer submits output roots to L1, which control L2-to-L1 messaging.
op-geth
op-geth implements the layer 2 execution layer, with minimal changes for a secure, Ethereum-equivalent application environment.
op-node
op-node implements most rollup-specific functionality as the layer 2 consensus layer, similar to a layer 1 beacon node. The op-node is stateless and gets its view of the world from op-geth.
op-batcher
op-batcher is the service that submits the L2 Sequencer data to L1 to make it available for verifiers. To reduce the cost of writing to L1, it only posts the minimal amount of data required to reproduce L2 blocks.
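As an illustration of what this service needs at runtime, the docker-compose-style sketch below wires op-batcher to an L1 RPC and to the sequencer's op-geth and op-node. The image path, hostnames, and key handling are assumptions, and flag names should be checked against the op-batcher release you run.

```yaml
# Illustrative sketch only; not a complete production configuration.
services:
  op-batcher:
    image: us-docker.pkg.dev/oplabs-tools-artifacts/images/op-batcher:latest  # assumed image path
    entrypoint: ["op-batcher"]
    command:
      - --l1-eth-rpc=http://l1-node:8545         # L1 execution RPC (placeholder hostname)
      - --l2-eth-rpc=http://sequencer-geth:8545  # sequencer op-geth RPC
      - --rollup-rpc=http://sequencer-node:8547  # sequencer op-node RPC
      - --max-channel-duration=25                # force batch submission at least every N L1 blocks
      - --private-key=${BATCHER_PRIVATE_KEY}     # batcher transaction signer (prefer a remote signer in production)
```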
op-deployer
op-deployer is a tool that simplifies deploying standardized OP Stack chains that comply with Superchain specifications. It compares the state of your chain against the declared intent and makes whatever changes are necessary for them to match. In its current state, it is intended to deploy new standard chains that use the Superchain-wide contracts.
op-proposer
op-proposer is the service that submits output roots to L1, enabling L2-to-L1 messaging.
Sequencer
The Sequencer node works with the batcher and proposer to create new blocks, so it should handle the state-changing RPC request eth_sendRawTransaction. It can be peered with replica nodes to gossip new unsafe blocks to the rest of the network.
The Sequencer op-node should have p2p discovery disabled and be statically peered only with other internal nodes (or use a peer-management service to define the peering network).
A production setup should include:
- At least one archive sequencer for recovering from deep L1 reorgs
- Secure P2P configuration with static peering
- Integration with op-conductor for high availability
- P2P discovery disabled and a static peer list of internal nodes (see the sketch below)
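As a concrete sketch of the static-peering setup, the compose-style fragment below runs a sequencer op-node with discovery disabled and a fixed set of internal peers. The image path, hostnames, JWT path, and the multiaddr are placeholders; verify flag names against your op-node version.

```yaml
# Illustrative sketch only; not a complete production configuration.
services:
  sequencer-node:
    image: us-docker.pkg.dev/oplabs-tools-artifacts/images/op-node:latest  # assumed image path
    entrypoint: ["op-node"]
    command:
      - --l1=http://l1-node:8545                 # L1 RPC (placeholder hostname)
      - --l2=http://sequencer-geth:8551          # engine API of the sequencer op-geth
      - --l2.jwt-secret=/config/jwt.txt          # shared secret with op-geth
      - --sequencer.enabled=true                 # this op-node drives block production
      - --p2p.no-discovery=true                  # keep the sequencer off public peer discovery
      - --p2p.static=/dns4/replica-node-1/tcp/9222/p2p/<replica-peer-id>  # placeholder multiaddr(s), comma-separated
      - --p2p.listen.ip=0.0.0.0
      - --p2p.listen.tcp=9222
    volumes:
      - ./jwt.txt:/config/jwt.txt:ro
```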
Sequencer isolation
To maintain network security, the Sequencer node is isolated from the internet and peers exclusively with internal replica nodes. It maintains a shared mempool with the replicas for efficient transaction handling.
Replica node
The replica nodes are additional nodes on the network. They can be peered to the Sequencer, which might not be connected to the rest of the internet, and to other replicas. Additional replicas can help horizontally scale RPC requests.
To run a rollup, you need a minimum of one archive node. The proposer requires it because the data it needs can be older than the data available to a full node. Note that since the proposer doesn't care which archive node it points to, you can technically point it at an archive node that isn't the sequencer.
Ingress traffic
It is important to set up a robust chain architecture to handle large volumes of RPC requests from your users. The Sequencer node has the important job of working with the batcher to handle block creation. To allow the Sequencer to focus on that job, you can peer replica nodes to handle the rest of the work, for example by configuring proxyd to route RPC methods, retry failed requests, load balance, and so on.
Configuration recommendations:
Bootnodes
Bootnodes facilitate peer discovery for network nodes. They help initialize peer connections for both public and private nodes in the network.
Archive RPC Nodes
- We recommend setting up some archive nodes for internal RPC usage, primarily used by the challenger, proposer, and security monitoring tools like monitorism. An example configuration is sketched below.
- You can also use these nodes for taking disk snapshots for disaster recovery.
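For reference, an archive replica mainly differs from a full node in its sync and garbage-collection modes. A minimal, illustrative compose-style sketch follows; the image path, ports, and API list are assumptions to verify against your op-geth version.

```yaml
# Illustrative sketch only: an internal archive op-geth for proposer/challenger RPC and snapshots.
services:
  archive-geth:
    image: us-docker.pkg.dev/oplabs-tools-artifacts/images/op-geth:latest  # assumed image path
    entrypoint: ["geth"]
    command:
      - --datadir=/data
      - --syncmode=full          # archive nodes cannot snap sync
      - --gcmode=archive         # retain all historical state
      - --http
      - --http.addr=0.0.0.0
      - --http.api=eth,net,web3,debug  # expose only what internal tooling needs
    volumes:
      - archive-data:/data       # snapshot this volume for disaster recovery
volumes:
  archive-data:
```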
Full Snapsync Nodes
- These nodes provide peers for snap sync, using the snapsync bootnodes for peer discovery.
Snapsync Bootnodes
- These bootnodes facilitate peer discovery for public nodes using snapsync.
- The bootnode can be either a geth instance or the geth bootnode tool.
- If you want to use geth for snapsync bootnodes, you may want to just make the Full Snapsync Nodes serve as your bootnodes as well.
P2P Bootnodes
- These are the op-node p2p network bootnodes. We recommend using the geth bootnode tool with discovery v5 enabled, as sketched below.
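A minimal sketch of running the geth bootnode tool is shown below. Generate the node key once with `bootnode -genkey boot.key`. The image tag is an assumption, and the flag for enabling discovery v5 differs between geth releases, so check `bootnode -h` for your build.

```yaml
# Illustrative sketch only: a standalone discovery bootnode.
services:
  p2p-bootnode:
    image: ethereum/client-go:alltools-stable   # "alltools" images include the bootnode binary (assumed tag)
    entrypoint: ["bootnode"]
    command:
      - -nodekey=/etc/bootnode/boot.key   # pre-generated with: bootnode -genkey boot.key
      - -addr=:30301                      # UDP discovery port
    ports:
      - "30301:30301/udp"
    volumes:
      - ./boot.key:/etc/bootnode/boot.key:ro
```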
Offchain components
In addition to the core protocol components, the following offchain services are essential for running a production-grade rollup network.
op-conductor
The op-conductor RPC can act as a leader-aware RPC proxy for op-batcher, proxying the necessary op-geth / op-node RPC methods when the node is the leader.
op-challenger
The op-challenger verifies the correctness of the L2 state by challenging invalid state transitions. This ensures the network's security and validity.
Ethereum L1 nodes
These nodes are required to interface with the Ethereum mainnet, handling data submission and validation.
Peering guidelines
Consensus layer peering
Ensure proper peer configurations for consensus layer clients. Use static peer lists for critical components and restrict P2P traffic to internal IP ranges.
Execution layer peering
Execution layer clients, like op-geth, should be configured for efficient peer-to-peer communication. Use a configuration like the following for static peers and network restrictions:
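The compose-style sketch below disables discovery, restricts peer traffic to an internal CIDR range, and supplies static peers through a geth-style TOML config file. The image path, hostnames, CIDR, and the enode entry are placeholders; verify flag and config-key names against your op-geth version.

```yaml
# Illustrative sketch only: an internal op-geth replica with restricted, static peering.
services:
  replica-geth:
    image: us-docker.pkg.dev/oplabs-tools-artifacts/images/op-geth:latest  # assumed image path
    entrypoint: ["geth"]
    command:
      - --config=/config/geth.toml   # TOML carrying [Node.P2P] StaticNodes = ["enode://<internal-node>"]
      - --nodiscover                 # no public peer discovery
      - --netrestrict=10.0.0.0/8     # only communicate with peers in the internal range
      - --maxpeers=30
    volumes:
      - ./geth.toml:/config/geth.toml:ro
```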
proxyd
This tool is an RPC request router and proxy. It does the following things:
- Whitelists RPC methods.
- Routes RPC methods to groups of backend services.
- Automatically retries failed backend requests.
- Tracks backend consensus (latest, safe, finalized blocks), peer count, and sync state.
- Rewrites requests and responses to enforce consensus.
- Load balances requests across backend services.
- Caches immutable responses from backends.
- Provides metrics to measure request latency, error rates, and the like.
Next steps
- Find out how you can support snap sync on your chain.
- Find out how you can utilize blob space to reduce the transaction fee cost on your chain.