Background

Proto-danksharding (EIP-4844) is coming to Ethereum Mainnet with the Dencun upgrade. This feature primarily benefits rollups.

"Once EIP-4844 is deployed to mainnet [...] we expect the cost of rollup L1 transactions to be reduced by at least 20x. We expect all rollups to take advantage of blobspace to reduce transaction costs for their users." - OP Labs

Salient notes from the FAQ:

The main feature introduced by proto-danksharding is a new transaction type, which we call a blob-carrying transaction. A blob-carrying transaction is like a regular transaction, except it also carries an extra piece of data called a blob. Blobs are extremely large (~125 kB), and can be much cheaper than similar amounts of calldata. However, blob data is not accessible to EVM execution; the EVM can only view a commitment to the blob.
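To make that concrete, here is a minimal sketch (not a full implementation) of what a blob-carrying (type-3) transaction adds on top of an ordinary EIP-1559 transaction, with field names following the EIP-4844 spec. The `kzg_to_versioned_hash` rule is how the spec derives the 32-byte value the EVM sees in place of the blob itself:

```python
from dataclasses import dataclass
from hashlib import sha256

VERSIONED_HASH_VERSION_KZG = 0x01  # version byte defined by EIP-4844


def kzg_to_versioned_hash(kzg_commitment: bytes) -> bytes:
    # The EVM never sees the blob; it only sees this 32-byte versioned hash
    # of the blob's KZG commitment (exposed via the BLOBHASH opcode).
    return bytes([VERSIONED_HASH_VERSION_KZG]) + sha256(kzg_commitment).digest()[1:]


@dataclass
class BlobTransaction:
    # Ordinary EIP-1559 fields (abridged)
    chain_id: int
    nonce: int
    max_priority_fee_per_gas: int
    max_fee_per_gas: int
    gas_limit: int
    to: bytes
    value: int
    data: bytes
    # Fields added by EIP-4844
    max_fee_per_blob_gas: int           # blobs have their own fee market
    blob_versioned_hashes: list[bytes]  # one versioned hash per attached blob
```

The blobs themselves travel alongside the transaction on the consensus layer (as sidecars); only the versioned hashes are part of the execution payload.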

[This will] lead to a long-run maximum usage of ~1 MB per slot (12s). This works out to about 2.5 TB per year, a far higher growth rate than Ethereum requires today.
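A quick back-of-the-envelope check of that figure, taking the FAQ's ~1 MB-per-slot maximum at face value:

```python
# Rough check of the FAQ's figures: ~1 MB of blob data every 12-second slot.
SLOT_SECONDS = 12
SECONDS_PER_YEAR = 365 * 24 * 3600
slots_per_year = SECONDS_PER_YEAR // SLOT_SECONDS   # 2,628,000 slots
max_blob_bytes_per_slot = 1_000_000                  # ~1 MB long-run maximum

tb_per_year = slots_per_year * max_blob_bytes_per_slot / 1e12
print(f"~{tb_per_year:.1f} TB of blob data per year at maximum usage")
# ~2.6 TB, consistent with the ~2.5 TB figure quoted above
```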

The consensus layer can implement separate logic to auto-delete the blob data after some time (e.g. 30 days).

In general, long-term historical storage is easy. While 2.5 TB per year is too much to demand of regular nodes, it’s very manageable for dedicated users: you can buy very big hard drives for about $20 per terabyte, well within reach of a hobbyist. Unlike consensus, which has an N/2-of-N trust model, historical storage has a 1-of-N trust model: you only need one of the storers of the data to be honest. Hence, each piece of historical data only needs to be stored hundreds of times, and not by the full set of many thousands of nodes that are doing real-time consensus verification.
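Extending that arithmetic to the hardware cost a single archive operator would face (using the FAQ's $20/TB figure; the replication count below is purely illustrative of "hundreds of copies"):

```python
TB_PER_YEAR = 2.5   # blob growth at maximum usage (from the FAQ)
USD_PER_TB = 20     # commodity hard-drive cost cited in the FAQ
REPLICAS = 100      # illustrative number of independent archives

cost_per_archive_per_year = TB_PER_YEAR * USD_PER_TB            # $50/year per archive
network_cost_per_year = cost_per_archive_per_year * REPLICAS    # $5,000/year across 100 copies
print(cost_per_archive_per_year, network_cost_per_year)
```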

Fetching blobs: https://ethereum.github.io/beacon-APIs/#/Beacon/getBlobSidecars
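As a starting point, a minimal sketch of pulling blob sidecars from a beacon node via the getBlobSidecars endpoint linked above. The node URL is a placeholder, `requests` is assumed to be available, and the exact query encoding of `indices` may vary by client:

```python
import requests

BEACON_NODE = "http://localhost:5052"  # placeholder: any beacon node exposing the standard API


def get_blob_sidecars(block_id: str, indices: list[int] | None = None) -> list[dict]:
    """Fetch blob sidecars for a block ('head', a slot number, or a block root)."""
    params = {"indices": ",".join(map(str, indices))} if indices else None
    resp = requests.get(
        f"{BEACON_NODE}/eth/v1/beacon/blob_sidecars/{block_id}",
        params=params,
        timeout=30,
    )
    resp.raise_for_status()
    # Each entry carries the blob plus its KZG commitment and proof.
    return resp.json()["data"]


# Example: all blob sidecars attached to the current head block.
# sidecars = get_blob_sidecars("head")
```

Note that a beacon node only serves sidecars within its retention window, which is exactly why long-term storage outside the consensus layer is needed.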

Problem

The Graph ecosystem is well placed to support the long-term storage and availability of these blobs.

Three exam questions:

  1. How can we ensure that The Graph ecosystem stores all historical blobs from day zero?
  2. What is the right way to provide access to historical blobs?
  3. How should we bring this functionality to The Graph Network?

Proposed solutions

Open questions