What if your storage model matched your hardware?

The storage engine touches 4,096 bytes for a 32-byte read.
MIP-8 makes the EVM account for that page-sized reality.

The information on this page should not be quoted. Please refer to MIP-8 for the authoritative spec.


128 slots × 32 bytes = 4,096 bytes = 1 page

127 sibling slots stay unused in this example

Same struct, different cost model

Solidity lays out struct fields at contiguous slots, but trie/backend hashing can scatter them across different physical locations. In this worst-case illustration, each field lands on a separate backend page. MIP-8 groups contiguous slots into one page.

Click each field to load it and compare the gas costs side by side. A cold read costs 8,100 gas and a warm read 100 gas on Monad.

struct Token { owner, balance, timestamp, approved }

Monad (current): the four fields scatter across four backend pages (0x7a2f…, 0x3e81…, 0xb4c5…, 0x91d7…), so every field is a cold read.

MIP-8: all four fields sit in page 0 (contiguous), so only the first access pays the cold cost.

Compare the cost

Select a scenario to see the gas breakdown under Monad's current model versus MIP-8's page-aware model.

The read examples use Monad's gas constants (8,100 cold / 100 warm) and assume the accessed run fits in one page. The write example is qualitative because MIP-8 defines abstract page-write and state-growth parameters instead of fixed numbers.

Loading 4 contiguous struct fields that fit in one page:

Monad (current): 32,400 gas = 4 × 8,100 (distinct cold slots on Monad)

MIP-8: 8,400 gas = 1 × 8,100 (first page touch) + 3 × 100 (warm reads in the same page)

Gas savings: 74% cheaper
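These numbers are easy to reproduce. A minimal sketch, using Monad's 8,100/100 gas constants and assuming the whole run fits in one page (the helper names are made up for illustration):

```python
COLD, WARM = 8_100, 100  # Monad's cold/warm storage-read gas costs

def current_cost(n_fields: int) -> int:
    # Today: each field is a distinct cold slot.
    return n_fields * COLD

def mip8_cost(n_fields: int) -> int:
    # MIP-8: first touch of the page is cold, the rest are warm.
    return COLD + (n_fields - 1) * WARM

print(current_cost(4))  # 32400
print(mip8_cost(4))     # 8400
print(round(100 * (1 - mip8_cost(4) / current_cost(4))))  # 74 (% saved)
```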


Watch storage accesses in real time

Step through real contract code line by line. Each SLOAD/SSTORE lights up the corresponding storage slot and shows whether it's a cold or warm access under MIP-8.

8 unique storage slots (5-12) accessed during swap(), all in one page

Uniswap V2 swap()


// UniswapV2Pair.sol
function swap(uint amount0Out, uint amount1Out, address to, bytes calldata data) external lock {
    // lock modifier
    require(unlocked == 1);  unlocked = 0;

    (uint112 _reserve0, uint112 _reserve1,) = getReserves();
    require(amount0Out > 0 || amount1Out > 0);
    require(amount0Out < _reserve0 && amount1Out < _reserve1);

    uint balance0 = IERC20(token0).balanceOf(address(this));
    uint balance1 = IERC20(token1).balanceOf(address(this));

    // _mintFee (called internally)
    address feeTo = IUniswapV2Factory(factory).feeTo();
    uint _kLast = kLast;

    // _update
    price0CumulativeLast += ...;
    price1CumulativeLast += ...;

    // end lock modifier
    unlocked = 1;
}

Design for pages, get 10X+

MIP-8 doesn't just help existing contracts. It opens a new design space where page-aware storage yields order-of-magnitude improvements.

Consider an ERC-1155 multi-token contract. The standard implementation hashes each token balance to a random storage location. A page-aware design stores balances contiguously, so batch operations read one page instead of N scattered slots.

Standard ERC-1155

// balances scattered by keccak256
mapping(uint256 => mapping(address => uint256)) balances;

Each token ID hashes to a different page (0x7a.., 0x3e.., 0xb4.., 0x91..).

20 tokens = 20 cold reads = 162,000 gas

Page-aware design

// balances packed contiguously
uint256[128] balances;
// token IDs 0-127 map to one page

All token balances live in one page (page 0).

20 tokens = 1 cold + 19 warm = 10,000 gas

Batch size: the number of token balances read in one operation. The example uses 20 tokens; across the 2-64 range, 12 tokens is the threshold where the page-aware layout becomes 10X cheaper.

Standard layout: 162,000 gas = 20 × 8,100 (all cold)

Page-aware + MIP-8: 10,000 gas = 8,100 + 19 × 100

Improvement: 16.2x cheaper (94% gas saved)


This example shows a page-aware ERC-1155 that stores token balances in a contiguous array instead of a double mapping. With MIP-8, batch reads from the same page scale at 100 gas per additional slot instead of 8,100.
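The 10X threshold falls out of the same arithmetic. A quick check, again with Monad's 8,100/100 constants and hypothetical helper names:

```python
COLD, WARM = 8_100, 100  # Monad cold/warm read gas

def standard(n: int) -> int:
    # Double-mapping layout: every hashed slot is a cold read.
    return n * COLD

def page_aware(n: int) -> int:
    # Contiguous array in one page: one cold page touch, the rest warm.
    return COLD + (n - 1) * WARM

# Smallest batch size (in the 2-64 range) where page-aware is >= 10x cheaper.
threshold = next(n for n in range(2, 65) if standard(n) >= 10 * page_aware(n))
print(standard(20), page_aware(20), threshold)  # 162000 10000 12
```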

Slot → Page mapping

Every slot maps deterministically to a page. The math is simple: shift right by 7 bits to get the page, mask the low 7 bits to get the offset within it.

page_index(slot) = slot >> 7

offset(slot) = slot & 0x7F

For example, slot 0 maps to page 0, offset 0. Slots 0-127 fill Page 0, 128-255 Page 1, 256-383 Page 2, and 384-511 Page 3.
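In code, the mapping is a direct transcription of the two formulas above (not spec-normative):

```python
PAGE_BITS = 7  # 2**7 = 128 slots per page

def page_index(slot: int) -> int:
    return slot >> PAGE_BITS  # drop the low 7 bits

def offset(slot: int) -> int:
    return slot & 0x7F        # keep the low 7 bits

print(page_index(0), offset(0))      # 0 0
print(page_index(200), offset(200))  # 1 72
```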

Try your own contract

Paste a GitHub repo URL or Solidity source to see how your contract's storage layout maps to pages.

Works best with small-to-medium repos. Large repos with many dependencies (e.g. Aave, Chainlink) may time out.


What this means for you

Structs get cheaper

Solidity stores struct members and array elements contiguously. Under MIP-8, a contiguous run that fits in one page is typically 1 cold page touch plus N - 1 warm slot accesses instead of N cold slot accesses.

Mappings change less

Mappings still derive storage locations from keccak256, so unrelated keys almost always land on different pages. MIP-8 rarely helps or hurts truly random access; it mostly rewards contiguous layouts.

New optimization patterns

Page-aware arrays, careful struct packing, and low-level layouts that keep related data inside the same 128-slot page become a new optimization space under the page-aware gas model.

Execution stays compatible

At the opcode level, execution semantics stay the same: SLOAD still returns 32 bytes and SSTORE still writes 32 bytes. What changes is the storage commitment/proof layer and the gas model, which become page-aware. The effective key space narrows from 2²⁵⁶ hashed slots to 2²⁴⁹ page indices.

Contracts that read consecutive storage slots often get cheaper because Solidity stores struct members, fixed arrays, and runs of dynamic-array elements contiguously once their base location is known. Mappings still use hashed locations, so mapping-heavy access patterns tend to change less. The main contracts at risk are those that hardcode opcode-gas assumptions for consecutive storage accesses.

Each 4,096-byte page is committed via a fixed binary tree built from the BLAKE3 compression function. 128 slots pair into 64 leaves, which hash through 6 levels into a single 32-byte root. An inclusion proof for any slot is about 257 bytes (1-byte index + target word + sibling word + 6 parent hashes), plus the MPT proof for the page commitment.
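That tree shape can be sketched in a few lines. BLAKE3's compression function is not in Python's standard library, so this sketch substitutes BLAKE2b-256; the structure is what it illustrates (64 leaves, 6 levels, 257-byte proofs), and all helper names are made up:

```python
# Sketch of a per-page binary commitment tree. BLAKE2b-256 stands in
# for the BLAKE3 compression function used by MIP-8.
from hashlib import blake2b

def h(data: bytes) -> bytes:
    return blake2b(data, digest_size=32).digest()

def page_root(slots: list[bytes]) -> bytes:
    # 128 slot words pair into 64 leaves...
    level = [h(slots[2*i] + slots[2*i + 1]) for i in range(64)]
    # ...which hash through 6 levels into one 32-byte root.
    while len(level) > 1:
        level = [h(level[2*i] + level[2*i + 1]) for i in range(len(level) // 2)]
    return level[0]

def prove(slots: list[bytes], idx: int) -> bytes:
    # 1-byte index + target word + sibling word + 6 parent-level hashes = 257 bytes.
    proof = bytes([idx]) + slots[idx] + slots[idx ^ 1]
    level = [h(slots[2*i] + slots[2*i + 1]) for i in range(64)]
    i = idx // 2
    while len(level) > 1:
        proof += level[i ^ 1]
        level = [h(level[2*i] + level[2*i + 1]) for i in range(len(level) // 2)]
        i //= 2
    return proof

def verify(root: bytes, proof: bytes) -> bool:
    idx, word, sib = proof[0], proof[1:33], proof[33:65]
    node = h(word + sib) if idx % 2 == 0 else h(sib + word)
    i, off = idx // 2, 65
    for _ in range(6):
        s = proof[off:off + 32]; off += 32
        node = h(node + s) if i % 2 == 0 else h(s + node)
        i //= 2
    return node == root
```

A proof here is 1 + 32 + 32 + 6 × 32 = 257 bytes, matching the figure in the text; the MPT proof for the page commitment itself would come on top.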

Continue the discussion on Monad Forum

Questions, feedback, or a better idea? Weigh in on the forum thread.

Open forum thread