BTX: Batched Threshold Encryption

A committee of servers decrypts any chosen subset of ciphertexts while the rest stay private. It's the key primitive behind encrypted mempools that stop MEV.

Shortest ciphertext. Collision-free. Epochless.
Fast enough for tight latency budgets.


What MEV looks like

Every pending transaction on a public chain sits in the mempool, in plain view. Before a block is built, anyone can read what's coming. Bots read. Bots react.

That's how MEV happens. Three steps show the problem and the fix.

01 · The mempool is public

pending transactions

swap 100 USDC → ETH
approve DEX
transfer 2.4 ETH
mint NFT
swap 50 ETH → USDC

Pending transactions are broadcast before they execute. Anyone watching the network sees them, including searchers running front-running bots.

02 · Bots sandwich users

pool price: 100.0
bot buys → pushes the price up
user swaps → executes at the worse price
bot sells → profit

A bot spots a pending swap, buys ahead to push the price up, lets the user's swap execute at the worse price, then sells behind it. That's a sandwich.
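The mechanics can be sketched with a toy constant-product AMM. All pool sizes, trade sizes, and names below are illustrative, not from the paper: the point is only that the front-run moves the price, the user eats the difference, and the back-run pockets it.

```python
# Toy constant-product AMM (eth * usdc = k) showing the sandwich above.
class Pool:
    def __init__(self, eth, usdc):
        self.eth, self.usdc = eth, usdc

    def buy_eth(self, usdc_in):
        # Pay USDC in, take ETH out, keeping eth * usdc constant.
        k = self.eth * self.usdc
        self.usdc += usdc_in
        eth_out = self.eth - k / self.usdc
        self.eth -= eth_out
        return eth_out

    def sell_eth(self, eth_in):
        # Pay ETH in, take USDC out, keeping eth * usdc constant.
        k = self.eth * self.usdc
        self.eth += eth_in
        usdc_out = self.usdc - k / self.eth
        self.usdc -= usdc_out
        return usdc_out

# Honest execution: the user swaps 10,000 USDC for ETH.
clean = Pool(100.0, 200_000.0).buy_eth(10_000)

# Sandwiched: the bot buys first, the user trades at the worse price,
# then the bot sells back into the inflated pool.
p = Pool(100.0, 200_000.0)
bot_eth = p.buy_eth(20_000)     # front-run: pushes the price up
user_eth = p.buy_eth(10_000)    # user receives less ETH than `clean`
bot_usdc = p.sell_eth(bot_eth)  # back-run: bot exits with a profit

assert user_eth < clean
assert bot_usdc > 20_000        # bot spent 20,000 USDC, got more back
```

The profit comes entirely from ordering, which is why hiding the pending swap removes the attack.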

03 · Encrypt until block inclusion

all encrypted

████████████████
████████████████
████████████████
████████████████
████████████████

If transactions are encrypted until the builder commits to a batch, there's nothing for bots to read. An encrypted mempool cuts the attack surface at its source.

Encrypting is the easy part. The hard part is decrypting only the transactions the builder actually picks: fast, without dropping anything, and without a trusted party that could reveal them early.

That's what threshold encryption does: a committee of servers jointly holds the decryption key, and enough of them have to cooperate before any ciphertext is opened. BTX is the first scheme to be small, fast, collision-free, and setup-free all at once.
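The committee idea rests on textbook Shamir secret sharing. A minimal sketch over a toy prime field (the real scheme shares key material for a pairing group; the parameters here are illustrative): any t+1 shares reconstruct the secret, t or fewer reveal nothing.

```python
# Toy (t, n) Shamir sharing: the secret is the constant term of a random
# degree-t polynomial; each server holds one evaluation point.
import random

P = 2**61 - 1          # a Mersenne prime, stand-in for a curve's scalar field
t, n = 2, 5            # need t + 1 = 3 of n = 5 servers to reconstruct

def share(secret):
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

sk = 123456789
shares = share(sk)
assert reconstruct(shares[:3]) == sk   # any t + 1 shares suffice
assert reconstruct(shares[2:]) == sk   # a different t + 1 works too
```

In a threshold decryption scheme the servers never actually reconstruct the key; they apply their shares to the ciphertext and only the combined result is revealed.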

Without encryption

Bots read every pending transaction and jump the queue. Users pay more, sometimes get nothing.

With threshold encryption

Transactions stay encrypted until the builder picks a batch. Only that batch is decrypted. The rest stay private.

What BTX adds

Shorter ciphertexts, no dropped transactions, no setup ceremony, fast enough for real block times.

Why encrypted mempools are hard to build

Every prior batched threshold encryption scheme gives up at least one of the four properties a real encrypted mempool needs: no dropped transactions, no recurring setup, small ciphertexts, and fast opens. Here is where each one breaks.

Batched IBE

Releasing the epoch key decrypts every pending transaction in the window, not just the ones the builder included. Transactions that didn't make it in still leak, so they can't safely roll over. Setup runs every epoch.

BTX fills the gap: the first BTE scheme that is collision-free, epochless, compact (ciphertext as small as plain ElGamal), and fast (decryption scales with the actual batch, not the maximum supported).

How BTX stacks up

Five properties matter for a usable encrypted mempool: collision-freeness, epochlessness, setup cost, decryption cost, and ciphertext size. Every prior scheme falls short on at least one.

Scheme                  Collision-free   Epochless   Setup                 Decryption                        Ciphertext
Batched IBE             ✓/✗              ✗           per epoch             O(B log² B) / O(Bmax log Bmax)    3·|G₁| + |G_T|
Fernando et al. (TrX)   ✓                ✗           grows with sessions   O(B log² B)                       2·|G₁| + |G_T|
BEAT-MEV                ✗                ✓           one-time              O(B²)                             3·|G₁| + |G_T|
Gong et al.             ✓/✗              ✗           per epoch             O(B log² B) / O(Bmax log Bmax)    2·|G₁| + |G_T|
BEAT++ (Agarwal)        ✗                ✓           one-time              O(Bmax log Bmax)                  2·|G₁| + |G_T|
PFE (Boneh et al.)      ✓                ✓           one-time              O(Bmax log Bmax)*                 2·|G₁| + |G_T|
BTX                     ✓                ✓           one-time              O(B log B)                        |G₁| + |G_T|

✓/✗: the scheme has two variants; the collision-free variant pays the larger Bmax-scaled decryption cost.


B = actual batch size, Bmax = maximum supported. |G₁|, |G_T| are group element sizes (BLS12-381). Adapted from BTX, Table 1.
* Original PFE is O(B²); the BTX authors' FFT-based re-implementation reduces it to O(Bmax log Bmax), still ~2× slower than BTX in practice.

Four properties, all at once

BTX is the first BTE scheme with all four. Users don't coordinate to encrypt, ciphertexts are ElGamal-sized, setup happens once, and decryption cost scales with the actual batch rather than the worst case.

Compact

Ciphertext is the same size as plain ElGamal: one source-group element plus one target-group element. Every prior BTE scheme uses at least two source elements.

Ciphertext sizes over BLS12-381, with |G₁| = 48 B and |G_T| = 576 B:

BEAT-MEV / Batched IBE   3·|G₁| + |G_T| = 720 B
TrX / BEAT++ / PFE       2·|G₁| + |G_T| = 672 B
BTX                      |G₁| + |G_T| = 624 B
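The byte counts follow directly from the quoted element sizes; a quick arithmetic check:

```python
# BLS12-381 element sizes quoted above: compressed G1 and a G_T element.
G1, GT = 48, 576

assert 3 * G1 + GT == 720   # BEAT-MEV / Batched IBE
assert 2 * G1 + GT == 672   # TrX / BEAT++ / PFE
assert 1 * G1 + GT == 624   # BTX: one G1 element plus one G_T element
```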

Collision-free

A user just encrypts. There is no slot or index to choose, so there is no censorship via index collision.

Index-based scheme: alice submits ct_A at idx 7; an attacker copies her index and submits ct_attack at idx 7.

✗ The collision spoils the slot, and Alice's transaction is censored. BTX gives the attacker no index to target.
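The attack is easy to see under a toy model of an index-based scheme, where each ciphertext must occupy one slot and a collision spoils the slot. The scheme behavior is deliberately simplified and the names are made up for illustration:

```python
# Toy model: index-based BTE drops colliding slots; collision-free BTE
# has no slots at all.

def index_based_open(submissions):
    # One ciphertext per index; a collision makes the slot undecryptable.
    by_idx = {}
    for sender, idx in submissions:
        by_idx.setdefault(idx, []).append(sender)
    return [s[0] for s in by_idx.values() if len(s) == 1]

def collision_free_open(submissions):
    # BTX-style: there is no index, so every ciphertext stands alone.
    return [sender for sender, _ in submissions]

subs = [("alice", 7), ("attacker", 7), ("bob", 3)]
assert "alice" not in index_based_open(subs)   # censored by the collision
assert "alice" in collision_free_open(subs)    # nothing to collide on
```

The attacker's junk ciphertext costs almost nothing, which is what makes index collisions a practical censorship vector.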

Epochless

A ciphertext isn't bound to a block. If it isn't included in block N, it stays valid for N+1 and beyond. No epochs.

Epoch-bound BTE

Block N: ct tagged for epoch N

Block N+1: ✗ expired; the user must re-encrypt

BTX

Block N: ct

Block N+1: ✓ still valid

Fast · dynamic batch sizing

Decryption is O(B log B) where B is the actual batch. Prior schemes pay for the maximum Bmax, always.

Example: Bmax = 512, actual batch B = 128.

Prior · O(Bmax log Bmax): 512 · log₂ 512 = 4608 units
BTX · O(B log B): 128 · log₂ 128 = 896 units

PFE and BEAT++ pay the full worst case every block; BTX pays only for the actual batch, about 81% less work at this batch size.
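The "units" above are just B·log₂ B with constants dropped, an illustrative count rather than a benchmark. The arithmetic checks out directly:

```python
# Operation counts as B * log2(B), per the stated complexities.
from math import log2

B, Bmax = 128, 512
prior = int(Bmax * log2(Bmax))   # pays for the maximum, always
btx   = int(B * log2(B))         # pays for the actual batch

assert (prior, btx) == (4608, 896)
assert btx / prior < 0.2         # roughly a fifth of the worst-case work
```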

The construction, at a glance

BTX builds on pairing-friendly elliptic curves (BLS12-381). What follows is a sketch of how users encrypt, how the committee opens a batch, and the trick that keeps opening a big batch cheap. The paper has the full protocol, proofs, and security reductions.

An ElGamal-shaped ciphertext

A user encrypts their transaction m by picking a random value r. The ciphertext is a pair: the randomness in G₁, and the message masked by a pad derived from r and the public encryption key in G_T. That pair sits in the mempool until the committee opens it.

ct = ( [r]₁ , m + r·ek )

[r]₁: the randomness, in G₁
m + r·ek: the masked message, in G_T

Core size: |G₁| + |G_T|. A short Schnorr NIZK rides alongside for CCA security.
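The shape is classic ElGamal, which is easy to see in a toy version over integers mod a prime. This uses multiplicative notation instead of the paper's additive G_T notation, and insecure illustrative parameters; it only demonstrates the two-part (randomness, masked message) structure:

```python
# Toy ElGamal: ciphertext = (randomness, message masked by a pad derived
# from the randomness and the public key). Illustration only.
import random

p = 2**64 - 59                 # a prime; nowhere near a secure group
g = 5
sk = random.randrange(2, p - 1)
ek = pow(g, sk, p)             # public encryption key

def encrypt(m, ek):
    r = random.randrange(2, p - 1)
    # Multiplicative analogue of ([r]₁, m + r·ek)
    return pow(g, r, p), m * pow(ek, r, p) % p

def decrypt(ct, sk):
    c1, c2 = ct
    pad = pow(c1, sk, p)       # rebuild the pad from the randomness
    return c2 * pow(pad, -1, p) % p

m = 424_242
assert decrypt(encrypt(m, ek), sk) == m
```

In BTX the pad lives in G_T and the committee, not a single key holder, produces it; the ciphertext shape is the same.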

One G₁ element per server, regardless of batch size

Powers of the secret key τ are Shamir-shared across N servers, one piece each. Any t+1 of them together decrypt a batch. Each server sends exactly one group element to the combiner, so its message size stays independent of batch size.
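A toy sketch of the combine step, with integers mod a prime standing in for the pairing group and made-up shares of a tiny secret key: each server contributes a single element, a power of the ciphertext's randomness, and the combiner applies Lagrange coefficients in the exponent so the key itself is never assembled.

```python
# Each server raises c1 to its Shamir share; the combiner interpolates in
# the exponent and recovers c1^sk without reconstructing sk.
p = 2**64 - 59         # toy prime modulus (stand-in for the group)
q = p - 1              # exponents live mod the group order

# Hypothetical shares of sk = 7 on f(x) = 7 + 3x: server i holds f(i).
shares = {1: 10, 2: 13}

c1 = pow(5, 12345, p)  # the ciphertext's randomness component

def lagrange_at_zero(i, xs):
    # Coefficient for server i when interpolating f(0). With these toy
    # server indices the denominators are +-1, hence invertible mod q.
    num = den = 1
    for j in xs:
        if j != i:
            num = num * (-j) % q
            den = den * (i - j) % q
    return num * pow(den, -1, q) % q

# One group element per server, however many ciphertexts are in the batch.
partials = {i: pow(c1, s, p) for i, s in shares.items()}

pad = 1
for i, part in partials.items():
    pad = pad * pow(part, lagrange_at_zero(i, shares), p) % p

assert pad == pow(c1, 7, p)   # equals c1^sk; sk itself never exists anywhere
```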

How BTX reaches O(B log B) inside the scheme

Written the obvious way, BTX's decryption costs O(B²) pairings: every ciphertext contributes a cross-term to every other. BTX observes that those cross-terms form a contiguous window of a polynomial product, computable as a middle-product via FFT. That brings the cost down to O(B log B) group operations and O(B) pairings. Same scheme, same output; only the internal arithmetic changes.

Without FFT: O(B²)

With FFT: O(B log B)

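The speedup is the standard FFT convolution trick. A sketch with toy scalars standing in for group operations: summing the cross-terms pairwise costs O(B²), but the same sums are exactly the coefficients of one polynomial product, computable in O(B log B). (The real scheme needs only a contiguous window of these coefficients, the middle product.)

```python
# Cross-term sums as polynomial coefficients, via a small radix-2 FFT.
import cmath, random

def fft(a, invert=False):
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def multiply(a, b):
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft([complex(x) for x in a] + [0j] * (n - len(a)))
    fb = fft([complex(x) for x in b] + [0j] * (n - len(b)))
    prod = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [round((x / n).real) for x in prod[:len(a) + len(b) - 1]]

random.seed(0)
B = 64
a = [random.randrange(1, 100) for _ in range(B)]
b = [random.randrange(1, 100) for _ in range(B)]

# Naive: every index pair contributes, O(B^2) work.
naive = [sum(a[i] * b[j] for i in range(B) for j in range(B) if i + j == k)
         for k in range(2 * B - 1)]

assert multiply(a, b) == naive   # same sums, O(B log B) work
```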

Faster than the best prior schemes

The authors reimplemented PFE and BEAT++ in the same aggressively optimized C++ codebase as BTX: AVX-512, FFT backends, tuned MSM and pairing paths. That makes this a comparison against tuned baselines, not reference code.

2.0×faster total decryption at B=512
PFE1197 ms
BTX598 ms
4.2×faster per ciphertext · 1 pairing vs 4
PFE0.723 ms/ct
BTX0.171 ms/ct

Decryption time vs batch size (per ciphertext, precompute setting), measured from B = 32 to 512. At B = 512:

PFE precompute: 1.596 ms/ct
BTX precompute: 0.959 ms/ct
Speedup: 1.66×

BEAT++ tracks BTX within fractions of a millisecond: same internal algebra, but BEAT++ is collision-prone. The censorship resistance is free.
Intel Xeon Platinum 8488C (3.8 GHz), blst over BLS12-381 with AVX-512, Clang 21.1.8. Per-ciphertext from BTX Table 4; totals from the abstract.

BTX: Simple and Efficient Batch Threshold Encryption

Amit Agarwal · Sourav Das · Babak Poorebrahim Gilkalaye · Peter Rindal · Victor Shoup, Category Labs

Read the paper (PDF)

This page is an interactive summary maintained by MIP Land. For formal definitions, security proofs, and the complete construction, read the paper. BTX is Category Labs research, not a Monad Improvement Proposal.