BTX: Batched Threshold Encryption
A committee of servers decrypts any chosen subset of ciphertexts while the rest stay private. It's the key primitive behind encrypted mempools that stop MEV.
Shortest ciphertext. Collision-free. Epochless.
Fast enough for tight latency budgets.
What MEV looks like
Every pending transaction on a public chain sits in the mempool in plain view. Before a block is built, anyone can read what's coming. Bots read. Bots react.
That's how MEV happens. Here are the three shapes it takes, and the fix.
01 · The mempool is public
Pending transactions are broadcast before they execute. Anyone watching the network sees them, including searchers running front-running bots.
02 · Bots sandwich users
A bot spots a pending swap, buys ahead to push the price up, lets the user's swap execute at the worse price, then sells behind it. That's a sandwich.
03 · Encrypt until block inclusion
If transactions are encrypted until the builder commits to a batch, there's nothing for bots to read. An encrypted mempool cuts the attack surface at its source.
Encrypting is the easy part. The hard part is decrypting only the transactions the builder actually picks, fast, without dropping anything, without a trusted party that could reveal them early.
That's what threshold encryption does: a committee of servers jointly holds the decryption key, and enough of them must cooperate before any ciphertext is opened. BTX is the first scheme to be small, fast, collision-free, and setup-free all at once.
Without encryption
Bots read every pending transaction and jump the queue. Users pay more, sometimes get nothing.
With threshold encryption
Transactions stay encrypted until the builder picks a batch. Only that batch is decrypted. The rest stay private.
What BTX adds
Shorter ciphertexts, no dropped transactions, no setup ceremony, fast enough for real block times.
Why encrypted mempools are hard to build
Every prior batched threshold encryption scheme gives up at least one of the four properties a real encrypted mempool needs: no dropped transactions, no recurring setup, small ciphertexts, and fast opens.
Batched IBE
Releasing the epoch key decrypts every pending transaction in the window, not just the ones the builder included. Transactions that didn't make it in still leak, so they can't safely roll over. Setup runs every epoch.
BTX fills the gap: the first BTE scheme that is collision-free, epochless, compact (ciphertext as small as plain ElGamal), and fast (decryption scales with the actual batch, not the maximum supported).
How BTX stacks up
Five properties matter for a usable encrypted mempool. Every prior scheme drops at least one.
| Scheme | Collision-free | Epochless | Setup | Decryption | Ciphertext |
|---|---|---|---|---|---|
| Batched IBE | ✓/✗ | ✗ | per epoch | O(B log² B) / O(Bmax log Bmax) | 3·|G₁| + |G_T| |
| Fernando et al. (TrX) | ✓ | ✓ | grows with sessions | O(B log² B) | 2·|G₁| + |G_T| |
| BEAT-MEV | ✗ | ✓ | one-time | O(B²) | 3·|G₁| + |G_T| |
| Gong et al. | ✓/✗ | ✗ | per epoch | O(B log² B) / O(Bmax log Bmax) | 2·|G₁| + |G_T| |
| BEAT++ (Agarwal) | ✗ | ✓ | one-time | O(Bmax log Bmax) | 2·|G₁| + |G_T| |
| PFE (Boneh et al.) | ✓ | ✓ | one-time | O(Bmax log Bmax)* | 2·|G₁| + |G_T| |
| BTX | ✓ | ✓ | one-time | O(B log B) | |G₁| + |G_T| |
B = actual batch size, Bmax = maximum supported. |G₁|, |G_T| are group element sizes (BLS12-381). Adapted from BTX, Table 1.
* Original PFE is O(B²); the BTX authors' FFT-based re-implementation reduces it to O(Bmax log Bmax), still ~2× slower than BTX in practice.
Four properties, all at once
BTX is the first BTE scheme with all four. Users don't coordinate to encrypt, setup happens once, and decryption cost scales with the actual batch rather than the worst case.
Compact
Ciphertext is the same size as plain ElGamal: one source element plus one target element. Every prior BTE scheme uses at least two source elements.
With BLS12-381: |G₁| = 48B, |G_T| = 576B
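A quick back-of-envelope on those sizes, using the BLS12-381 serialized element sizes quoted above (48 bytes for G₁, 576 bytes for G_T). The "prior scheme" figure assumes the two-source-element layout the text describes; it is an illustration, not a measurement.

```python
# Serialized element sizes on BLS12-381, as quoted in the text.
G1 = 48   # bytes per compressed G1 element
GT = 576  # bytes per GT element

btx_core = G1 + GT        # BTX core ciphertext: one G1 + one GT
prior = 2 * G1 + GT       # typical prior BTE layout: two G1 + one GT

print(btx_core, prior)    # 624 672
```

So the core ciphertext shrinks by one G₁ element (48 bytes) relative to the common two-source-element layout; the Schnorr NIZK for CCA security is extra in both cases.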
Collision-free
A user just encrypts. Nothing to collide on, so no censorship via index collision.
Epochless
A ciphertext isn't bound to a block. If it isn't included in N, it stays valid for N+1 and beyond. No epochs.
Fast · dynamic batch sizing
Decryption is O(B log B) where B is the actual batch. Prior schemes pay for the maximum Bmax, always.
PFE and BEAT++ pay for the maximum Bmax every block, however small the actual batch; BTX pays only for the batch it decrypts.
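To make the scaling claim concrete, here is a toy comparison of the two cost models, O(Bmax log Bmax) versus O(B log B), at a hypothetical Bmax of 2048. The numbers are operation-count ratios under the stated asymptotics, not measured timings.

```python
import math

# Illustrative only: compare a fixed O(Bmax log Bmax) cost against a
# dynamic O(B log B) cost as the actual batch size B varies.
Bmax = 2048  # hypothetical maximum supported batch

for B in (256, 1024, 2048):
    fixed = Bmax * math.log2(Bmax)   # paid regardless of the batch
    dynamic = B * math.log2(B)       # tracks the actual batch
    print(f"B={B}: fixed/dynamic = {fixed / dynamic:.1f}x")
```

At a quarter-full block the fixed-cost schemes do several times the work of a batch-proportional one; only at a completely full block do the two models coincide.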
The construction, at a glance
BTX builds on pairing-friendly elliptic curves (BLS12-381). What follows is a sketch of how users encrypt, how the committee opens a batch, and the trick that keeps opening a big batch cheap. The paper has the full protocol, proofs, and security reductions.
An ElGamal-shaped ciphertext
A user encrypts their transaction m by picking a random value r. The ciphertext is a pair: the randomness in G₁, and the message masked by a pad derived from r and the public encryption key in G_T. That pair sits in the mempool until the committee opens it.
The ciphertext is the pair ([r]₁, m + r · ek): the randomness in G₁, and the masked message in G_T. Core size: |G₁| + |G_T|. A short Schnorr NIZK rides alongside for CCA security.
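The ElGamal shape of the ciphertext can be sketched with toy parameters. This is plain ElGamal over a multiplicative group mod a prime, standing in for BLS12-381; there is no pairing, no committee, and no NIZK here, and the modulus and generator are illustrative, not secure.

```python
import secrets

# Toy ElGamal showing the *shape* of a BTX ciphertext: one randomness
# element plus one masked message. Real BTX works over BLS12-381 and
# attaches a Schnorr NIZK; these parameters are NOT secure.
p = 2**127 - 1   # a Mersenne prime, chosen only for illustration
g = 3            # toy base element

sk = secrets.randbelow(p - 2) + 1   # the committee's (shared) secret key
ek = pow(g, sk, p)                  # public encryption key

def encrypt(m: int) -> tuple[int, int]:
    r = secrets.randbelow(p - 2) + 1
    # (analogue of [r] in G1, message masked by a pad derived from r and ek)
    return pow(g, r, p), (m * pow(ek, r, p)) % p

def decrypt(c1: int, c2: int) -> int:
    # strip the pad using the secret key
    return (c2 * pow(c1, -sk, p)) % p

m = 123456789
c1, c2 = encrypt(m)
assert decrypt(c1, c2) == m
```

In BTX the committee never reassembles `sk` in one place; decryption shares are combined instead, as the next section sketches.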
One G₁ element per server, regardless of batch size
Powers of the secret key τ are Shamir-shared across N servers, one piece each. Any t+1 of them together decrypt a batch. Each server sends exactly one group element to the combiner, so its message size stays independent of batch size.
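Shamir sharing itself is standard; a minimal sketch over a prime field shows how a secret exponent could be split among N servers so that any t+1 reconstruct it. This is textbook Shamir with illustrative parameters, not BTX's actual distributed key generation.

```python
import secrets

P = 2**255 - 19   # toy prime field modulus (illustrative)
t, N = 2, 5       # any t+1 = 3 of the 5 servers can reconstruct

def share(secret: int) -> list[tuple[int, int]]:
    # Random degree-t polynomial with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, N + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

tau = secrets.randbelow(P)
assert reconstruct(share(tau)[:3]) == tau   # any t+1 shares suffice
```

In the actual protocol, servers apply their share in the exponent and the combiner interpolates "in the group", so τ is never reconstructed in the clear.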
How BTX reaches O(B log B) inside the scheme
Written the obvious way, BTX's decryption costs O(B²) pairings: every ciphertext contributes a cross-term to every other. BTX observes that those cross-terms form a contiguous window of a polynomial product, computable as a middle-product via FFT. That brings it down to O(B log B) group operations and O(B) pairings. Same scheme, same output, just the internal arithmetic.
Without FFT: O(B²)
With FFT: O(B log B)
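The speedup hinges on a familiar fact: all pairwise cross-terms of two coefficient lists are exactly the coefficients of a polynomial product, so one FFT-based convolution replaces the naive double loop. The sketch below demonstrates that equivalence with a complex-double FFT as a stand-in for the field/group FFT BTX uses; it is the generic technique, not BTX's middle-product routine.

```python
import cmath

def fft(a: list[complex], invert: bool = False) -> list[complex]:
    # Recursive radix-2 Cooley-Tukey FFT (length must be a power of two).
    n = len(a)
    if n == 1:
        return a
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def convolve(a: list[int], b: list[int]) -> list[int]:
    # O(B log B): multiply pointwise in the frequency domain.
    n = 1
    while n < len(a) + len(b) - 1:
        n *= 2
    fa = fft([complex(x) for x in a] + [0j] * (n - len(a)))
    fb = fft([complex(x) for x in b] + [0j] * (n - len(b)))
    prod = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [round((x / n).real) for x in prod[: len(a) + len(b) - 1]]

def convolve_naive(a: list[int], b: list[int]) -> list[int]:
    # O(B^2): every term crossed with every other.
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

a, b = [1, 2, 3, 4], [5, 6, 7]
assert convolve(a, b) == convolve_naive(a, b)
```

BTX only needs a contiguous window of that product (a middle product), which the same FFT machinery computes without materializing the full convolution.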
Faster than the best prior schemes
The authors reimplemented PFE and BEAT++ in the same aggressively optimized C++ codebase as BTX: AVX-512, FFT backends, tuned MSM and pairing paths. That makes this a comparison against tuned baselines, not reference code.
Decryption time vs batch size.
At B = 512: PFE precompute 1.596 ms/ct vs BTX precompute 0.959 ms/ct, a 1.66× speedup.
BEAT++ traces BTX within fractions of a millisecond — same internal algebra, but BEAT++ is collision-prone. The censorship resistance is free.
Intel Xeon Platinum 8488C (3.8 GHz), blst over BLS12-381 with AVX-512, Clang 21.1.8. Per-ciphertext from BTX Table 4; totals from the abstract.
BTX: Simple and Efficient Batch Threshold Encryption
Amit Agarwal · Sourav Das · Babak Poorebrahim Gilkalaye · Peter Rindal · Victor Shoup, Category Labs
Read the paper (PDF)
Related work cited by BTX
Weighted batched threshold encryption
Agarwal et al. (BEAT++) · ePrint 2025/2115
TrX: Encrypted mempools in high-performance BFT protocols
Fernando, Policharla, Tonkikh, Xiang · ePrint 2025/2032
Efficient batch threshold encryption using partial fraction techniques
Boneh, Nema, Roy, Tas (PFE) · ePrint 2026/674
This page is an interactive summary maintained by MIP Land. For formal definitions, security proofs, and the complete construction, read the paper. BTX is Category Labs research, not a Monad Improvement Proposal.