The Core Problem RedStuff Solves

Any decentralized storage system must solve three competing goals:

High availability (data always retrievable)

Low redundancy (storage isn’t insanely expensive)

Byzantine tolerance (nodes can be malicious, not just offline)

Most systems sacrifice one:

IPFS → cheap, but no availability guarantees

Filecoin → guarantees, but huge replication overhead

Arweave → permanent, but inefficient and archival

RedStuff solves all three simultaneously.

Why Simple Erasure Coding Is Not Enough

Classic erasure coding (Reed–Solomon)

Split file into k chunks

Add n−k parity chunks

Any k of n can reconstruct the file
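
To make the k-of-n idea concrete, here is a minimal Python sketch. It uses a single XOR parity chunk as a toy stand-in (not real Reed–Solomon), which gives k = 2 data chunks and n = 3 chunks total: any 2 of the 3 rebuild the file.

# Toy stand-in for Reed-Solomon: one XOR parity chunk gives a (k=2, n=3)
# code where any 2 of the 3 chunks reconstruct the original data.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = b"hello world!"                 # pretend this is the file
d1, d2 = data[:6], data[6:]            # k = 2 data chunks
parity = xor_bytes(d1, d2)             # 1 parity chunk, n = 3 total

# d1 is lost: the surviving 2 chunks (d2, parity) rebuild it.
assert xor_bytes(d2, parity) + d2 == data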

Problem:

Assumes nodes are honest and merely crash or drop offline

Breaks when nodes lie about or corrupt the chunks they serve

Walrus assumes malicious nodes, not just crashes.
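
The toy sketch below shows why this matters: with plain parity, a single lying node silently corrupts the reconstruction and the decoder never notices. (Again a minimal XOR illustration, not the actual Reed–Solomon decoder.)

# A crash-only code has no defense against forged chunks: reconstruction
# "succeeds" but yields the wrong data, with no error raised.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

d1, d2 = b"hello ", b"world!"
parity = xor_bytes(d1, d2)

forged_d2 = b"WORLD!"                    # a Byzantine node returns garbage
wrong_d1 = xor_bytes(forged_d2, parity)  # decoder cannot tell anything is off
assert wrong_d1 != d1                    # silent corruption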

Two-Dimensional Erasure Coding

Instead of a 1D stripe, RedStuff encodes data in a 2D matrix.

Step 1: Blob → Matrix

Suppose a blob is split into a matrix:

D11 D12 D13
D21 D22 D23
D31 D32 D33

Each Dij is a data block.
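
A minimal sketch of this split in Python. The 3x3 shape and the zero-padding are illustrative choices only; the real encoder derives symbol sizes and matrix dimensions from the storage committee, which this sketch does not model.

# Illustration only: chop a blob into a 3x3 grid of equal-size blocks.
def blob_to_matrix(blob, rows=3, cols=3):
    size = -(-len(blob) // (rows * cols))            # ceiling division
    padded = blob.ljust(rows * cols * size, b"\0")   # pad to fill the grid
    return [[padded[(r * cols + c) * size:(r * cols + c + 1) * size]
             for c in range(cols)]
            for r in range(rows)]

D = blob_to_matrix(b"some blob contents headed for Walrus")
# D[0][0] is D11, D[1][2] is D23, D[2][2] is D33, and so on.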

Step 2: Encode Rows and Columns

RedStuff applies erasure coding twice:

Row encoding

Each row gets parity blocks:

Row 1: D11 D12 D13 | R1
Row 2: D21 D22 D23 | R2
Row 3: D31 D32 D33 | R3

Column encoding

Each column gets parity blocks:

Col 1: D11 D21 D31 | C1
Col 2: D12 D22 D32 | C2
Col 3: D13 D23 D33 | C3

Now every data block is protected horizontally and vertically.

This is the 2D redundancy.
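
Sketch of the double encoding, continuing the toy scheme: a single XOR parity per row and per column stands in for the real erasure code, which would add several parity symbols in each dimension.

# Illustration only: one XOR parity per row (R1..R3) and per column (C1..C3).
from functools import reduce

def xor_bytes(*chunks):
    return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*chunks))

D = [[b"D11", b"D12", b"D13"],
     [b"D21", b"D22", b"D23"],
     [b"D31", b"D32", b"D33"]]

row_parity = [xor_bytes(*row) for row in D]        # R1, R2, R3
col_parity = [xor_bytes(*col) for col in zip(*D)]  # C1, C2, C3
# Every block D[i][j] is now covered by one row code and one column code.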

Why This Is Byzantine-Resilient

Let’s analyze failure modes.

Case A: Nodes go offline

Row parities rebuild blocks missing within a row

Column parities rebuild blocks missing within a column, even when an entire row is lost (sketched below)
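
A recovery sketch under the same toy XOR scheme: a single lost block comes back from its row, and even a fully lost row comes back block by block from the columns.

# Illustration only: rebuilding lost blocks from either dimension.
from functools import reduce

def xor_bytes(*chunks):
    return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*chunks))

D = [[b"D11", b"D12", b"D13"],
     [b"D21", b"D22", b"D23"],
     [b"D31", b"D32", b"D33"]]
R = [xor_bytes(*row) for row in D]          # row parities
C = [xor_bytes(*col) for col in zip(*D)]    # column parities

# D22 is lost: its own row repairs it.
assert xor_bytes(D[1][0], D[1][2], R[1]) == D[1][1]

# The whole second row is lost: each column repairs its block independently.
assert [xor_bytes(D[0][j], D[2][j], C[j]) for j in range(3)] == D[1]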

Case B: Nodes lie (Byzantine)

Invalid slivers are detected via:

cryptographic commitments

consistency checks across dimensions
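
A sketch of the detection step, with plain SHA-256 digests standing in for the commitment scheme Walrus actually uses; the point is only that a forged sliver fails verification immediately.

# Illustration only: hash digests as stand-in commitments over slivers.
import hashlib

def commit(sliver: bytes) -> bytes:
    return hashlib.sha256(sliver).digest()

# Published at write time alongside the blob's metadata.
commitments = [commit(s) for s in (b"sliver-1", b"sliver-2", b"sliver-3")]

def verify(index: int, sliver: bytes) -> bool:
    return commit(sliver) == commitments[index]

assert verify(0, b"sliver-1")            # honest sliver passes
assert not verify(0, b"forged sliver")   # Byzantine garbage is caught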

Case C: Adversarial delay

A node cannot wait for a challenge and only then fetch the data it claims to store

Proofs work asynchronously, without relying on network timing

Delayed or missing responses translate into penalties

A node must therefore keep storing the data continuously; it cannot fake possession after the fact.

Mathematical Guarantees

Let:

f = fraction of Byzantine nodes

n = total number of storage nodes

k = minimum number of slivers needed to reconstruct a blob

RedStuff guarantees:

Data availability whenever f < 1/3, i.e. fewer than a third of the nodes holding slivers are Byzantine

Recovery bandwidth proportional only to the data actually lost, not to the whole blob

Reconstruction probability → 1 as the matrix grows

This is far stronger than 1D erasure coding.
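
For a quick sanity check of the 1/3 threshold, here is a small sketch; the node counts are arbitrary examples, not protocol parameters.

# Largest Byzantine node count whose fraction stays below 1/3.
def max_byzantine(n: int) -> int:
    return (n - 1) // 3

for n in (10, 100, 1000):
    b = max_byzantine(n)
    print(f"n={n}: up to {b} Byzantine nodes tolerated (fraction {b/n:.1%} < 1/3)")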

@Walrus 🦭/acc #walrus $WAL
