I've spent the past few days poking around new things in the Sui ecosystem and stumbled onto Walrus's Devnet. To be honest, I went in expecting to watch a joke unfold, because the decentralized storage space right now is as crowded as the Beijing subway at rush hour: Filecoin is playing dead, Arweave is so expensive it hurts someone like me who hoards massive amounts of junk data, and Greenfield keeps giving me the feeling of rebranded cloud storage. So I spent most of the weekend running Walrus nodes and doing upload tests, and ended up staring at the screen in a daze for half a day.

That's not to say it's flawless now; on the contrary, the various small issues on the testnet almost made me want to smash my keyboard. But what I see here is not another project trying to paper over technical mediocrity with token incentives; it's a genuine attempt at the awkward gap between data availability and storage cost. Everyone is piling into the DA layer, and Celestia has driven prices down, but a DA layer only needs to hold data long enough for on-chain validation. What I want to store is hundreds of megabytes of video, terabytes of training sets, even an entire front-end page, and for that there's almost nothing usable on the market.

I tried uploading a 2GB encrypted dataset to Walrus, and the speed was surprisingly fast. That inevitably brings up the erasure coding underneath; the term sounds familiar, but the efficiency Walrus gets out of it on Sui's Move architecture really is different. It doesn't need Filecoin-style expensive hardware to grind out replication proofs; that trade of efficiency for decentralization feels a bit dated in 2024. Walrus is more like a shrewd mover: it breaks a file into fragments, encodes in redundancy, and scatters them across the network, and as long as enough fragments survive, the data can be reconstructed. In practice this RedStuff algorithm feels light; a node runs less like a mining rig and more like a background process.
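To make the "break it up, add redundancy, rebuild from survivors" idea concrete, here is a toy sketch in Python. It uses a single XOR parity shard purely for clarity; the real RedStuff scheme is a two-dimensional encoding that tolerates losing a large fraction of the network rather than one node, and nothing below is Walrus code.

```python
# Toy illustration of the erasure-coding idea: cut a blob into shards,
# add redundancy, and rebuild after losing a shard. Single XOR parity
# only -- not RedStuff, just the shape of the trick.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_into_shards(data: bytes, k: int) -> list[bytes]:
    """Pad the blob and cut it into k equal-sized data shards."""
    shard_len = -(-len(data) // k)                       # ceiling division
    padded = data.ljust(shard_len * k, b"\0")
    return [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]

def parity_of(shards: list[bytes]) -> bytes:
    """One parity shard: the byte-wise XOR of every data shard."""
    out = shards[0]
    for s in shards[1:]:
        out = xor_bytes(out, s)
    return out

blob = b"pretend this is my 2GB encrypted dataset"
shards = split_into_shards(blob, k=4)
parity = parity_of(shards)

lost = 2                                                 # one storage node vanishes
survivors = [s for i, s in enumerate(shards) if i != lost]
shards[lost] = parity_of(survivors + [parity])           # rebuild the missing shard
assert b"".join(shards).rstrip(b"\0") == blob
```

The point is that recovery only needs a quorum of fragments, not a full replica on every node, which is why the hardware footprint feels so much lighter than proof-of-replication setups.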

Compare it with Arweave. I respect AR's permanent-storage narrative, but the pay-once-up-front barrier is a nightmare for high-frequency, interactive DApps. If I want to build a decentralized Instagram where users upload images every second, AR would bankrupt me, and on IPFS I'd still have to find a pinning service and pay a monthly fee, which drags me right back onto the old Web2 path. Walrus gives me the impression that it doesn't want to be a museum; it wants to be a decentralized AWS S3. It doesn't force-bundle the heavy concept of permanence, and instead offers flexible storage durations, which is the most practical answer for the vast majority of internet data that only needs to live a few years.

I also hit plenty of pain points during testing. The current documentation reads like mush, with a lot of ambiguous parameter descriptions; if I hadn't gone digging through chat logs on Discord, I couldn't even have gotten the upload script to run. The node incentive model is also still a black box: the technical white paper sounds great, but it's unclear whether this will be a hardware arms race like Filecoin or a staking game like Sui, which makes me hesitant to invest in machines. And retrieval speed is wildly unstable across nodes; sometimes a file opens instantly, other times it spins until I question my existence. That suggests the network topology isn't fully optimized yet, or the quality of early nodes is all over the place.
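If you want to put numbers on the "sometimes instant, sometimes spinning" behaviour instead of taking my word for it, a throwaway harness like the one below is enough. The aggregator URL, the `/v1/blobs/<id>` path, and the blob ID are placeholders I made up for illustration; check the current Walrus docs for the real endpoint shape before running anything like this.

```python
# Rough latency harness: fetch the same blob repeatedly through an aggregator
# and log the spread. Endpoint and blob ID below are placeholders, not real.
import statistics
import time
import urllib.request

AGGREGATOR = "https://aggregator.example.com"    # placeholder aggregator URL
BLOB_ID = "<your-blob-id>"                       # the blob ID the CLI printed after upload
ROUNDS = 20

latencies = []
for _ in range(ROUNDS):
    start = time.monotonic()
    with urllib.request.urlopen(f"{AGGREGATOR}/v1/blobs/{BLOB_ID}", timeout=60) as resp:
        resp.read()
    latencies.append(time.monotonic() - start)

print(f"min {min(latencies):.2f}s  median {statistics.median(latencies):.2f}s  "
      f"max {max(latencies):.2f}s  stdev {statistics.pstdev(latencies):.2f}s")
```

In my runs the spread between the best and worst fetch was the telling number, not the average, which is what makes me blame node quality rather than the protocol itself.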

But this is exactly where the opportunity lies. If you wait until a project is as mature as Ethereum before getting in, all that's left is fighting over scraps. Backed by Sui's high-concurrency advantages, Walrus has a real shot at the biggest pain point in the storage track: read-write consistency under high concurrency. Filecoin's retrieval market has been stagnant for years and still looks the same today; retrieving data is not only slow but also at the miners' whim. In Walrus's architecture, storage nodes and retrieval paths seem more decoupled. When I was reading back that 2GB file, I didn't feel any significant addressing delay; the experience was very close to a traditional CDN.

There's an interesting game here: how Walrus handles garbage data. Because it's cheap, users will inevitably fill it with junk. Arweave filters with high prices; Walrus seems to want to regulate this through its token model. I uploaded a pile of meaningless gibberish on the testnet and the system accepted all of it, but I have a vague feeling that something like rent expiration will show up once mainnet launches. That would actually be healthier than permanent storage: 99% of the data on the internet doesn't need to be kept for a hundred years. What we need is cheap, fast, censorship-resistant storage for ten years, not expensive digital tombstones.
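To be clear about what I mean by "rent expiration", here is the shape of the mechanism I'm imagining, nothing more. None of this comes from the Walrus spec; it's purely my speculation about what an epoch-based lease could look like.

```python
# Speculative sketch of an epoch-based storage lease: a blob is paid for over
# a number of epochs and becomes garbage-collectable once the lease lapses,
# unless someone tops it up. Not from any Walrus documentation.
from dataclasses import dataclass

@dataclass
class StorageLease:
    blob_id: str
    start_epoch: int
    paid_epochs: int          # how many epochs of storage were purchased

    def expired(self, current_epoch: int) -> bool:
        return current_epoch >= self.start_epoch + self.paid_epochs

    def extend(self, extra_epochs: int) -> None:
        """Topping up keeps the blob alive past the original term."""
        self.paid_epochs += extra_epochs

lease = StorageLease(blob_id="my-junk-upload", start_epoch=10, paid_epochs=5)
print(lease.expired(current_epoch=14))   # False, still inside the paid window
print(lease.expired(current_epoch=15))   # True, eligible for garbage collection
lease.extend(extra_epochs=10)
print(lease.expired(current_epoch=15))   # False again after topping up
```

A model like this is what lets the network shed junk without pricing out honest users up front, which is exactly the trade Arweave refuses to make.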

For developers, being able to control storage permissions directly from Sui Move is simply a dimensionality-reduction attack on the competition. When you minted NFTs on Ethereum, the images actually sat on IPFS, and once the link rotted, the images were gone. On Walrus, storage resources are programmable, which means I can create a file that can only be viewed by holders of a specific NFT, without standing up a Web2 server in the middle to do the authentication. The smoothness of that native integration is the strongest impression I came away with after running a few demos. It's not an external hard drive; it's a nerve ending growing inside the Sui ecosystem.
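For a sense of what "only holders of a specific NFT can view this file" means as a rule, here is a toy model of the policy check. On Sui the check would live in a Move module (plus client-side decryption of the blob), not in Python; the type names and the ownership lookup below are invented purely to show the shape of the rule, not any real Walrus or Sui API.

```python
# Toy model of an NFT-gated read policy. The "chain state" is faked; in
# reality ownership would be queried from a Sui fullnode and enforced on-chain.
from dataclasses import dataclass

@dataclass(frozen=True)
class GatedBlob:
    blob_id: str
    required_nft_type: str    # e.g. the Move struct type of the gating NFT

def owned_nft_types(address: str) -> set[str]:
    """Stand-in for an on-chain ownership query."""
    fake_chain_state = {
        "0xalice": {"0xcafe::membership::Pass"},
        "0xbob": set(),
    }
    return fake_chain_state.get(address, set())

def can_read(blob: GatedBlob, reader: str) -> bool:
    return blob.required_nft_type in owned_nft_types(reader)

blob = GatedBlob(blob_id="gated-photo", required_nft_type="0xcafe::membership::Pass")
print(can_read(blob, "0xalice"))   # True  -- holds the pass NFT
print(can_read(blob, "0xbob"))     # False -- no pass, no pixels
```

The interesting part is where this logic runs: when storage is a programmable on-chain resource, the rule and the data live in the same trust domain, with no Web2 middlebox to keep honest.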

The storage track desperately needs a catfish to stir up the pond. Everyone talks about modularity, but the storage layer's modularization has always been lukewarm. Walrus right now is like Solana when it first came out: criticized for centralization, mocked for downtime, but only people who actually run the code know how irreplaceable that throughput feels. I'm not sure it can ultimately take down Filecoin; FIL has a huge sunk-cost moat. But Walrus will definitely carve out a serious slice of the hot-data cake.

If you're one of those developers who despair at existing decentralized storage options, or a researcher hunting for the next infrastructure alpha, I recommend trying their Devnet. Don't just read the white paper; the formulas there are rigorous, but they hide how rough the engineering still is. Feel the upload speed for yourself, complain about the terrible documentation, and you might catch a whiff of early alpha instead. Stay in this circle long enough and you'll find that the perfectly packaged projects are usually the ones out to harvest you, while products with a solid technical core but a still-unpolished experience are the ones worth the effort to track.

My only real worry now is whether the team will play games with the token distribution. Storage projects have the hardest economic models to design: give miners too much and the price crashes; make it too expensive and no users show up. I hope Walrus finds that balance. After this round of testing, despite plenty of errors and a CLI that sometimes just crashed, the gut feeling that "this thing can work" keeps getting stronger. That's not blind faith; it's a judgment about where the technical architecture is heading. RedStuff combined with Sui's performance is a pairing that, as long as the team doesn't fumble the operations side, can tear a hole in this stagnant storage market.

@Walrus 🦭/acc $WAL

#Walrus