This weekend I spent two days poking at Walrus testnet nodes. Honestly, the experience was better than I expected, though there are things to criticize. On previous decentralized storage projects, the most frustrating part was the premium you pay for permanent storage on something like Arweave, which is complete overkill for most non-financial blob data. Walrus takes a different approach: an erasure-coding-based storage layer inside the Sui ecosystem, clearly aimed at "high-frequency interactions".
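For anyone unfamiliar with the idea: erasure coding splits a blob into shards such that any sufficient subset can rebuild the original, so the network stores a small multiple of the blob size instead of a full copy on every node. The sketch below is the simplest possible illustration, using a single XOR parity shard; it is emphatically not Walrus's actual encoding scheme (theirs tolerates far more shard loss), just the core idea.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blob: bytes, k: int) -> list[bytes]:
    """Split blob into k equal data shards plus 1 XOR parity shard."""
    shard_len = -(-len(blob) // k)                  # ceil(len / k)
    padded = blob.ljust(k * shard_len, b"\x00")     # pad to equal shards
    shards = [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]
    shards.append(reduce(xor_bytes, shards))        # parity = XOR of all
    return shards

def recover(shards: list[bytes | None]) -> list[bytes]:
    """Rebuild at most one missing shard from the remaining ones."""
    missing = [i for i, s in enumerate(shards) if s is None]
    assert len(missing) <= 1, "XOR parity only survives one loss"
    if missing:
        present = [s for s in shards if s is not None]
        shards[missing[0]] = reduce(xor_bytes, present)
    return shards

# Example: lose any one of the 5 shards, reconstruct it anyway.
data = b"some NFT metadata or a chunk of a video file"
shards = encode(data, k=4)
shards[2] = None                                    # simulate a dead node
assert b"".join(recover(shards)[:4]).rstrip(b"\x00") == data
```

Production schemes use Reed-Solomon-style codes so that losing many shards at once is survivable; the XOR version here only survives one, but the storage-vs-redundancy trade-off works the same way.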
I tested uploading a few hundred megabytes of video files, and Walrus's response speed is surprisingly fast, almost S3-like, which puts it in a different league from Filecoin. Filecoin's retrieval market still doesn't really work; pulling data back out is so slow it makes you want to smash your keyboard, so in practice it's only good for cold storage. Walrus's current architecture, by contrast, is clearly going after the NFT metadata and DApp frontend hosting market. That said, I did hit a bug during testing: the CLI sometimes throws inexplicable connection errors and only succeeds after a few retries, probably a node synchronization issue. A simple retry wrapper, sketched below, was enough to get my uploads through.
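Here is roughly what I mean by a retry wrapper: a thin Python shim around the CLI. The `walrus store <FILE>` invocation matches what the testnet docs showed when I tested, but treat the exact command line and output format as assumptions to check against your installed version.

```python
import subprocess
import time

def store_with_retry(path: str, retries: int = 3, backoff: float = 2.0) -> str:
    """Store a file via the Walrus CLI, retrying on transient failures.

    Assumes a `walrus store <FILE>` subcommand as in the testnet docs;
    adjust the command line to match your CLI version.
    """
    for attempt in range(1, retries + 1):
        result = subprocess.run(
            ["walrus", "store", path],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return result.stdout              # stdout includes the blob ID
        print(f"attempt {attempt} failed: {result.stderr.strip()}")
        time.sleep(backoff * attempt)         # linear backoff between tries
    raise RuntimeError(f"walrus store failed after {retries} attempts")

# Example:
# print(store_with_retry("demo_video.mp4"))
```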
From a technical perspective, using Sui's consensus to manage storage metadata is a clever move that avoids the pitfall of building yet another bloated standalone chain. Compared with solutions like EthStorage that lean entirely on Ethereum L1 for security, Walrus has far more room to control costs. The documentation is still too thin at this stage, though; many parameters can only be understood by digging through the source code. If mainnet can sustain this throughput, it really could fix the current pain point of slow-loading on-chain media. This kind of "lightweight" storage narrative is far more pragmatic than the projects that keep pitching a "database for human civilization".
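The cost-flexibility point is easiest to see with back-of-the-envelope math: full replication stores n whole copies of a blob, while a k-of-n erasure code stores n shards each about 1/k of the blob's size, for roughly n/k total overhead. The parameters below are hypothetical, picked only to make the contrast visible; they are not Walrus's published numbers.

```python
# Back-of-the-envelope overhead comparison. The (k, n) values are
# hypothetical, chosen for illustration, not Walrus's actual parameters.

blob_mb = 500                       # roughly what I uploaded in testing

# Full replication: each of n_replicas nodes stores the whole blob.
n_replicas = 25
replication_total = blob_mb * n_replicas            # 12,500 MB

# k-of-n erasure coding: n shards, each ~blob/k in size; any k rebuild.
k, n = 334, 1000
erasure_total = blob_mb * n / k                     # ~1,497 MB (~3x)

print(f"replication : {replication_total:>7.0f} MB across the network")
print(f"erasure code: {erasure_total:>7.0f} MB across the network")
```

Same blob, same node count ballpark, an order of magnitude less raw data on the network; that ratio is the knob an erasure-coded design can tune and a full-replication design can't.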
@Walrus 🦭/acc $WAL


