After testing Bitroot, my strongest impression is that it genuinely exceeded expectations. Over the years I have tested countless public chains, from early sharding experiments to assorted Layer 2 solutions. The vast majority of so-called technical breakthroughs are patch optimizations to one module or another, while the underlying architecture never escapes the single-threaded execution model of traditional blockchains, whose pitfalls must be carefully worked around.
When I first tested Bitroot, I honestly did not have high expectations; I assumed it would be yet another EVM acceleration play built on a tweaked gas model or adjusted consensus parameters. The first pleasant surprise arrived during environment setup. Out of habit I went looking for a full-node codebase along the lines of Geth or Erigon, prepared to spend half a day compiling and syncing, only to find that Bitroot's node startup logic reads more like the initialization of a distributed task scheduler.
In the initialization function of node/service.go, I focused on several core service modules (SchedulerService, ParallelExecutor, and PipelineConsensus), all of which are loaded in parallel. Evidently Bitroot has actually implemented the new designs laid out in its white paper. The architecture itself reflects a way of thinking entirely different from traditional public chains: rather than obsessing over making one super monolithic node 'run faster,' it decomposes the blockchain's core tasks of verifying transactions and reaching consensus into a series of pipelined operations.
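In spirit, that parallel service startup looks something like the sketch below. The Service type and startAll helper are my own illustration of the pattern, not Bitroot's actual code; only the three module names come from the source.

```go
package main

import (
	"fmt"
	"sync"
)

// Service is a minimal stand-in for the modules seen in node/service.go.
// The module names are from the logs; this interface is my assumption.
type Service struct {
	Name  string
	Start func() error
}

// startAll launches every service concurrently and waits for all of them,
// mirroring the parallel-loading behavior observed at node startup.
func startAll(services []Service) []error {
	var wg sync.WaitGroup
	errs := make([]error, len(services))
	for i, s := range services {
		wg.Add(1)
		go func(i int, s Service) {
			defer wg.Done()
			errs[i] = s.Start()
		}(i, s)
	}
	wg.Wait()
	return errs
}

func main() {
	services := []Service{
		{"SchedulerService", func() error { fmt.Println("scheduler up"); return nil }},
		{"ParallelExecutor", func() error { fmt.Println("executor up"); return nil }},
		{"PipelineConsensus", func() error { fmt.Println("consensus up"); return nil }},
	}
	for i, err := range startAll(services) {
		if err != nil {
			fmt.Printf("%s failed: %v\n", services[i].Name, err)
		}
	}
	fmt.Println("node ready")
}
```

The point of the pattern is that no service blocks the others during boot, which is what makes the startup feel like a task scheduler rather than a monolithic node.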
To verify its performance, I wrote a script that continuously sent 100 transfer transactions to the test network. Note that on traditional public chains, bursts of transactions that create many new account states often congest the network and drive gas fees up sharply. On Bitroot, however, the node debug logs showed these transactions being rapidly sorted into separate processing batches. One log line stood out: '[Scheduler] Batch 73: 24 txns, 0 conflicts, dispatched to 3 executors.'
Behind this log is Bitroot's core scheduling logic: the scheduler determined that these 24 transactions had no state dependencies among them (after all, each transaction's sending and receiving addresses were distinct) and assigned them directly to three executor threads for parallel processing. It sounds simple, but pulling it off in an EVM-compatible environment is extremely hard: you must accurately predict which on-chain state each transaction will touch before executing it. This is precisely the thorniest problem in traditional EVM design, because transactions can dynamically call contracts, so their execution paths cannot be known before runtime.
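For plain transfers, the conflict check reduces to address-set overlap, and conflict-free transactions can be packed greedily into parallel batches. The Tx shape and batch helper below are my own hypothetical reconstruction of that idea, not the scheduler's real algorithm.

```go
package main

import "fmt"

// Tx models a simple transfer; From and To are the only state keys it touches.
type Tx struct {
	From, To string
}

// conflicts reports whether two transfers touch any common account state.
func conflicts(a, b Tx) bool {
	return a.From == b.From || a.From == b.To || a.To == b.From || a.To == b.To
}

// batch greedily packs pending transactions into conflict-free groups:
// every tx within a group can safely run on a separate executor thread.
func batch(pending []Tx) [][]Tx {
	var groups [][]Tx
	for _, tx := range pending {
		placed := false
		for gi, g := range groups {
			ok := true
			for _, other := range g {
				if conflicts(tx, other) {
					ok = false
					break
				}
			}
			if ok {
				groups[gi] = append(groups[gi], tx)
				placed = true
				break
			}
		}
		if !placed {
			groups = append(groups, []Tx{tx})
		}
	}
	return groups
}

func main() {
	// The third transfer reuses account A, so it cannot share a batch
	// with the first one.
	pending := []Tx{{"A", "B"}, {"C", "D"}, {"A", "E"}}
	fmt.Printf("%d groups\n", len(batch(pending))) // prints "2 groups"
}
```

The hard part the article describes is exactly what this toy elides: for arbitrary contract calls, the touched-state set also covers dynamic contract storage, which cannot be read off the transaction envelope like From/To can.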
After this round of preliminary testing, my core conclusion is this: Bitroot has not tried to alter the execution logic of the EVM 'black box' itself; instead, it wraps an intelligent scheduling layer around it. Before a transaction enters the EVM for execution, this scheduler quickly and cheaply predicts how each transaction relates to the others: unrelated transactions are dispatched straight to parallel threads, while related transactions are queued in order, balancing compatibility and parallel efficiency.
In addition, Bitroot's lightweight read-write set tracker is the data foundation of its optimistic parallelism. Interestingly, unlike chains such as Aptos that rely on the Move VM's static analysis, Bitroot implements an efficient runtime tracking mechanism while remaining fully compatible with the EVM's dynamic nature.
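A minimal sketch of such a runtime tracker, assuming a flat key-value state: state access is wrapped so that every read and write is recorded, and after speculative execution a transaction is re-run only if it read something an earlier transaction wrote. The TrackedState type and conflict rule are my own illustration of optimistic concurrency, not Bitroot's implementation.

```go
package main

import "fmt"

// TrackedState wraps a flat key-value state and records every key a
// transaction reads or writes during speculative execution.
type TrackedState struct {
	backing map[string]int
	Reads   map[string]bool
	Writes  map[string]bool
}

func NewTrackedState(backing map[string]int) *TrackedState {
	return &TrackedState{backing, map[string]bool{}, map[string]bool{}}
}

func (s *TrackedState) Get(key string) int {
	s.Reads[key] = true
	return s.backing[key]
}

func (s *TrackedState) Set(key string, v int) {
	s.Writes[key] = true
	s.backing[key] = v
}

// conflict reports whether tx b must be re-executed after tx a commits:
// in optimistic concurrency, b aborts if it read anything a wrote.
func conflict(a, b *TrackedState) bool {
	for k := range a.Writes {
		if b.Reads[k] {
			return true
		}
	}
	return false
}

func main() {
	state := map[string]int{"alice": 100, "bob": 0, "carol": 50}

	txA := NewTrackedState(state) // alice pays bob
	txA.Set("alice", txA.Get("alice")-10)
	txA.Set("bob", txA.Get("bob")+10)

	txB := NewTrackedState(state) // only touches carol
	_ = txB.Get("carol")

	fmt.Println("conflict:", conflict(txA, txB)) // prints "conflict: false"
}
```

Because the sets are collected at runtime rather than predicted statically, this style of tracking stays compatible with the EVM's dynamic call paths, at the cost of occasionally having to re-execute a mispredicted transaction.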
Another impressive aspect is its consensus mechanism, Pipeline BFT. Traditional BFT consensus is like single-lane traffic and is limited in throughput, whereas Bitroot's Pipeline BFT achieves 'four-lane parallelism.' The node logs show it clearly: while block N-1 is in the Commit state, block N is performing Precommit operations and block N+1 has already entered the Prevote stage. This pipelined design, which decouples and overlaps the consensus phases, fundamentally changes the block production rhythm of traditional public chains.
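The overlap can be illustrated with a toy three-stage pipeline in Go. The phase names match the logs above, but the channel-based structure is purely my own sketch of the idea, not Bitroot's consensus code.

```go
package main

import (
	"fmt"
	"sync"
)

// runStage drains blocks from in, "performs" one consensus phase, records
// the transition, and forwards the block downstream.
func runStage(name string, in <-chan int, out chan<- int, events *[]string, mu *sync.Mutex) {
	for block := range in {
		mu.Lock()
		*events = append(*events, fmt.Sprintf("%s(block %d)", name, block))
		mu.Unlock()
		if out != nil {
			out <- block
		}
	}
	if out != nil {
		close(out)
	}
}

// runPipeline pushes sequential blocks through the three phases. Because
// the stages run concurrently, block N+1 can enter Prevote while block N
// is still in Precommit and block N-1 is committing.
func runPipeline(blocks int) []string {
	prevote, precommit, commit := make(chan int), make(chan int), make(chan int)
	var events []string
	var mu sync.Mutex
	var wg sync.WaitGroup
	wg.Add(3)
	go func() { defer wg.Done(); runStage("Prevote", prevote, precommit, &events, &mu) }()
	go func() { defer wg.Done(); runStage("Precommit", precommit, commit, &events, &mu) }()
	go func() { defer wg.Done(); runStage("Commit", commit, nil, &events, &mu) }()
	for b := 1; b <= blocks; b++ {
		prevote <- b // new blocks enter while earlier ones are mid-pipeline
	}
	close(prevote)
	wg.Wait()
	return events
}

func main() {
	events := runPipeline(3)
	fmt.Println(len(events), "phase transitions") // 3 blocks x 3 phases = 9
}
```

In a sequential BFT design those nine phase transitions would run strictly one after another; in the pipelined version the per-block latency is unchanged, but a new block can be committed every stage interval rather than every full round.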
In the simulated test environment, Bitroot's block finality time stabilized at around 400 milliseconds, highly consistent with the performance metrics described in the white paper. A transaction thus goes from submission to permanent network-wide confirmation in well under a second, an experience nearly indistinguishable from a centralized platform.
I also paid special attention to its BLS signature aggregation: the block signature broadcast over the network is compressed to a constant 96 bytes regardless of the number of validators, and a node needs only a single pairing operation to complete verification. This design reduces the communication complexity of signatures from O(n²) to O(n), which is crucial for public chain scalability.
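The size argument can be demonstrated with a toy sketch. Real BLS aggregation adds elliptic-curve points (a compressed BLS12-381 G2 signature is indeed 96 bytes); the byte-wise XOR below is only a stand-in for point addition, used to show that the aggregate never grows with validator count.

```go
package main

import "fmt"

const sigLen = 96 // a compressed BLS12-381 G2 signature is 96 bytes

// aggregate folds any number of signatures into one constant-size value.
// Real BLS aggregation is elliptic-curve point addition; the XOR here is
// a toy stand-in with the same shape: n inputs, one fixed-size output.
func aggregate(sigs [][sigLen]byte) [sigLen]byte {
	var agg [sigLen]byte
	for _, s := range sigs {
		for i := range agg {
			agg[i] ^= s[i]
		}
	}
	return agg
}

func main() {
	for _, n := range []int{4, 64, 1024} { // hypothetical validator counts
		agg := aggregate(make([][sigLen]byte, n))
		fmt.Printf("%4d validators: naive %6d bytes, aggregate %d bytes\n",
			n, n*sigLen, len(agg))
	}
}
```

Broadcasting n individual signatures costs n x 96 bytes per block and grows with the validator set; the aggregate stays at 96 bytes, which is what turns per-block signature gossip from a quadratic burden into a linear one.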
Bitroot's preliminary test results are already quite impressive. I will keep deploying contracts to test more dimensions, and I look forward to the official launch of its four-part DeFi suite to see how many more surprises this project, which breaks with traditional public chain thinking, can deliver.
