The first time I heard “verifiable compute,” I felt relief.
Finally. Proof instead of promises. A system that can show its work.
Then the obvious question showed up and ruined the mood.
Verifiable… according to what hardware?
Because if Fabric Foundation leans on VPUs and attestation, the trust story doesn’t start on-chain. It starts in a factory. It starts in a shipping box. It starts with a person who had access for ten minutes and a screwdriver.
This is the part people skip because it sounds like boring ops.
That’s cute. Ops is where security lives.
Attestation is only as strong as the root you’re attesting from. If the device is compromised, the proofs don’t become “less trustworthy.” They become confidently wrong. The worst kind of wrong. The kind that looks clean in logs and passes checks while something rotten sits underneath.
So the real question isn’t “can Fabric verify computation.”
The real question is: can Fabric trust the origin of the verifier?
Who manufactured the unit? Where was it assembled? Who had access to the attestation keys? Were the keys generated securely? Were they ever copied? Was the firmware flashed in a controlled environment or in someone’s garage? Did the unit ship directly, or did it sit in a warehouse where “nobody touched it”?
And on the way to the customer, who handled it?
Because hardware moves through hands. Distributors. Integrators. Contractors. Repair shops. Customs. Warehouses. Any one of those is an opportunity. Not even for a genius attacker. Sometimes just for an opportunist with time.
A tampered device doesn’t need to fail loudly. It just needs to lie reliably.
That’s why “who attests the attester?” isn’t a philosophical question. It’s the entire trust model.
If Fabric wants VPUs to be meaningful infrastructure, it needs a provenance story. A chain of custody story. A tamper evidence story. And a recovery story for when things go wrong, because they will.
Provenance means you can trace where a unit came from and what it claims to be. Not in marketing copy. In verifiable artifacts. Manufacturing records. Signed certifications. Hardware identity that can’t be casually reissued.
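What does a verifiable artifact actually look like? A minimal sketch, stdlib only, with an HMAC standing in for the manufacturer's real asymmetric signature; the record fields and `MANUFACTURER_KEY` are invented for illustration, not Fabric's format:

```python
import hashlib
import hmac
import json

# Hypothetical manufacturer signing key. In practice this would be an
# asymmetric key pair, with the public half published by the manufacturer.
MANUFACTURER_KEY = b"factory-signing-key-demo"

def sign_record(record: dict) -> str:
    """Sign a manufacturing record (HMAC stands in for a real signature)."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(MANUFACTURER_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    """Check a record against the manufacturer's claimed signature."""
    return hmac.compare_digest(sign_record(record), signature)

record = {
    "device_id": "vpu-0042",   # hardware identity, assumed unique
    "assembled_at": "fab-line-3",
    "firmware_digest": hashlib.sha256(b"firmware-v1").hexdigest(),
}
sig = sign_record(record)

# A record altered anywhere between factory and customer fails verification.
tampered = dict(record, firmware_digest="deadbeef")
```

The point isn't the crypto primitive. It's that "what this unit claims to be" is a checkable object, not a sticker.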
Chain of custody means you know who touched it and when. Again, not because people are honest. Because people are human. Incentives exist. Mistakes exist. So the system needs a way to make “we don’t know what happened” rare and expensive.
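"Who touched it and when" can be made tamper-evident with a hash-linked log: each handoff commits to the one before it, so a missing or edited hop is detectable. A toy sketch, with the entry fields invented for illustration:

```python
import hashlib
import json

def handoff(chain: list, holder: str, event: str) -> list:
    """Append a custody entry linked to the previous one by hash."""
    prev = chain[-1]["digest"] if chain else "genesis"
    entry = {"holder": holder, "event": event, "prev": prev}
    entry["digest"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)
    return chain

def custody_intact(chain: list) -> bool:
    """True only if every link matches; a removed or edited hop breaks it."""
    prev = "genesis"
    for entry in chain:
        body = {k: entry[k] for k in ("holder", "event", "prev")}
        if entry["prev"] != prev:
            return False
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["digest"] != expected:
            return False
        prev = entry["digest"]
    return True

chain = []
handoff(chain, "factory", "assembled")
handoff(chain, "distributor", "received")
handoff(chain, "customer", "delivered")

# Drop the distributor hop and the chain no longer verifies:
chain_with_gap = [chain[0], chain[2]]
```

This is exactly the "rare and expensive" property: a gap in the record is loud, not silent.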
Tamper evidence means there’s a difference between a clean unit and a questionable one. Secure elements. Seals. Measurement logs. Remote attestation that can detect unexpected state. The goal isn’t making tampering impossible. The goal is making tampering detectable enough that the network can quarantine the device before it becomes a trusted liar.
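The quarantine decision can be mechanical. A sketch of a golden-measurement check, loosely modeled on how measured boot compares boot-stage digests against expected values; the stage names and digests here are placeholders:

```python
import hashlib

# Expected measurements for this hardware revision (placeholder values).
GOLDEN = {
    "bootloader": hashlib.sha256(b"bootloader-v2").hexdigest(),
    "firmware": hashlib.sha256(b"firmware-v7").hexdigest(),
}

def evaluate(reported: dict) -> str:
    """Quarantine on any unexpected state: wrong digest, missing stage,
    or an extra stage the golden profile doesn't know about."""
    if reported.keys() != GOLDEN.keys():
        return "quarantine"
    for stage, digest in GOLDEN.items():
        if reported[stage] != digest:
            return "quarantine"
    return "trusted"

clean = dict(GOLDEN)
modified = dict(GOLDEN, firmware=hashlib.sha256(b"firmware-evil").hexdigest())
```

Note the default: anything unexpected means quarantine. The device has to prove clean state; the network never has to prove compromise.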
And then there’s certification.
Who gets to certify that an attestation key is legitimate? A single vendor doing approvals behind closed doors is a fast path to adoption and a slow path to decentralization. But a fully permissionless attestation registry is an open invitation to counterfeiters and clone factories.
So you end up back in the uncomfortable middle.
Some kind of registry. Some kind of onboarding. Some kind of trust anchor. And the politics of who controls it.
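One shape the uncomfortable middle can take: no single certifier decides, but it isn't permissionless either. Onboarding requires a quorum of recognized certifiers vouching for a unit's key. A sketch, with the certifier set and threshold invented for illustration:

```python
# Hypothetical registry rule: a unit is onboarded only if at least
# QUORUM of the recognized certifiers have endorsed its attestation key.
CERTIFIERS = {"vendor-lab", "independent-auditor", "foundation"}
QUORUM = 2

def onboard(device_id: str, endorsements: list) -> bool:
    """Endorsements from unrecognized parties simply don't count."""
    recognized = {c for c in endorsements if c in CERTIFIERS}
    return len(recognized) >= QUORUM
```

The politics don't go away; they move into who edits `CERTIFIERS` and how. But at least no single party can unilaterally admit a clone factory.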
This is where a lot of “verifiable” projects quietly die. Not because the cryptography is wrong. Because the physical world is uncontrollable. Hardware has inertia. Supply chains have entropy. And the easiest way to subvert a proof system is to compromise the thing producing the proofs.
If Fabric Foundation solves this, it won’t be glamorous. It’ll look like boring standards. Audits. Manufacturing partners. Revocation lists. Key rotation procedures. Quarantine rules. A way to say “this unit is trusted,” and also a way to say “this unit is no longer trusted,” without collapsing the whole network.
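The "no longer trusted" part is worth pinning down, because it's where networks usually break. A toy trust lifecycle, with states and transitions invented for illustration: quarantine is reversible if the unit checks out, revocation is terminal by design, and neither requires halting anything else:

```python
# Allowed state transitions for a unit's trust status.
TRANSITIONS = {
    "pending": {"trusted"},
    "trusted": {"quarantined"},
    "quarantined": {"trusted", "revoked"},
    "revoked": set(),  # terminal: a revoked key never comes back
}

class Unit:
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.state = "pending"

    def move(self, new_state: str) -> None:
        """Reject any transition the lifecycle doesn't allow."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.state} -> {new_state} not allowed")
        self.state = new_state

unit = Unit("vpu-0042")
unit.move("trusted")
unit.move("quarantined")  # something looked off; isolate, don't panic
unit.move("revoked")      # confirmed bad; this identity is done
```

Boring, like the paragraph says. That's the point: ejecting one liar is a routine state change, not a crisis.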
That’s real infrastructure.
Because in the end, verifiable compute isn’t just a software problem.
It’s a logistics problem wearing a cryptography hat.
