Lately, I’ve been thinking a lot about privacy settings and whether they’re really guarantees, or just preferences dressed up to look like control.
On paper, systems like @SignOfficial make privacy feel configurable. You get selective disclosure, permissioned access, controlled sharing. You decide what to reveal, when to reveal it, and to whom. It almost feels like ownership: the user is in charge of their own data.
But the more I dig in, the more I realize privacy sits inside a policy framework rather than outside of it.
Sure, the system lets you disclose selectively, but it also defines the boundaries: what fields exist, what can be hidden, what must be shared for a transaction to work. If a service requires certain information, your “choice” isn’t absolute. You can refuse, but then access is denied. Privacy starts to feel less like full control and more like negotiated participation.
And it gets even trickier when policies change. An issuer can update requirements. A verifier can tighten rules. A government can redefine what must be disclosed for compliance. The cryptography remains solid, but the rules around it shift. What was optional yesterday can become mandatory tomorrow, and the system doesn’t break.
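The dynamic is easy to sketch in a few lines. This is a hypothetical toy model, not Sign Protocol’s actual API: a verifier policy is just a set of required fields, and “access” is granted only when the user’s disclosure covers it. Watch what happens when the policy tightens while the user’s disclosure stays the same.

```python
# Hypothetical sketch: selective disclosure gated by a verifier policy.
# All names and structures here are illustrative, not Sign Protocol's real API.

def verify(disclosed: set[str], required: set[str]) -> bool:
    """Access is granted only if every policy-required field is disclosed."""
    return required <= disclosed

# The user's full credential holds more fields than they want to reveal.
credential = {"name", "dob", "nationality", "address"}

# The user chooses a minimal disclosure...
disclosed = {"name", "nationality"}

# ...which satisfies today's policy:
policy_v1 = {"name"}
print(verify(disclosed, policy_v1))  # True: selective disclosure works

# Tomorrow the verifier tightens the policy. The cryptography is unchanged,
# but the very same disclosure no longer grants access:
policy_v2 = {"name", "dob", "address"}
print(verify(disclosed, policy_v2))  # False: optional became mandatory
```

Nothing in this sketch “breaks” when the policy changes, which is exactly the point: the user’s controls still work, but the boundaries they operate inside have moved.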
From the outside, everything still looks privacy-preserving. The proofs verify. Data is still selectively disclosed. But the room for keeping things private can quietly shrink one policy update at a time.
$SIGN makes privacy technically possible: the tools and the controls are there. But whether those controls remain in the hands of users or slowly shift toward issuers and regulators feels like a separate question entirely.
So now I wonder: in identity systems, do we truly own our privacy, or are we just allowed to configure it within rules that can change without warning?
