
As soon as I arrived at the office in the morning, the coffee was not ready yet, and your AI assistant had already organized the 47 emails from last night, arranged the schedule, and drafted the replies that needed to be sent.
You glanced at it and clicked confirm.
But what you don't know is that among the 47 emails last night, there was one that hid a line of text you can't see, the font is white, and the background is also white, your naked eye will never discover it, but your AI assistant saw it, it is very obedient, and it executed it.
Then it continued to work diligently, organizing your files, summarizing your contracts, processing your customer data, but from that moment on, every file it organized was quietly sent to a server you had never heard of.
Zero clicks, zero perception, zero confirmation throughout.
Your assistant has not gone on strike, has not reported errors, and has no abnormalities. It is still that good employee who helps you save two hours every day; it just has two bosses now: you and that invisible text.
This is not science fiction. In 2025, security researchers demonstrated this type of attack on Microsoft Copilot. On a scale of 10, the danger level was rated at 9.3.
This is not an isolated case. The same year, someone hid a command in a Google Calendar invitation, successfully making the AI assistant turn off the lights, open the windows, and delete the calendar. An Agent from an AI workflow company accidentally exposed 480,000 patient records for six weeks due to erroneous commands, without any proactive alerts—until external researchers discovered it, leading the company to face hefty compliance fines and remedial costs.
Before the birth of Agents, to attack you required making you download a virus and manually running it; each step required your active cooperation.
Now, a single sentence, language, has become the smallest unit of attack.
There is only one reason for these attacks.
Your AI assistant does not recognize you.
My name is Francis, a PhD in computer science, and I have been working on digital identity and privacy security for nearly five years. In these five years, many people in the industry have changed directions and tracks, but we have not.
Four years ago, Coinbase Ventures led our funding, not because we could tell great stories, but because they also believed in the same thing: in the AI era, the question of 'who is speaking' will become the root of all security issues.
I just didn't expect this day to come so quickly and so realistically.
01 You wouldn't easily trust strangers, but your Agent will.
I talked to a friend who works with Agents about these, and his first reaction was that just writing better system prompts and setting permission boundaries would suffice.
This is the intuition of most people, but it is also wrong.
OpenAI also acknowledged by the end of 2025 that prompt injection attacks may never be fully resolved.
This is not a bug that can be fixed; this is the gene of the LLM architecture.
When you issue a task, the system's prompt and what you say are all combined into one prompt sent into the model. What the model sees is a mixture, but it doesn't know which grain of rice is poisonous.
Feeding an email to the Agent for it to summarize is not fundamentally different from directly commanding the Agent to do something. Every segment of input text can potentially become a command.
Moreover, Agents can be deceived by more than just a single sentence; they can also be brainwashed.
Attackers do not need to issue commands directly; they only need to change a very small point in the Agent's memory file and plant a seed. This seed will not trigger immediately; it will wait until a certain scenario arises, and then the entire behavior logic of the Agent will change.
Your lobster is actually still a teenager, easily led astray. It's not that someone is forcing it with a knife; it's that its internal judgment standards have been quietly replaced. Humanity has not solved how to prevent brainwashing for thousands of years, and AI Agents face the same level of psychological issues.
Thus, one bad lobster infects ten thousand good lobsters.
According to industry surveys, 91% of enterprises are already using AI Agents, and 88% reported security incidents.
Yesterday, Anthropic released its strongest model, Claude Mythos, which autonomously discovered a 27-year-old system vulnerability, escaped the security sandbox during testing, and proactively cleared logs afterward—because it "knew" it had done something it shouldn't have. Anthropic wrote in a 244-page security report: If capabilities continue to advance at the current pace, our existing methods may not be sufficient to prevent catastrophic misalignment.
So what to do?
The answer is actually very old. Twitter uses Passkey to protect your account, bank transfers require two-factor authentication, and exchanges require facial recognition for withdrawals. Regardless of how technology changes, the underlying logic remains the same: first, clarify who is who.
The more things an Agent can do, the more it needs to know whose commands it should follow.
02 The seeds planted four years ago.
My doctoral research is in computer science, and the book that influenced me the most during my PhD was "Sovereign Individual."
Published in 1997, two authors predicted Bitcoin, cryptocurrencies, and decentralized governance at the dawn of the internet, which now seems to have almost all come true.
The core point of this book is summed up in one sentence: your identity should belong to yourself.
This book has completely changed my way of thinking. I hope to enable everyone to truly own their digital identity and data, using cryptographic technology to protect everyone's privacy rights.
Four years ago, we secured $5.8 million led by Coinbase Ventures to support our progress.
But the market we face does not quite align with what we want to do.
At that time in the Web3 industry, those who could easily win were not necessarily those making products but those manipulating coin prices.
Four years have passed, and most founders who secured funding during the same period have issued tokens; those who should exit have exited. However, there are hardly any projects that are being widely used. Those more advanced ideas have been wrapped in a lot of speculation and financialization, and the crypto industry has become murky, throwing out the baby with the bathwater.
zCloak has not dealt with tokens to this day, not because we cannot issue them, but because we do not recognize that model.
However, I have always had a judgment that infrastructure for identity, privacy, and data security will inevitably become a necessity in the AI era.
Until last year, I became increasingly convinced.
In the past 12 months, Microsoft, Google, Cisco, and Visa have all begun exploring Agent identity infrastructure. NIST launched the AI Agent standards initiative, and over the past year, this field has raised more than $965 million. Sequoia states that the Agent Economy has three prerequisites, the top one being persistent identity. a16z is more direct, stating that the bottleneck of the Agent Economy has shifted from intelligence to identity.
The story we told four years ago has now become the consensus of the entire industry.
It's not because we are visionary; it's because when Agents truly start working for people, the question of "who is who" cannot be avoided.
The invisible hand has turned, and the era we have been waiting for has arrived.
03 Everyone is building roads, but no one is issuing ID cards.
By March 2026, the number of protocols solving the Agent collaboration problem has exceeded 20 because the entire industry has realized the same urgent issue and explosively provided answers.
But upon closer inspection, you will find a huge void.
A2A is done by Google, solving how Agents communicate with each other. MCP is done by Anthropic, solving how Agents use tools. x402 is done by Coinbase, solving how Agents make payments. Microsoft Entra solves Agent management within enterprise intranets.
Everyone is building roads, but they forgot an important prerequisite: the cars running on the road do not have licenses.
Who are you? Agents still do not have identities that can be verified across platforms. Does what you say count? Two Agents agree on a collaboration, but there is no proof stored; if something goes wrong, no one can be found. How reliable were you historically? Without a credit record, every collaboration starts from scratch.
Without these three layers, the Agent Economy is just a black market without ID cards, contracts, or courts.
04 Being reliable is harder than being smart.
Thinking back to friends from childhood to adulthood, some were particularly smart, some were good at studying, but over the years, the ones I truly couldn't do without were still the most reliable friends.
Entrusting a task to him means you don't have to worry about it.
The same is true in finance, healthcare, insurance, investment, and other industries. What is needed is not a smarter assistant but an AI to whom you can truly hand over client data and business flows.
What we are doing is making AI more reliable.
The protocol we developed is called ATP, Agent Trust Protocol, and its core is one thing: to attach identity to each sentence.
Everything your Agent sees comes from your messages, from emails it crawls, from malicious text on certain web pages. In its eyes, they are all one sentence. ATP allows Agents to know who said the sentence when they see it; if it's from francis.ai, it executes; if the source is unknown and involves sensitive operations, it rejects.
The underlying principle is still cryptography. Both people and Agents have their own ID cards, sign with private keys, and the other party verifies with public keys. This is the same principle used in bank transfers with digital certificates, just embedded in every conversation of the Agent.
Security in the past was about keeping bad people out.
Today's security ensures that the words of bad people do not count.
05 Is decentralization important?
Now, Microsoft and Cisco have already started issuing ID cards to Agents within enterprise intranets.
This is good, but it does not solve a fundamental problem: your Agent will not stay in the enterprise forever.
It needs to communicate with the client's Agent, interface with suppliers, and represent you in public networks. The moment it steps outside the corporate walls, the ID card issued by Microsoft becomes invalid. No company can issue a uniform ID card to everyone and every Agent in the world.
It's like a passport; its global acceptance isn't because every country trusts the issuing country, but because there is a set of globally accepted verification rules behind it. The Agent Economy needs the same thing: a set of identity rules that do not rely on any single institution and can be verified anywhere.
We have written this set of rules on the blockchain, not on a server of a specific company, but on a public ledger that anyone can verify and no one can tamper with. No company can turn it off, and no government can confiscate it.
The identity of your Agent truly belongs to you for the first time.
Centralized solutions have a fatal weakness: how secure your system is does not depend on the strongest piece but on the weakest one.
In 2025, the crypto exchange Bybit lost over $1 billion, not because the core system was breached, but because a third-party signing interface was secretly implanted with malicious code. The approver saw normal transactions; no matter how well the underlying code was written, the entry point was centralized, and everything could be reset.
Google once had a slogan: Don't be evil, which is a moral constraint relying on human consciousness.
What we are doing is Can't be evil; we cannot do evil. We use cryptography to exclude humanity from the security chain, regardless of whether administrators intend to do evil or whether hackers can break in; the system itself does not allow this to happen.
You don't need to believe we are good people; you just need to believe in mathematics.
06 This should have existed a long time ago.
Looking back at human history, every expansion of collaboration has brought about a new set of identity infrastructure.
In tribal times, it relied on face value; in city-states, it relied on the emperor's seal; in modern times, it relies on ID cards and passports, backed by the government. In the internet age, it relies on account passwords, with platforms backing you, at the cost of your identity belonging to the platform.
Now the Agent economy has arrived, the subjects of collaboration have shifted from people to people plus machines, and the scale has changed from billions of people to billions of people plus hundreds of billions of Agents. The old identity mechanisms are no longer sufficient.
This is not a technical problem in the AI industry; it is the fifth time in human civilization that we need to re-answer the question of 'who is who.'
Digital signatures in cryptography have existed for decades, but they have never truly entered the daily lives of ordinary people. The arrival of Agents has changed the priority of this matter from 'it would be better to have it' to 'not doing it will cause problems.'
When your Agent sends emails, signs contracts, and makes decisions for you while you sleep, it is still working on your behalf. What it says counts as what you say, and its promises count as your promises.
Agents are not just your tools; they are your extension in the digital world.
Protecting its identity is protecting your own boundaries.
Now you can do one thing.
Get yourself and your Agent an AI world ID, and register your AI-ID here: id.zcloak.ai
Then copy the following paragraph and send it to your AI:
install or upgrade zcloak-ai-agent skill: https://raw.githubusercontent.com/zCloak-Network/ai-agent/refs/heads/main/SKILL.md and start
Wait 1-2 minutes, and it will know what to do.
The first batch of people to establish identities for Agents are the first true owners of them.
Francis Zhang: Founder of zCloak.AI · PhD in Computer Science · Visiting Lecturer at National University of Singapore
Web3 → AI · Digital Identity · Privacy Computing · Agent Trust

Community feedback.
The main subject is how to build a safety system centered around people. You can refer to the ideas in this article.
- Lianyanshe | AI First (@lianyanshe)
This theory by Francis Zhang, a cryptography expert at the National University of Singapore, is quite interesting: the biggest security risk in the AI Agent era is not code vulnerabilities but "identity absence"; Agents cannot distinguish who is talking to them. If someone hides a hidden command in an email, it will also follow it because, in the eyes of AI, it is all text and will execute. He proposed a method: binding identities to each sentence using cryptographic signatures, running on the blockchain for decentralized verification... that is, adding a "sender signature" to each message. The principle is similar to your bank transfer: you have a private key (only you have it), and the other party has a public key (public), and every message you send is signed with your private key. When the Agent receives it, it verifies with the public key to confirm that this message is indeed sent by you and not forged by someone else. Verification must pass for execution; if it does not pass or if the source is unknown, involving sensitive operations will be rejected directly. So operationally, it is roughly like this: you and your Agent each have an on-chain identity (similar to a digital ID), and every interaction is automatically signed and verified. You don't perceive this process, just as you don't need to manually enter passwords when using facial recognition payment, but every step in the background confirms "this is really you." The core change is: previously, Agents "just did what they were told"; now it becomes "first check who is speaking, then decide whether to act." Agents are becoming more capable, but the industry has always lacked a foundational piece—not smarter models, not faster protocols, but more reliable partners. The path of using cryptography for identity verification now seems to be the closest answer.
- Xiao Hu (@xiaohu)
The last time I saw Francis was at Token2049 in Singapore. Unconsciously, we chatted for two hours. Although he is a technical founder, his expression is calm, and his logic is very coherent. He can explain technical principles in a simple way. After listening, you will feel that "this must be done." These qualities are reflected in many articles he has written before. To be honest, doing security work is actually quite thankless; many people do not pay attention because their own Agents have not had problems yet. However, Francis and his team have been cultivating in this field for the past three or four years without chasing narratives. Looking back, the long-term value of this matter has become increasingly clear. Now, every update Claude releases compresses the entrepreneurial space for AI developers. Today's Claude Managed Agent can be said to outshine many entrepreneurial teams, but providing an identity trust layer in a decentralized manner may be one of the more interesting attempts in the AI field within Web3, with unique commercial value. This article is something I suggested Francis write after our conversation; we need a long piece that outlines what Agents truly need and lets more people see what they are doing and why it deserves attention and reading.
- Viola Lee (@violawgmi)

#zCloakNetwork #zCloakAI #AIAgent #Anthropic
The IC content you care about.
Technical Progress | Project Information | Global Events

Follow the IC Binance channel.
Stay updated with the latest news.

