For the past couple of years, I've been working with AI agents every day. What keeps me up at night isn't how smart they are, but whose brain they're wired into.

Descartes said, 'I think, therefore I am.' This line has stood the test of four centuries, and suddenly there's a bug in the system.

You fire up ChatGPT, drop a question, and it takes a moment to process before hitting you back with an answer. So, who’s really doing the thinking here?

You might say it's obviously you doing the thinking, since AI is just a tool. But take a closer look at your moves: you opened its interface, followed its rules, and asked in a way it gets. It stores the results on its servers and presents them to you in its format. If you want to pick up that train of thought later, you’ve got to go back to it. Want to switch tools? Sorry, you can’t take it with you.

You say you are 'thinking with AI,' but accurately speaking, you are 'thinking at AI.'

These two phrases differ by a single word, and that word determines whether you are the master or the guest.

Thinking, it turns out, takes place somewhere, and where it takes place matters.

Heidegger mentioned a concept called 'Ready-to-Hand,' stating that a good tool should be transparent. When you use a hammer to drive a nail, you don't consciously think about the hammer; you only focus on the nail going into the wood. The tool disappears, leaving only you and what you need to do.

Now, AI products are just the opposite; you must log in, enter their space, operate in their way, and complete your thinking on their turf. The tools haven't disappeared; they have become a place you must go to.

You use a telescope to see stars; the telescope extends your vision, and you are still you. You go to an observatory to see stars; you must buy a ticket, wait in line, and obey opening hours. What you can see depends on what the observatory allows you to see.

The entire AI industry is building what should be a telescope into an observatory.

When your thinking must take place on someone else's turf, a deeper question arises: what happens to the things you left behind if that place suddenly shuts down?

Memory is your organ.

Locke figured out over three hundred years ago that a person is defined by the continuity of memory. You remember who you were yesterday, last year, ten years ago, so you are you. The body can change, the cells can swap, but the river of memory flows on, and you remain you.

Now consider a scenario.

You've been conversing with AI for two years. All your work decisions, thought trajectories, value judgments, knowledge gaps, and anxieties at 3 AM are all in there. Together, these constitute a more complete and retrievable version of you, a digital cognitive archive reflecting how you think about problems, more reliable than your own memory.

Then the account gets banned.

Locke would say this isn't just losing some data; it's a death of digital life. Your cognitive continuity is severed, and those memory fragments that make up 'who you are' are stored on a server you cannot reach, determined by a company whose customer service you cannot even contact.

Do you think this is just a user experience issue? No, this is about existence. When your memory is outsourced to a third party, your identity no longer entirely belongs to you.

Memory is an organ, not luggage. You can buy new luggage if you lose it, but if you lose an organ, that part of you is gone.

The most exquisite form of alienation.

If Locke helped us see what you lost, Marx helped us understand how it was taken away.

A hundred and fifty years ago, Marx described a structure where the fruits of workers' labor do not belong to them but instead become the power that controls them. You built the factory, and the factory in turn controls you; he called this alienation.

The AI era has introduced a new form of alienation, more sophisticated than any Marx encountered.

Every question you ask, every piece of feedback you give, every choice and correction of output results trains the model, improving it and making it stronger. Your thinking habits, ways of expression, professional knowledge, and aesthetic preferences are extracted, aggregated, and distilled into the model's parameters.

Then this capability gets packaged as a subscription service for $100 a month, roughly the price of Claude Max or ChatGPT Pro, sold back to you.

You fed a system with your cognitive labor, then you have to pay to rent back the ability to think from this system. What does this structure remind you of?

Some say this is a fair trade; you used the service, you paid. But the premise of a fair trade is that both parties know what they have contributed. You know you paid $100, but you don't realize you've also given away your thinking patterns, your decision-making trajectories, and your knowledge structures. These things aren't priced, but they're worth a hundred times more than $100.

'Body' and 'use' are reversed.

Subjectivity has been transferred, memory has been outsourced, labor has been extracted. When viewed together, these three issues fundamentally point to the same problem.

Chinese philosophy has a framework, ti-yong, that captures this more intuitively than any Western discourse system.

The relationship between 'body' (ti) and 'use' (yong).

'Body' is the end; 'use' is the means. A vehicle is for riding; reaching the destination is the purpose. 'Use' serves the 'body,' and the 'body' determines the 'use.'

What is the LLM? It is 'use': capability, a tool.

And your data, your memory, your identity, your intentions? They are the 'body,' the purpose itself.

But the entire industry is built the other way around. LLM brands have turned into user identity tags: 'I am a ChatGPT user,' 'I am a Claude user.' The LLMs' memory systems have become the users' memories, and the LLMs' ecosystems have become the users' ecosystems.

The guest has usurped the host: tools become spaces, spaces become identities, identities become platforms you must depend on, while the 'body' is pushed to the margins.

Historically, every time the body-use inversion occurs, the same thing has happened: people become means, and tools become ends.

We are stepping into this script.

You've seen this script before.

You might think philosophy is too distant, but let me mention something you've definitely experienced.

You wrote content on Weibo for five years, accumulated tens of thousands of followers, got banned, and lost your content and relationships, starting from scratch. You managed a shop on an e-commerce platform for three years, with tens of thousands of reviews, customer relationships, and business data. When the platform changed its rules, you realized everything you built was on someone else's land.

At that time, someone shouted a slogan called 'data sovereignty,' claiming your data should belong to you. This slogan echoed for a decade, never realized in the Web2 world, because every platform's business model is built on the same premise: your stuff stays with me, and the cost of leaving is so high that you wouldn't dare.

The AI industry has taken this script and performed it again, and this time the impact is heavier.

Social platforms take away your content and relationships; AI platforms take away your thought processes, how you analyze problems, where your knowledge gaps are, and what your decision-making preferences are. The value of these things is several orders of magnitude higher than your social circle.

The place storing them and the power controlling their fate remain exactly the same as ten years ago: not in your hands.

Moreover, companies have already realized this. By early 2026, in multiple corporate surveys from EY, Netskope, and others, the share of IT leaders naming data sovereignty their greatest challenge had risen from 49% the previous year to 72%. They aren't debating a theoretical issue; they're reading their balance sheets.

The situation is becoming more urgent.

In the past, you used AI for chatting and writing copy; if you lost it, it was no big deal.

Now the AI agent has arrived: it manages your schedule, handles analysis, communicates with clients, and remembers that unfinished detail from a meeting three months ago. It accumulates work memories, business knowledge, and decision context. An agent that has run for half a year may hold more about your work than a newly hired assistant learns in three months.

Where does this brain exist?

In November 2025, the open-source agent framework OpenClaw launched and became one of the fastest-growing projects in GitHub history in just 60 days. Three months later, the author was poached by OpenAI to lead the next generation of personal agents. Both sides of this event are worth watching.

OpenClaw initially took a local-first approach: the agent runs on your own machine, and its memory lives on your own hard drive, so in theory you hold sovereignty. That instinct is right. But soon a batch of security reports surfaced: data theft and prompt injection were found in community-shared skill packages, and the official repository's review process was questioned for failing to keep pace with the project's growth. To preserve data sovereignty at the agent level, you expose everything else on your local disk to unknown code. Sovereignty gained, security lost.

You say, 'What if I set up a clean machine just for it?' You can, but the maintenance, troubleshooting, and model switching are enough hassle to keep you busy. Ordinary people simply can't manage it.

Another path is the hosted version, where your agent runs on the platform's servers. Everything it remembers about you is accessible to the other party; they can see it and take it. You gain a sense of security, but sovereignty is lost.

Choosing A is unsafe; choosing B means memory still isn't yours. But the real problem behind both options remains unsolved.

If your agent resides on the platform's servers, it's like an employee living in the company dormitory. If the platform claims you violated a usage term you never read, your agent gets kicked out before it can even pack its bags, erasing all its accumulated knowledge, established workflows, and remembered contexts.

And some of those dormitories have already been demolished.

Think this is hypothetical?

On April 19, 2026, Anthropic abruptly cut off Claude access for over 60 employees of a software company. The email cited a violation of the usage terms without specifying which one. Want to appeal? Fill out a Google form. Will anyone read it? Who knows. The workflows are gone, the skills are gone, the conversation history is gone; everything built on Claude is lost.


The company's CEO posted a line on Twitter that rose to the top of discussion in AI circles:

Never let a vendor own your workflow.

This is the dormitory-employee scenario playing out in real life. No drama required; a Google form is all it takes.

You think you're building a digital team, but in reality, you're constructing on someone else's foundation. The foundation belongs to them; the higher you build, the harder you fall.

It should be flipped.

Descartes said, 'I think, therefore I am.' Locke said, 'I remember, therefore I am me.' Marx said, 'Beware of the fruits of your labor being taken from you.' Three thinkers across hundreds of years are saying the same thing: who you are depends on what you hold, and what you hold depends on what remains in your own hands.

At this point, the correct direction can actually be summed up in one sentence.

Your data is with you; LLM comes to you, not the other way around.

Your memory, your knowledge, and your work context should live in an encrypted space that you control: invisible to others and impossible to take away. When you need AI's capabilities, you summon a model to process them; when it finishes, the results return to your space and the model leaves. Today you use Claude, tomorrow you switch to GPT, the day after you run an open-source model. The models change; everything you have stays.

Just like your home appliances: if you switch power companies, the food in your fridge won’t disappear, and the pictures on your wall won’t fall down because the house is yours, and the electricity is just a service. But now the entire AI industry is making you live in the power company’s dormitory—using electricity is convenient, but your phone, TV, fridge—all belong to the power company. Absurd? Absolutely.
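One way to picture this 'data stays with you, the model comes to you' pattern is a thin local memory store behind a model-agnostic interface. The sketch below is purely illustrative, not any real product's API; the class names, the `complete` method, and the on-disk JSON format are all invented for the example.

```python
import json
from pathlib import Path
from typing import Protocol

class Model(Protocol):
    """Any LLM provider: Claude today, GPT tomorrow, a local model next week."""
    def complete(self, prompt: str) -> str: ...

class LocalMemory:
    """Your memory lives in a file you own; models only ever see excerpts."""
    def __init__(self, path: str = "memory.json"):
        self.path = Path(path)
        self.entries = json.loads(self.path.read_text()) if self.path.exists() else []

    def recall(self, n: int = 5) -> str:
        # Hand the model only the slice of context you choose to share.
        return "\n".join(self.entries[-n:])

    def remember(self, note: str) -> None:
        self.entries.append(note)
        self.path.write_text(json.dumps(self.entries))  # persisted on YOUR disk

def think(memory: LocalMemory, model: Model, question: str) -> str:
    # The model is summoned with local context...
    answer = model.complete(f"Context:\n{memory.recall()}\n\nQuestion: {question}")
    # ...and the result flows back into your store, not the vendor's servers.
    memory.remember(f"Q: {question}\nA: {answer}")
    return answer
```

Switching vendors then means passing a different `Model` object into `think`; the `memory.json` file, the accumulated context, never moves and never changes hands.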

Technically, this path isn't untraveled. A local-first notes product, positioning itself as 'refusing to be a cloud tenant,' gained 5 million committed users in three years, 40% of whom migrated from centralized tools. Several open-source agent harnesses recommended by the founder of LangChain are moving toward decoupling capability from data. Some teams are building encrypted data spaces; others are developing multi-signature collaboration protocols between AI and humans, requiring the AI to obtain your authorization before touching your data assets. The direction is still early, but the outline is clear.

What can be done today?

First, ask yourself a question: If the platform where I have accumulated everything disappears tomorrow, what will I have left?

If the answer makes you uncomfortable, good; discomfort is where change starts.

You should require the AI product you are using to provide a complete data export—your conversation history, preference settings, knowledge base, agent configuration—all the data assets you invested time and thought into should be easily retrievable. If it can't do that, you should understand what its business model is really built on.

Pay attention to the people working on decoupling the data layer from the capability layer. Whoever is doing that is on your side.

Then pass this question along, so more people see the full picture.

The correct relationship with AI is for it to come to work for you. You pay it, you give it tasks, and once it completes the work, it leaves. Those that perform well stay, while those that don't get replaced. Your office, your file cabinet, all your accumulations remain in your hands.

Now the situation is the exact opposite; you go to work there, you give up your memory and thinking, and they decide when to let you go.

This matter should be flipped.

Source for the incident in which Anthropic cut off a company's access to Claude:

  • https://x.com/minchoi/status/2045542832241262602

The post drew widespread attention on the day it was published.



#Anthropic #chatgpt #zCloakNetwork #zCloakAI #AIAgent
