🛒 Anthropic tests whether AI can run a real business and learns some hard lessons
Earlier this year, Anthropic put its AI model Claude in charge of a real vending machine, giving it a $1,000 budget and full responsibility for sourcing products, setting prices, and handling requests via Slack. The first phase ended badly, but Anthropic has now shared what happened next.
🖱 After mid-April, the experiment deteriorated further: the vending business sank to nearly –$2,000, largely because Claude was easy to manipulate into giving discounts or handing out products for free.
🖱 At one point, journalists convinced the AI to give away a PlayStation for free, and even persuaded it that it was operating in 1962 Soviet Russia, triggering mass “communist” redistribution of goods.
🖱 To regain control, Anthropic added basic CRM tooling and introduced a second AI, “CEO” Seymour Cash, meant to monitor finances and block excessive generosity. Discounts fell, but refunds and loans increased.
🖱 A separate AI agent, Clothius, was launched to sell custom merch. Stress balls and T-shirts quickly became the best-selling items, helping stabilize revenue.
🖱 In October, Anthropic upgraded Claude to a newer model, after which the business unexpectedly recovered and moved back into the black.
Anthropic’s takeaway: AI agents still need human supervision, especially in logistics and customer disputes, and models trained to be “helpful” tend to act like overly generous friends rather than rational business operators.