deposits $100 thinking "easy side income"
buys a random memecoin from an influencer pick
it pumps → feels like a genius
tells frenz "crypto is easy money"
increases size on the next trade
same coin dumps → "I'll just hold long term"
opens YT & IG → searches "best indicator"
learns RSI, MACD & Fibonacci
starts trading every small move & jumps into another influencer call
enters late → exits early → regrets both
portfolio becomes random coins with no real plan
deposits again to recover losses
loses that too
More than 53% of all crypto tokens ever launched are now dead
CoinGecko tracked nearly 20.2 million tokens launched from mid-2021 to the end of 2025 and found that 53% are no longer actively traded
- 13.4 million tokens failed in total
- 11.6 million of those failures came in 2025 alone, that is 86% of all deaths in a single year
- Q4 2025 was the worst quarter, with 7.7 million tokens wiped out in three months
The main reason: platforms like Pump.fun made token creation trivially easy
The result was a flood of low-effort memecoins with no development, no backing, and sometimes just a handful of trades before going silent
Then, in October 2025, a $19B liquidation cascade hit, the biggest liquidation event in crypto history, and it crushed an already saturated market
For context, only 2,584 projects failed in all of 2021
The market is wide open to launch anything, and that is exactly the problem
This is not a slow build anymore, institutions and liquidity are arriving at scale
- Circle's USYC: $2B+ supply, with 93% of the total on BNB Chain alone
- BlackRock's BUIDL fund: surpassed $500M on BNB Chain
- Franklin Templeton: expanded its platform, backed by $1.6T AUM
- Ondo Finance: brought 200+ tokenized US stocks & ETFs onchain
- VanEck's VBILL: launched tokenized US Treasuries on BNB Chain
- Matrixdock: tokenized gold now accepted as collateral on Venus Protocol
AI can write code. But can it maintain it over time?
That’s the question a new paper from Alibaba researchers sets out to answer.
They built SWE‑CI, a benchmark that tests AI agents on real‑world code evolution, not just one‑off fixes.
Here’s what makes it different:
- 100 real Python codebases from 68 GitHub repos
- each spans ~233 days of development
- ~71 commits per project on average
Instead of fixing a bug once, agents enter a continuous integration loop.
They must update code iteratively, adapt to new requirements, and keep everything working without breaking what’s already there.
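The loop above can be sketched in miniature. This is a toy illustration, not the benchmark's actual harness: the "repo" is just a dict of functions, each "requirement" adds a function plus a test, and after every step the whole accumulated test suite must still pass. All names here (`ci_loop`, `identity_agent`, the requirement tuples) are illustrative assumptions.

```python
# Toy sketch of a continuous-integration evaluation loop.
# NOT the SWE-CI API; all names are illustrative assumptions.
def ci_loop(agent, requirements):
    repo, tests, results = {}, [], []
    for name, impl, test in requirements:
        repo[name] = agent(impl)   # agent "writes" code for the new requirement
        tests.append(test)
        # every earlier test must still pass, not just the newest one
        results.append(all(t(repo) for t in tests))
    return results

# A perfectly faithful agent never breaks earlier requirements.
identity_agent = lambda impl: impl
reqs = [
    ("add", lambda a, b: a + b, lambda r: r["add"](2, 3) == 5),
    ("mul", lambda a, b: a * b, lambda r: r["mul"](2, 3) == 6),
]
# ci_loop(identity_agent, reqs) → [True, True]
```

The point of the structure: passing iteration N is only counted if iterations 1..N-1 still pass, which is exactly what separates maintenance from one-off bug fixing.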
This shifts the focus:
- from passing tests once → to sustaining code quality over time
- from static correctness → to long‑term maintainability
They even introduced a new metric: EvoScore. It rewards stability in later iterations and penalizes regressions as the code evolves.
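The paper's exact EvoScore formula isn't reproduced here, but a metric with those two properties (later iterations weigh more, pass→fail flips are penalized) could plausibly look like this minimal sketch. The function name, weights, and penalty constant are all assumptions for illustration:

```python
# Hypothetical EvoScore-style metric, NOT the paper's actual formula.
# results: list of booleans, one pass/fail flag per CI iteration.
def evo_score(results, late_weight=2.0, regression_penalty=0.5):
    if not results:
        return 0.0
    n = len(results)
    # linearly increasing weights: later iterations count more toward the score
    weights = [1.0 + (late_weight - 1.0) * i / max(n - 1, 1) for i in range(n)]
    base = sum(w for w, ok in zip(weights, results) if ok) / sum(weights)
    # a regression is a pass followed by a fail: previously working code broke
    regressions = sum(1 for a, b in zip(results, results[1:]) if a and not b)
    return max(0.0, base - regression_penalty * regressions / n)
```

Under this scheme an agent that finishes strong (fail, fail, pass, pass) outscores one that starts strong and regresses (pass, pass, fail, fail), even though both pass half the iterations, which is the behavior the metric's description implies.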
They tested 18 AI coding agents.
The results tell a different story from the usual one-shot benchmarks.
Most models can write code just fine. Almost all of them struggle to maintain it over time.