MiniMax just dropped a heavyweight AI agent, then quietly rewrote the rules.

**What landed**

- MiniMax M2.7 is now available on Hugging Face, and it's punching above its weight class. It posts top-tier results: 56.22% on SWE-Pro (software engineering tasks), 57.0% on Terminal Bench 2, and an Elo of 1495 on GDPval-AA (real-world knowledge work). That Elo is the highest among open-weight models and sits only a hair below closed models such as Claude Opus 4.6, Sonnet 4.6, and GPT-5.4.
- Architecture and efficiency: M2.7 is a 230B-parameter Mixture-of-Experts (MoE) model that activates roughly 10B parameters per inference, delivering frontier-level outputs at a fraction of frontier compute costs.
- A striking development claim: MiniMax says an internal version of the model ran 100+ autonomous self-optimization rounds, rewrote its own scaffold, and emerged roughly 30% better, with "no human in the loop," per the company.

**The licensing U-turn**

- Shortly after the weights appeared, MiniMax changed the terms. Non-commercial use remains free and unrestricted, but commercial use now requires written authorization from MiniMax. Research, personal projects, and fine-tuning for private deployments are unchanged.
- The community pushback was swift on Hacker News and in a Hugging Face thread. The sticking point: MiniMax labeled the new license "MIT-style" (or "Modified-MIT"), yet the MIT license explicitly permits commercial use. That mismatch, an MIT label paired with a commercial restriction, has developers calling the wording confusing at best.

**MiniMax's response**

- Ryan Lee, Head of Developer Relations at MiniMax, posted a detailed reply. His explanation: bad-faith hosting providers were publishing degraded or incorrectly formatted versions of older MiniMax models, sometimes heavily quantized, sometimes not even the real model, leading users to conclude MiniMax's models were mediocre.
- "They walk away thinking MiniMax is mid," Lee wrote, saying MiniMax was left to shoulder the reputational damage while legitimate hosting providers got drowned out.
- Lee framed the licensing change as a defensive move: with a fully permissive license, the company had no contractual leverage to stop bad actors. He invited feedback on edge cases that hurt legitimate community use and said MiniMax would rather revise the text than litigate.

**Why this matters now**

- Historically, MiniMax built credibility on fully open releases: M2 (MIT) in October 2025 and M2.5 (MIT) in February 2026. M2.7 marks the first break in that open streak.
- The timing is notable: MiniMax listed on the Hong Kong Stock Exchange in January 2026, raising roughly $620M with strategic backers including Alibaba and Abu Dhabi's sovereign wealth fund.
- It also fits a broader shift among major Chinese AI labs toward more restrictive licensing. Reports suggest Alibaba's Qwen team has moved closer to proprietary development, and Xiaomi released MiMo v2 under a closed license. The old shorthand, Chinese labs open and U.S. labs closed, is no longer reliable.

**What builders should watch**

- For startups, hosted services, and crypto projects that depend on permissive AI weights for commercial deployments, the change creates friction: you'll need written authorization to build a hosted or commercial product on M2.7.
- MiniMax says the authorization process will be "fast and reasonable," but until that process is public and tested, teams will need to weigh the legal friction against M2.7's performance and cost benefits.

**Bottom line**

MiniMax M2.7 is a high-performance, cost-efficient MoE model that could challenge closed-weight incumbents. The surprise license restriction on commercial use, however, injects uncertainty for companies and hosted services that planned to integrate it.
Watch MiniMax's authorization process and any future license clarifications; this could set a precedent for how powerful open-weight models are governed going forward.
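For readers wondering what "activates roughly 10B of 230B parameters" means in practice, here is a toy sketch of top-k Mixture-of-Experts routing. All sizes, names, and the routing scheme are invented for illustration (this is not MiniMax's actual architecture); the expert count is simply chosen so the active fraction mirrors the reported ~10B/230B ratio.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 23   # hypothetical: 23 equal-size experts
TOP_K = 1        # experts used per token -> 1/23 of weights, ~ the 10B/230B ratio
D = 8            # toy hidden dimension

# Each "expert" is a tiny feed-forward layer (a single weight matrix here).
experts = [rng.standard_normal((D, D)) for _ in range(N_EXPERTS)]
router = rng.standard_normal((D, N_EXPERTS))

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router
    chosen = np.argsort(logits)[-TOP_K:]                   # highest-scoring experts
    gates = np.exp(logits[chosen] - logits[chosen].max())
    gates /= gates.sum()                                   # softmax over chosen experts
    # Only the selected experts' weights are touched for this token.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, chosen))

y = moe_forward(rng.standard_normal(D))
active_fraction = TOP_K / N_EXPERTS
print(f"params touched per token: {active_fraction:.1%} of total")  # 4.3%
```

The point of the sketch: every token pays only for the experts it is routed to, which is how a 230B-parameter model can run with roughly the inference cost of a ~10B dense one.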