A new ProPublica investigation warns that the federal government is repeating the same rush-to-adopt mistakes with artificial intelligence that it made a decade ago with cloud computing, and that those errors could leave agencies exposed once again.

What happened before

Renee Dudley's April 6 piece argues that the Biden and Trump administrations' push to get agencies using big tech's AI looks eerily similar to the Obama-era scramble to move government systems to the cloud. The White House has framed AI adoption as a national competitiveness priority, and agencies are being offered cheap, easy access to powerful models: OpenAI's ChatGPT for $1 per user, Google's Gemini for $0.47, and xAI's Grok for $0.42. The speed and low cost echo how cloud deals were sold in the early 2010s: transformational, urgent, and cost-saving on paper.

Three cautionary lessons

- Free offerings can become lock-in. Microsoft's 2021 promise to give the federal government $150 million in security services looks, in practice, like a strategic way to entrench its products inside agencies. Once agencies accepted the free upgrades, switching vendors would be costly and disruptive. "It was successful beyond what any of us could have imagined," a former Microsoft salesperson told ProPublica. Microsoft and OpenAI have since disagreed publicly over contract terms, illustrating how fraught AI partnerships can be, even for the companies involved.
- Oversight needs funding and staff. FedRAMP, the Federal Risk and Authorization Management Program created in 2011 to vet cloud services, was worn down over years of pressure into approving a major cloud product despite cybersecurity concerns. ProPublica reports FedRAMP now operates "with an absolute minimum of support staff" and "limited customer service." The GSA defends the program, saying it "operates with strengthened oversight and accountability mechanisms," but former employees described it as increasingly unable to scrutinize products rigorously.
- Independent reviews have conflicts. As FedRAMP's in-house capacity shrank, third-party auditors picked up much of the vetting work, and those firms are paid by the companies they audit. Understaffed agencies often rely on those third-party certifications rather than conducting their own deep reviews, creating a structural conflict of interest and less reliable oversight.

Why this matters for AI and crypto observers

The risks aren't just bureaucratic. Dudley warns that the downsizing of oversight capacity has "far-reaching" implications for federal cybersecurity as agencies begin using AI tools that can process highly sensitive data under the same weakened framework that struggled with cloud security. The GSA has cautioned that AI "usage costs can grow quickly without proper monitoring and management controls," and it recommends setting usage caps and reviewing consumption. But those steps don't address the deeper problems: underfunded regulators, vendor-dependent audits, and limited leverage once a technology is embedded.

For crypto and web3 communities watching the governance of digital infrastructure, the parallels are clear: when governments adopt powerful technology quickly and cheaply, they can become dependent on a few large vendors while lacking the staff and independent review needed to manage risk. The result is a governance gap at a moment when AI systems are increasingly central to how public services and sensitive data are handled.