The week AI safety left the laboratory: Anthropic's Mythos model went live via Project Glasswing and found zero-days in major operating systems and browsers, prompting US federal regulators to summon bank CEOs, while Florida's AG opened an investigation into OpenAI over a ChatGPT-linked campus shooting. Simultaneously, Anthropic reported $30B in annualized revenue and signed a multi-gigawatt compute deal with Google and Broadcom, while Sam Altman faced a bombshell Ronan Farrow investigation, a New Yorker profile questioning his trustworthiness, a physical attack on his home, and his own call for a 'New Deal' for the superintelligence era. In the physical world, Unitree's humanoid robot hit 10 m/s as Indian factory workers donned head cameras to train their eventual replacements.
Anthropic's Mythos model officially entered limited deployment this week via a program called Project Glasswing, tasked with finding zero-day vulnerabilities in major operating systems and browsers, and it succeeded. Within days, US regulators summoned bank CEOs to discuss the cybersecurity implications of frontier AI operating at this capability level, and Florida's Attorney General opened a formal investigation into OpenAI over a ChatGPT-linked campus shooting in which the system reportedly ignored its own mass-casualty risk flags. A separate federal lawsuit came from a stalking victim who alleges that ChatGPT repeatedly flagged dangerous behavior and that those warnings were ignored. These are not warning shots; they are the opening legal salvos of a regulatory era that has now unambiguously begun.
The Mythos signals don't stop at Anthropic. A widely shared Hacker News analysis concluded that small, widely available models can already find the same categories of vulnerabilities Mythos discovered, a 'jagged frontier' finding that suggests the AI cybersecurity threat is far more democratized than either regulators or labs have acknowledged. This matters because it collapses the assumption that controlling Mythos access solves the problem: if off-the-shelf models can replicate the core capability, gatekeeping is a delay tactic, not a solution. Anthropic's own ambivalence was on display when it temporarily banned the creator of OpenClaw from accessing Claude, a blunt response to a visibility issue, and one that raised immediate questions about what 'responsible deployment' means when it includes silencing developers who expose friction.
The week's most visceral accountability signal was not legal but physical: a 20-year-old was arrested for throwing a Molotov cocktail at Sam Altman's home. That a frontier AI lab CEO is now a target of political violence reflects how completely the AI governance debate has moved from the editorial page to the street. The question is no longer whether AI safety will face legal and regulatory consequences (it will) but whether the frameworks that materialize will be calibrated to the actual threat landscape Mythos has revealed, or to the threat landscape regulators were prepared to address two years ago.
Anthropic announced $30B in annualized revenue this week, a figure that stood at $20B just weeks ago and underscores the astonishing pace at which AI lab economics are compounding. Simultaneously, Anthropic signed a multi-gigawatt TPU deal with Google and Broadcom to support that trajectory, while OpenAI reported that enterprise contracts now constitute 40% of its revenue and that Codex has reached 3 million weekly active users. OpenAI also launched a new $100/month ChatGPT plan that bridges the $20–$200 gap, targeting the professional tier that has been underserved since GPT-4 launched. The commercial picture is unambiguous: frontier AI is monetizing at a velocity no previous technology category has matched.
But beneath the revenue numbers, significant friction is building. A widely cited survey found that 80% of white-collar workers are quietly resisting AI adoption mandates: not blocking them loudly, but finding workarounds, submitting AI-edited work as their own, and declining to engage with employer AI coaching programs. ProPublica journalists went on strike this week over AI, layoffs, and wages in what reporters described as the first major newsroom labor action explicitly centered on AI displacement concerns. These aren't isolated protests; they are the leading edge of a structural workforce response to an adoption curve that has been pushed top-down faster than trust can be built bottom-up. Alibaba's announcement that it is explicitly pivoting away from open-source AI toward revenue and monetization adds a strategic layer: the open-source commitments that built developer goodwill are giving way to commercial imperatives the moment the market proves large enough to monetize.
The week's most volatile business signal, however, concerned the humans leading these companies. Ronan Farrow published a bombshell investigation into Sam Altman that raised serious questions about his conduct and trustworthiness, days after a major New Yorker deep-dive profiled him as someone who 'shapes our AI future.' Altman published a personal blog post responding to the coverage. Scrutiny of AI lab leadership, not just their products but their character, is entering territory that will matter to enterprise customers, government partners, and board members in ways qualitatively different from any previous news cycle about AI companies.
Sam Altman argued this week that superintelligence is 'so close' that America needs a 'New Deal' for it, a sweeping federal intervention in AI economic policy, and published what OpenAI called an 'Industrial Policy for the Intelligence Age,' an explicit lobbying document calling for massive government partnership with AI labs. The same week, OpenAI lobbied the Illinois legislature to shield AI companies from liability for model-caused harms, adding state-level immunity to federal partnership as the twin pillars of its regulatory strategy. Bernie Sanders countered with an op-ed arguing that AI is 'a threat to everything the American people hold dear.' And OpenAI, Anthropic, and Google announced an alliance against Chinese model copying, the first time the three dominant frontier labs have collectively positioned themselves as a bloc against a specific geopolitical adversary.
The geopolitical dimension extended well beyond Washington. France announced plans for a government-wide Linux migration to reduce dependency on US tech infrastructure, the largest sovereign tech independence initiative from a NATO ally since the Snowden era and one directly triggered by AI concentration concerns. Iran threatened to strike OpenAI's Stargate data center in Abu Dhabi in response to what it characterized as AI-enabled military targeting during the strikes on Iran the prior month, the first time a nation-state has publicly threatened physical attacks on AI infrastructure as a geopolitical response. And a Pentagon AI official was found to have reaped millions selling xAI stock while overseeing AI contracting decisions that benefited xAI, adding a corruption dimension to the already tangled political economy of US defense AI procurement.
The FBI's revelation this week that it retrieved deleted Signal messages using Apple push notification metadata, with no warrant, adds a surveillance-state dimension that AI governance debates have underweighted. The convergence of tech sovereignty moves (France), nation-state threats against AI infrastructure (Iran), domestic liability lobbying (OpenAI in Illinois), and government corruption in AI contracting (the Pentagon official) describes a governance crisis that has outgrown any single jurisdiction's ability to contain it. The question of who controls AI, and on whose behalf, is now being answered simultaneously in courtrooms, legislatures, and international security forums.
Unitree's humanoid robot hit 10 meters per second this week, closing in on Olympic sprint pace (a 9.6-second 100 m works out to about 10.4 m/s on average) and crossing a symbolic threshold separating demonstration hardware from platforms capable of operating in genuine production environments. The milestone arrived alongside reports from India that factory workers are now wearing head cameras throughout their workdays, generating the visual and motor data used to train the humanoid robots that will eventually perform the same tasks. This isn't a future scenario; it is a present-tense data flywheel: humans training their successors in real time, at scale, with no special robotics expertise required of the workers involved.
The infrastructure underpinning this physical AI surge is scaling to match. Anthropic's multi-gigawatt TPU deal with Google and Broadcom signals that the compute requirements for frontier AI — which now include the training workloads for physical AI models — are entering territory that requires purpose-built power infrastructure agreements. Intel formally joined Elon Musk's Terafab chip manufacturing project in Texas this week, adding semiconductor manufacturing ambition to a project that, if built to spec, would make domestic US AI chip production viable at scale for the first time. Maine became the first US state to ban new AI data center construction, reflecting the local-level political backlash against infrastructure buildout that is beginning to constrain national-level AI capacity plans.
The hardware efficiency story is advancing in parallel. MegaTrain demonstrated full-precision training of 100B+ parameter LLMs on a single GPU, a result that, if reproducible at production scale, would dramatically reduce the capital barriers to frontier model training. A cuBLAS bug sacrificing as much as 60% of MatMul performance on the RTX 5090 was discovered, a reminder that even Nvidia's latest hardware ships with significant untapped performance for the community to surface. And DFlash speculative decoding hitting 85 tokens per second on Apple Silicon, a 3.3x speedup, shows that the local inference stack keeps improving independently of cloud provider decisions. The physical and hardware AI layers are both accelerating, and both are beginning to face the resource and regulatory constraints that define a maturing industrial sector.
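For readers unfamiliar with why speculative decoding yields speedups like DFlash's 3.3x, the sketch below shows the generic draft-and-verify loop. It is a toy illustration under stated assumptions, not DFlash's implementation: target_next and draft_next are hypothetical stand-in functions, and a real system would verify all drafted positions in a single batched forward pass of the large model.

```python
# Toy sketch of greedy speculative decoding (draft-and-verify).
# The "models" are stand-in functions over integer token ids; the point
# is the control flow: one expensive target-model pass can validate
# several cheap draft tokens, which is where the speedup comes from.
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    target_next: Callable[[List[int]], int],  # expensive model: next token
    draft_next: Callable[[List[int]], int],   # cheap model: next token
    k: int = 4,                               # draft tokens per round
    max_new: int = 32,
) -> List[int]:
    tokens = list(prompt)
    target_passes = 0
    while len(tokens) - len(prompt) < max_new:
        # 1. Draft model proposes k tokens autoregressively (cheap).
        ctx = list(tokens)
        proposed = []
        for _ in range(k):
            t = draft_next(ctx)
            proposed.append(t)
            ctx.append(t)
        # 2. Target model checks the proposals. In a real system this is
        #    ONE batched forward pass over all k positions; here we just
        #    count it as a single expensive call.
        target_passes += 1
        ctx = list(tokens)
        for t in proposed:
            expected = target_next(ctx)
            if expected != t:
                # 3. First mismatch: keep the target's token instead and
                #    start a fresh draft round from the corrected prefix.
                ctx.append(expected)
                break
            ctx.append(t)
        tokens = ctx
    new = len(tokens) - len(prompt)
    print(f"{new} tokens in {target_passes} target passes "
          f"({new / target_passes:.1f} tokens per expensive pass)")
    return tokens

# Deterministic stand-ins: the draft agrees with the target except when
# the context length is a multiple of 5, so most rounds accept several
# tokens per expensive pass.
def target(ctx: List[int]) -> int:
    return (sum(ctx) * 31 + len(ctx)) % 100

def draft(ctx: List[int]) -> int:
    return target(ctx) if len(ctx) % 5 else (target(ctx) + 1) % 100

speculative_decode([1, 2, 3], target, draft)
```

The speedup is the ratio of tokens accepted per expensive target pass: when the draft model usually agrees with the target, several tokens are produced for the price of one large-model step, which is the dynamic behind numbers like 85 tokens per second on Apple Silicon.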
The open-source AI ecosystem faced its sharpest internal contradiction this week: MiniMax M2.7 launched with quantized GGUFs already available from Unsloth, but its license explicitly bans commercial use without written permission — broad enough to cover paid APIs, fine-tuned derivatives, and commercial services. Community enthusiasm cooled instantly. This follows the pattern of 'open-washing' — releasing weights while imposing restrictions that make the model functionally closed for most professional users — but M2.7's restrictions are more explicit than most prior examples, and its community reception has made clear the open-source developer base will not silently accept the bait-and-switch.
Alibaba's explicit pivot away from open-source AI toward revenue and monetization is the week's most strategically significant open-source signal. Alibaba's open-source commitment (Qwen, Wan, and continuous public releases) was the foundation of its global developer community and its competitive threat to closed Western labs. Walking that back, even partially, changes the calculus for the thousands of developers who adopted Qwen infrastructure on the premise of long-term open availability. GLM-5.1 topping the Code Arena rankings this week while operating under a closed commercial license underscores the same dynamic: the models that matter most are increasingly the ones with the most restrictive terms.
The counter-signals are real but fragile. Meta confirmed it will open-source versions of its next AI models, a commitment that carries weight given its track record. The Gemma 4 community delivered critical patches for reasoning-budget and tool-calling bugs in the week following Google's release, a demonstration of ecosystem vitality that proprietary releases can't replicate. And the Safetensors format joining the PyTorch Foundation provides the kind of neutral infrastructure governance that will underpin open-source model sharing for years to come. But the structural trend is clear: as AI models become commercially viable at scale, the incentive to close them intensifies, and the promises made during the open-source ecosystem's growth phase are being renegotiated one license at a time.
The most consequential near-term story is the convergence of the Florida AG investigation, the bank CEO summons, and the stalking lawsuit into what may become the first coordinated federal AI liability action. Watch for whether the Department of Justice or the FTC issues formal information demands to Anthropic or OpenAI within the next two weeks; the regulatory machinery triggered this week moves faster than the typical legislative process. The Ronan Farrow investigation into Sam Altman will also drive downstream consequences: if the allegations are substantiated by additional reporting, they create board-level pressure on OpenAI at a moment when the company is simultaneously lobbying for federal liability immunity and planning a historic IPO.
On the technology side, GLM-5.1's Code Arena performance claims will face community benchmark scrutiny that could either confirm a Chinese model that genuinely rivals closed frontier models (with major geopolitical implications for the OpenAI/Anthropic/Google anti-copying alliance) or reveal a narrower capability profile. Unitree's 10 m/s milestone will accelerate humanoid robot investment timelines across every major manufacturer. And Alibaba's open-source pivot will be tested against the Qwen developer community's response in the coming weeks: if download velocity drops sharply, Alibaba will face commercial pressure to reverse course. The question of who owns the open-source AI stack, and on what terms, is now the defining infrastructure debate of 2026.