Dera News
dera monthly edition
March 2026 / Vol.-

The month things broke while running.


Issue file

Date: March 2026

Editor's Note

January's theme was delegation — AI doing real work stopped being hypothetical. February was orchestration — AI directing AI, while Anthropic told the Pentagon "No." March revealed the cracks that speed creates. The company that said No got breached. The most viral AI app collapsed under its own compute costs. A federal court had to step in. Performance and capital kept accelerating, but so did the failures.

The faster you accelerate, the bigger the debris when something breaks. March made this visible through three fractures: a safety company's security failures, a viral product's economic collapse, and a court intervening in the industry's ethical fault line. Evaluate AI vendors on operational security, not just model performance. Track compute costs daily, not monthly. Define your criteria for slowing down with the same seriousness you define your criteria for speeding up.

What changed this month

Anthropic

Before: In February, Anthropic positioned itself as the AI safety standard-bearer by publicly refusing the Pentagon's demand to remove safety guardrails. Safety was framed as a competitive advantage.

Now: In March, that same company exposed a next-gen model's existence through a CMS misconfiguration and shipped its entire source code to npm through a missing .npmignore entry. The gap between 'saying safety' and 'implementing safety' became visible.

Source code exposed: 512,000 lines

Anthropic / OpenAI

Before: In February, the Pentagon demanded Anthropic remove safety guardrails. Anthropic's CEO publicly refused. OpenAI signed a Pentagon deal hours later. The ethical fault line was exposed but unresolved.

Now: In March, a federal court intervened. The judge called it "classic illegal First Amendment retaliation" and blocked the ban. Meanwhile, OpenAI closed a $122 billion funding round, and the industry polarized between capital and ethics.

QuitGPT participants: 2.5 million

OpenAI

Before: Through February, the conversation centered on model commoditization and price compression. The sustainability of AI products themselves wasn't a central concern.

Now: March proved that even viral success can't survive bad unit economics. Sora's specific numbers — $15M/day in compute against $2.1M lifetime revenue — made the lesson concrete. OpenAI itself started cutting unprofitable products ahead of its IPO.

Compute burn: $15M/day

The issue

01 · Escalation / Anthropic · Revises dera's view

The Safety Company Broke Twice in Five Days

Contrarian — Anthropic

On March 26, a configuration error in Anthropic's internal CMS left roughly 3,000 unpublished assets accessible from the public internet. Among the leaked materials: the existence of "Claude Mythos" (internal codename "Capybara"), a next-generation model described internally as a "step change" in capabilities. Internal benchmarks showed it dramatically outscoring Claude Opus 4.6 across coding, reasoning, and cybersecurity tasks.

Five days later, on March 31, Claude Code's source code shipped to npm inside the published package. A missing .npmignore entry meant a 59.8 MB source map — 1,900 files, 512,000 lines of unobfuscated TypeScript — went out to every developer who installed the update.
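
For anyone shipping npm packages, the failure mode here is mundane and cheap to guard against. A minimal sketch of the standard guardrails (the paths are illustrative, not Anthropic's actual layout): prefer an explicit `files` allowlist in package.json over a .npmignore denylist, and inspect the tarball before anything leaves your machine.

```sh
# .npmignore uses .gitignore syntax, so the denylist fails open:
# one missing pattern and build artifacts ship to every installer.
# A "files" allowlist in package.json fails closed instead, e.g.:
#   "files": ["dist/", "!dist/**/*.map"]
# Either way, verify what the published tarball would contain:
npm pack --dry-run       # lists every file that would be packed
npm publish --dry-run    # same check at publish time; nothing is uploaded
```

The negated pattern matters in this case: it was a source map, not the compiled bundle, that carried the 512,000 lines out the door.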

The community response was immediate and chaotic. A clean-room rewrite project launched within hours, hitting 50,000 GitHub stars in two hours — one of the fastest-growing repositories in GitHub history. Simultaneously, attackers exploited the confusion: a trojanized axios package was distributed between 00:21 and 03:29 UTC, compromising developers who updated during that window.
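
The poisoned-window attack points at the other half of the defense: reproducible installs. A short sketch of the usual mitigations, assuming a standard npm setup (the override version below is illustrative, not a vetted release):

```sh
# Install exactly what package-lock.json records instead of resolving
# dependency ranges fresh, which is what pulls in a just-published trojan.
npm ci

# Check the installed tree against published advisories.
npm audit

# See which versions of a suspect package are actually present.
npm ls axios

# Pin a transitive dependency to a known-good version in package.json
# (supported since npm 8.3), for example:
#   "overrides": { "axios": "1.6.8" }
```

None of this helps a developer who ran a plain `npm install` inside the window, but it shrinks the population exposed to any given window.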

Anthropic might have world-class model safety, but they tripped on basic DevOps. One missing line in .npmignore was all it took.

Hacker News user

Counterpoint

Anthropic responded to both incidents within hours, identifying the blast radius and shipping fixes. Model safety research and operational security are different disciplines — a CMS misconfiguration doesn't invalidate years of alignment work. But it does raise questions about organizational maturity.

02 · Escalation / Anthropic / OpenAI · Supports dera's view

The Company That Said No Got Protected by a Court

Contrarian — Anthropic / OpenAI

Events moved fast after the Pentagon designated Anthropic a "supply chain risk" in late February. On March 3, the QuitGPT boycott crossed 2.5 million participants. ChatGPT uninstalls spiked 295% in a single day. That same day, Claude hit #1 on the App Store.

On March 9, Anthropic filed suit against the Trump administration. OpenAI's Sam Altman acknowledged the Pentagon deal "looked opportunistic and sloppy," announcing a contract amendment to prohibit domestic surveillance of US persons. But internally, CNN reported staff were "fuming."

On March 26, a federal judge granted Anthropic a preliminary injunction. The ruling called the Pentagon's response "classic illegal First Amendment retaliation" and blocked enforcement of the ban. It was the first time an AI company's refusal of a government demand on safety grounds received legal protection.

I'll admit the Pentagon deal looked opportunistic and sloppy.

Sam Altman (OpenAI CEO), CNBC

Counterpoint

The injunction is preliminary — the full trial could go differently. And even if Anthropic wins, the relationship with the Pentagon is likely damaged beyond repair, potentially locking them out of government contracts for years.

03 · Reframed / OpenAI · Revises dera's view

The Most Viral AI App Ever Made Is Shutting Down

Contrarian — OpenAI

On March 24, OpenAI announced it would shut down Sora. The app closes April 26; the API shuts down September 24. At launch, Sora went viral across every social platform. It peaked at roughly 1 million users. Behind the scenes, it was consuming $15 million per day in compute.

Lifetime revenue: $2.1 million. Less than a single day's costs. Users collapsed from that 1 million peak to below 500,000 with no recovery in sight. The Disney partnership — reportedly worth over $1 billion and spanning Marvel, Pixar, and Star Wars properties — was terminated with little advance notice.
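
Taken at face value, the reported figures make the gap easy to quantify: $2.1M lifetime revenue ÷ $15M/day of compute ≈ 0.14 days. Everything Sora ever earned paid for about 3.4 hours of its own burn.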

Sora's shutdown wasn't an isolated event. OpenAI is running a "product purge" ahead of its planned IPO (targeting Q4 2026, though the CFO has called 2027 more realistic). Sora was the first casualty. The same month saw GPT-5.4 launch in three variants and a $122 billion funding round close — clear signals that OpenAI is pivoting from consumer experimentation to enterprise productivity.

Sora was technically stunning but never worked as a business for a single day. There's no way to justify $15M/day in compute against $2.1M in lifetime revenue.

TechCrunch analysis

Counterpoint

OpenAI has hinted at repurposing Sora's technology as an enterprise video generation API. A consumer app failing doesn't mean the underlying technology is worthless — it may just need a different business model.

DISRUPTION — Voices of change

One missing line in .npmignore and 512,000 lines leaked. Trust needs to be replaced with verification.

Hacker News

The clean-room rewrite hit 50K stars in 2 hours. Open source moves at a terrifying speed.

Reddit

A company that refused a government on safety grounds just got legal protection. That's a precedent.

Reddit

I joined QuitGPT. Never thought an ethical choice between AI providers would be this clear-cut.

Reddit

$15M/day compute for $2.1M lifetime revenue. This is a textbook unit economics failure.

Hacker News

The Disney deal getting killed with no warning is the real shock. A billion-dollar partnership just vanished.

Reddit

LIMITATION — Voices of caution

Voices are still being collected.

Decision Board

Questions and execution guidance

This is the execution section: finalize what to test now and what to govern before rollout.

This Month's Questions

The faster you accelerate, the bigger the debris field when something breaks. Define your deceleration criteria with the same rigor as your acceleration criteria.

  1. Is the "safety" of the AI vendor you trust a statement or an implementation?
  2. Is your AI investment chasing virality or unit economics?

Editorial View

  1. The faster you accelerate, the bigger the debris when something breaks.
  2. March made this visible through three fractures: a safety company's security failures, a viral product's economic collapse, and a court intervening in the industry's ethical fault line.
  3. Evaluate AI vendors on operational security, not just model performance.
  4. Track compute costs daily, not monthly (a minimal sketch of such a check follows below).
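
On that last point, a daily check does not need infrastructure to start. Below is a minimal TypeScript sketch; every name in it is hypothetical scaffolding, with the two fetch functions standing in for whatever billing export and revenue report your stack actually exposes.

```ts
// Daily unit-economics check. A sketch, not a vendor API: both fetch
// functions are stubs you would wire to your own billing and revenue data.
async function fetchComputeSpendUSD(day: string): Promise<number> {
  return 500_000; // stub: replace with your cloud billing export
}

async function fetchRevenueUSD(day: string): Promise<number> {
  return 350_000; // stub: replace with your revenue reporting
}

async function dailyCheck(day: string): Promise<void> {
  const spend = await fetchComputeSpendUSD(day);
  const revenue = await fetchRevenueUSD(day);
  // Coverage below 1 means the product burned more on compute than it earned.
  const coverage = spend > 0 ? revenue / spend : Infinity;
  console.log(`${day}: revenue covers ${(coverage * 100).toFixed(0)}% of compute`);
  if (coverage < 1) {
    console.warn(`${day}: compute burn exceeds revenue; review now, not at month-end`);
  }
}

dailyCheck(new Date().toISOString().slice(0, 10));
```

Run it from a scheduler so the number surfaces every morning. Sora's lesson is that a monthly roll-up can hide a fatal gap for thirty days at a time.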

Continuity

How the monthly arc moved

This section shows where the monthly arc stood as of March 2026. It is the issue's endpoint at the time, not a live status view.

January 2026

Delegation

11 plugins wiped ~$285B in SaaS market cap / OpenClaw put an AI agent on every developer's desk

February 2026


Agent integration layer competition / Model race price collapse

March 2026


The Safety Company Broke Twice in Five Days / The Company That Said No Got Protected by a Court / The Most Viral AI App Ever Made Is Shutting Down

Nearly selected

What almost made the issue

This issue closed tightly around the final three stories.

Open questions

Is the "safety" of the AI vendor you trust a statement or an implementation?

Is your AI investment chasing virality or unit economics?

Method

This edition was compiled by reconciling dera editorial memory, dera reporting from the month, and outside community research.



Monthly note

📩 This report is published monthly. Subscribe to dera news (free)

💼 Want help implementing AI? Talk to dera

Week by week

A week of reckoning for AI agent security and business models (Mixed)

Week 14

This week, the AI landscape was rocked by an unprecedented series of security crises and business model pivots across leading agent platforms, with OpenClaw and Anthropic both facing high-profile vulnerabilities, source code leaks, and abrupt pricing changes. Meanwhile, Google and Hugging Face doubled down on local deployment and open customization, launching new models and tools that promise to slash operating costs and put more control in practitioners’ hands. As no-code AI-native platforms like Softr entered the mainstream and new security tools emerged to counter rising leak risks, the industry was forced to confront the true cost—and fragility—of rapid AI integration.

Articles: 9 · Companies: 22 · Topics: 43

When your agent’s code leaks and pricing doubles overnight, who do you really trust to run your business AI?

Dispatch vs. Pentagon vs. OpenClaw: AI collisions define the week (Optimistic)

Week 13

This week, Anthropic launched Claude Dispatch (Code Channels), letting practitioners control AI coding remotely — a direct answer to OpenClaw that has the community buzzing. Meanwhile, the Pentagon banned Claude, then the US military reportedly used it in Iran strikes hours later, raising urgent questions about AI vendor risk. In China, OpenClaw fever reshaped the tech landscape as local governments and Tencent raced to build rivals. NVIDIA GTC brought NemoClaw for secure enterprise agents, Mistral shipped Small 4 and Forge, and Bezos signaled a $100B bet on AI-driven manufacturing.

Articles: 10 · Companies: 0 · Topics: 0

Is unified, enterprise-ready AI the new baseline—or just the next walled garden?

Open-Source Model Acceleration Meets Secure Agentic Infrastructure (Optimistic)

Week 12

This week, the defining new signal was the rapid convergence of open-source, highly efficient AI models—led by Mistral Small 4, Xiaomi's MiMo-V2-Flash, and Mamba 3—independently released and immediately integrated into hands-on developer tools. In parallel, NVIDIA and Google shipped secure, enterprise-ready agentic AI infrastructure. Both trends were covered by multiple credible outlets and now directly enable practitioners to build, customize, and deploy agent-driven workflows at unprecedented speed and scale.

Articles: 10 · Companies: 19 · Topics: 49

If open-source AI is now enterprise-ready, what’s left for closed models to defend?

The multi-agent AI platform wave arrives (Optimistic)

Week 11

This week, multiple leading companies—including NVIDIA, Meta, and OpenClaw—independently launched or announced open-source platforms and standards specifically designed for multi-agent AI systems, marking the first coordinated industry push to make AI agents interoperable, portable, and community-driven across both enterprise and developer ecosystems.

Articles: 10 · Companies: 31 · Topics: 46

With agents going open and interoperable, is the era of AI silos finally over—or just beginning in new disguise?

The agent infrastructure moment: cloud giants and dev tools converge on deployable, persistent AI agents (Optimistic)

Week 10

This week, the defining shift was the mainstreaming of autonomous, stateful AI agents, now deployable at scale across cloud platforms and developer tools. The shift was heralded by Amazon Bedrock’s launch of a stateful runtime for agents, GitHub Copilot’s new autonomous coding features, and a surge in real-world agent integrations from design-to-code (Figma x Codex) to workflow automation, all independently covered by multiple credible sources.

Articles: 9 · Companies: 18 · Topics: 43

Are persistent AI agents becoming the new cloud infrastructure—and what happens when code, memory, and orchestration live everywhere?

Monthly synthesis

The month of March 2026 marked a significant maturation of the AI landscape, characterized by the transition of AI agents from prototypes to deployable, enterprise-ready infrastructure. This was fueled by a rapid convergence of open-source, efficient AI models and robust, secure agent platforms, signaling an industry-wide pivot towards practical, scalable, and cost-effective AI solutions over purely consumer-focused features.

AI Agents as Deployable, Enterprise-Ready Infrastructure

Began with Amazon Bedrock and GitHub Copilot launching stateful, autonomous agents as cloud infrastructure (Week 10). Evolved to include open-source multi-agent frameworks and interoperability standards from NVIDIA, LangChain, and GitAgent (Week 11). Further progressed with NVIDIA and Google rolling out secure, enterprise-ready agentic infrastructure (Week 12), culminating in enterprise-grade platforms such as NVIDIA's NemoClaw, alongside OpenAI's Astral acquisition, both focused on robust runtime environments and security (Week 13).

Explosion and Integration of Open-Source, Efficient AI Models

Started with open-source multi-agent frameworks and standards (NVIDIA Nemotron, GitAgent, OpenClaw) aiming for interoperability (Week 11). This rapidly accelerated with the release of lightweight, high-performance open-source models like Mistral Small 4, Xiaomi MiMo-V2-Flash, and Mamba 3, which were immediately integrated into developer tools (Week 12). By Week 13, these unified, multi-modal open-source LLMs were actively lowering the barrier for SMBs and developers, offering both flexibility and simplified integration.

Industry Pivot Towards Enterprise, Security, and Cost-Effectiveness

Initial emphasis fell on built-in security and memory management for deployable agents (Week 10). This expanded into the rapid evolution of enterprise AI agent infrastructure toward secure, open, and composable platforms, alongside emerging AI cost standardization (Week 12). Week 13 saw major cloud and software vendors aligning around robust, secure agent infrastructure and cost-effective APIs. The month concluded with a clear industry pivot from high-profile consumer features to enterprise infrastructure and hardware efficiency, marked by OpenAI's shift, Meta and Alibaba's proprietary chips, and Google's TurboQuant (Week 14).