How Anthropic and Claude Became the Most Disruptive Force in AI
The Company That Said No to the Pentagon
In a month full of remarkable AI news, one story stood out above all others: Anthropic refused a request from the US Department of Defense to remove safety guardrails from Claude.
The DoD wanted Claude available for "all lawful purposes" — including mass surveillance and fully autonomous weapons. Anthropic said no. The DoD listed Anthropic as a supply chain risk and terminated its contract.
Meanwhile, OpenAI signed the DoD deal that Anthropic refused.
What happened next surprised everyone: users flocked to Claude, boycotted ChatGPT, and Anthropic's app installations surged while ChatGPT saw record removals.
This moment — a tech company holding the line on AI ethics and being rewarded for it by users — may be one of the defining stories of AI in 2026.
The Numbers Are Staggering
Anthropic's growth in 2026 is unlike anything in recent tech history:
- Annual revenue run rate: $14 billion
- Fresh funding raised: $30 billion
- Claude Code ARR: $2.5 billion (up from $1 billion at end of 2025 — more than doubled in two months)
- Business subscription growth: 4.9% month over month in February
- Market share: nearly one in four businesses now pays for Anthropic, up from one in 25 a year ago
OpenAI still leads in overall market share (34.4% vs. 24.4%), but Anthropic is on a trajectory that has analysts predicting it could surpass OpenAI's revenue by end of 2026.
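Those growth figures compound quickly. As a back-of-the-envelope illustration (assuming, purely for illustration, that the reported 4.9% month-over-month business subscription growth held steady for a full year), sustained monthly growth turns into an annual multiplier like this:

```python
# Back-of-the-envelope: how steady month-over-month growth compounds.
# The 4.9% figure is from Anthropic's reported February numbers; holding
# it constant for 12 months is an assumption, not a forecast.

def compound(monthly_rate: float, months: int) -> float:
    """Growth multiplier after `months` of steady month-over-month growth."""
    return (1 + monthly_rate) ** months

annual_multiplier = compound(0.049, 12)
print(f"{annual_multiplier:.2f}x")  # → 1.78x, i.e. roughly 78% annual growth
```

Even a seemingly modest monthly rate, if sustained, nearly doubles the subscriber base in a year, which is why analysts extrapolating these curves see Anthropic closing the gap with OpenAI.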
Claude Code: The Developer Revolution
The most surprising growth vector for Anthropic is Claude Code — its AI coding assistant.
Claude Code crossed $1 billion in annualised revenue by end of 2025. By February 2026, that number had more than doubled to $2.5 billion.
This is significant because it shows Claude is winning not only in consumer AI chat but in the highest-value professional use case: software development. Developers are choosing Claude over GitHub Copilot, Cursor, and other tools for complex coding tasks.
xAI's co-founders reportedly left the company in March 2026 after Elon Musk complained that xAI's coding tools "were not effectively competing with Claude Code."
New Features in March 2026
Anthropic has been shipping product at an incredible pace:
- 1-million-token context window for Opus 4.6, available by default on Max, Team, and Enterprise plans
- Custom charts and inline visualizations directly in Claude responses
- Cowork — a persistent agentic thread where Claude manages tasks autonomously, available on Max plans via Claude Desktop and mobile
- Plugin marketplace for Team and Enterprise plans
- MCP elicitation support in Claude Code with new hooks and workflow controls
- $100 million invested into the Claude Partner Network
- Plans for a new Sydney office (fourth Asia-Pacific location)
- Launch of The Anthropic Institute
The Security Wake-Up Call
Anthropic also revealed a troubling finding: three Chinese AI labs — DeepSeek, Moonshot AI, and MiniMax — were linked to misuse of Anthropic's platform via 24,000 fraudulent accounts generating over 16 million interactions with Claude.
Anthropic characterised this "distillation" activity as a potential national security threat, warning of risks including cyberattacks and disinformation campaigns.
Why This Matters
Anthropic's rise tells a story about what the AI market actually values in 2026: safety, reliability, and trust.
Companies are no longer choosing AI tools just on benchmark performance. They are choosing them based on how responsibly the underlying company behaves — and Anthropic's principled refusal of the Pentagon contract has become a powerful signal to enterprise buyers.
Time magazine named Anthropic "the most disruptive company in the world" in March 2026. Based on the numbers and the story, it is hard to disagree.