Why Anthropic Now Captures 70% of New Enterprise AI Spending — And What It Means
For most of 2023 and 2024, OpenAI and ChatGPT dominated the AI conversation. The brand name was synonymous with AI. "ChatGPT it" became a verb.
But quietly, in enterprise procurement departments, something different was happening.
New data from Ramp — the corporate card and spend management platform that processes billions in business spending — reveals a striking shift: Anthropic now captures approximately 70% of new enterprise AI spending among businesses that are newly adopting AI tools.
OpenAI, despite being the household name, is being passed over in the market that matters most for revenue: business customers.
How did this happen? What does it mean? And what should your business do with this information?
The Data Behind the Headline
Ramp has unique visibility into business spending because companies use its platform to pay for software subscriptions. When a company starts paying for an AI tool, Ramp sees it.
The 70% figure refers specifically to new enterprise customers — businesses that had not previously subscribed to an AI platform and are now choosing one. Among these new adopters, 70 cents of every AI dollar goes to Anthropic.
This is different from total market share, where OpenAI still leads due to its head start and consumer base. But in the enterprise segment — where contract values are higher, decisions are more deliberate, and switching costs are significant — Anthropic is winning.
Why Claude Is Winning Enterprise
Safety and Reliability
Enterprise customers have different requirements than individual users. A developer experimenting with AI can tolerate occasional hallucinations or inappropriate outputs. A bank, hospital, or law firm cannot.
Anthropic's Constitutional AI approach — training Claude with explicit safety guidelines — produces a model that enterprise risk and compliance teams find more acceptable. Claude is less likely to produce problematic outputs in high-stakes professional contexts.
Better Long-Context Performance
Enterprise use cases often involve long documents: legal contracts, financial reports, technical documentation, research papers. Claude's performance with long context — up to 200,000 tokens (approximately 150,000 words) — is consistently rated higher than GPT models for document-length tasks.
A law firm processing 200-page contracts, a financial analyst reviewing annual reports, an engineer reviewing codebases — these are Claude use cases where the quality difference is material.
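To make the context window concrete, here is a minimal sketch of a pre-flight check for whether a document plausibly fits in a single long-context request. The ~4-characters-per-token figure is a rough heuristic for English text, not real tokenizer behavior, and the output reserve is an illustrative assumption:

```python
# Rough pre-flight check: does a document plausibly fit in a 200K-token window?
# The ~4 chars-per-token ratio is a common heuristic; actual counts come from
# the provider's tokenizer and will differ.

CONTEXT_WINDOW_TOKENS = 200_000  # Claude's long-context window, per the article
CHARS_PER_TOKEN = 4              # rough heuristic, not an exact tokenizer

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return len(text) // CHARS_PER_TOKEN

def fits_in_context(text: str, reserve_for_output: int = 4_000) -> bool:
    """True if the document likely fits, leaving room for the model's reply."""
    return estimate_tokens(text) + reserve_for_output <= CONTEXT_WINDOW_TOKENS

# A 200-page contract at roughly 3,000 characters per page:
contract = "x" * (200 * 3_000)
print(estimate_tokens(contract))   # ~150,000 tokens
print(fits_in_context(contract))   # True: fits with room to spare
```

By this estimate, a 200-page contract lands around 150,000 tokens, which is why document-length legal and financial work sits comfortably inside Claude's window.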
The Coding Advantage
Claude Code's $2.5 billion ARR versus Codex's $1 billion tells a clear story: professional developers prefer Claude for coding tasks. Software development is one of the highest-volume enterprise AI use cases, and Claude's dominance here drives significant overall enterprise spending.
Microsoft's Surprising Endorsement
Microsoft, which has a multi-billion dollar investment in OpenAI, launched Copilot Cowork — a new enterprise AI agent — built partly using Anthropic's Claude. The fact that OpenAI's largest partner is choosing Claude for specific enterprise features is a significant signal about comparative capability.
The Trust Dimension
Anthropic's Public Benefit Corporation structure and explicit safety mission resonate with enterprise legal, ethics, and corporate governance teams in a way that OpenAI's rapid commercialization has not always matched.
When enterprise procurement goes through legal review — as it always does at scale — Anthropic's track record and transparency make approval easier.
OpenAI's Enterprise Response
OpenAI is not standing still. The Astral acquisition, the $25 billion ARR milestone, and the upcoming IPO all indicate a company in aggressive growth mode.
ChatGPT Enterprise has been improving with:
- Enhanced admin controls and security features
- Custom GPTs for organizational use cases
- Better integration with Microsoft 365
- Skills (reusable workflows) for team use
But the gap in enterprise trust is hard to close with feature parity alone. Enterprise customers switch slowly and switch back even more slowly.
The Gemini Enterprise Factor
Google is aggressively pushing Gemini into enterprise through Google Workspace integration:
- Gemini in Docs, Sheets, Slides, Gmail, Meet
- Google Cloud enterprise contracts bundling Gemini access
- Vertex AI for custom model deployment
For companies already deep in Google Workspace (including many Indian mid-market and enterprise companies), Gemini offers compelling value through existing relationships. Google's enterprise sales machine is formidable.
The three-way enterprise AI race — Anthropic/Claude, OpenAI/ChatGPT, Google/Gemini — is where the largest dollars are being spent, and the outcome is not yet certain.
What This Means for Indian Businesses
Choosing Your Enterprise AI Stack
The Ramp data provides clear guidance: if you are selecting an AI platform for your organization and do not have specific reasons to choose otherwise, the enterprise market's collective judgment favors Claude.
When to choose Claude/Anthropic:
- Legal, compliance, healthcare, or financial use cases where reliability matters most
- Heavy document analysis and long-context work
- Developer teams (Claude Code advantage)
- Organizations where AI safety and corporate responsibility are board-level concerns
When to choose ChatGPT/OpenAI:
- Consumer-facing AI features (brand recognition, user familiarity)
- Tight Microsoft/Azure integration
- Broad API ecosystem with the most third-party integrations
- Coding assistant preference for GPT-based tools
When to choose Gemini/Google:
- Google Workspace-first organizations
- Google Cloud infrastructure users
- Need for deep multilingual support including Indian languages
- Bundled pricing through existing Google contracts
The Multi-Model Reality
The smartest enterprise AI strategy in 2026 is not picking one winner. It is using the right model for the right task:
- Claude for document analysis and coding
- Gemini for Google Workspace tasks and Indian language needs
- GPT-4o for consumer-facing features and broad ecosystem integration
- Perplexity for research workflows requiring current information
Model routing infrastructure (like OpenRouter) makes this multi-model approach increasingly practical.
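A minimal version of that routing logic can be sketched in plain Python. The task categories mirror the list above; the model identifier strings are illustrative placeholders, not guaranteed to match any gateway's actual model list (OpenRouter publishes its own identifiers, which you would substitute here):

```python
# Minimal task-based model router, mirroring the multi-model strategy above.
# Model identifiers are illustrative placeholders; check your gateway's
# current model list (e.g. OpenRouter's) for the exact strings.

ROUTES = {
    "document_analysis": "anthropic/claude",   # long-context document work
    "coding":            "anthropic/claude",   # Claude Code advantage
    "workspace":         "google/gemini",      # Google Workspace tasks
    "indian_language":   "google/gemini",      # multilingual support
    "consumer_facing":   "openai/gpt-4o",      # brand familiarity, ecosystem
    "research":          "perplexity/sonar",   # current-information research
}

DEFAULT_MODEL = "anthropic/claude"  # enterprise default, per the Ramp data

def route(task_type: str) -> str:
    """Return the model identifier for a given task category."""
    return ROUTES.get(task_type, DEFAULT_MODEL)

print(route("coding"))        # anthropic/claude
print(route("workspace"))     # google/gemini
print(route("board_memo"))    # unknown task -> falls back to the default
```

In production this table would typically live in configuration rather than code, so the routing mix can be adjusted as pricing and model quality shift, without a redeploy.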
The Bigger Picture: AI Commoditization
The enterprise market shift to Claude does not necessarily mean Claude is dramatically better than GPT-4. The frontier models have converged significantly on capability benchmarks.
What Anthropic has done is win on trust, safety, and specialization for enterprise use cases — factors that matter more to enterprise buyers than raw benchmark scores.
As models continue to converge on capability, these factors will determine the market winners. Anthropic is betting that its safety-first positioning becomes a sustainable moat as AI becomes critical infrastructure.
So far, the data suggests it is right.
Making smart AI decisions for your business? Brandomize helps Indian businesses navigate the AI tool landscape and implement the right stack for their specific needs.