The cybersecurity moment, and the monetization era

The AI Digest: April 8, 2026

It’s an Anthropic-packed week: the Claude-maker takes on cybersecurity, and signs its largest compute deal yet. OpenAI execs diverge on IPO timing.

Plus, MCP is down, skills are in.

Here’s the rundown:

1. Anthropic launches Project Glasswing, gives tech giants access to its unreleased model for cyber defense

Anthropic announced Project Glasswing, an initiative that gives a select group of companies access to Claude Mythos Preview, its unreleased frontier model, for defensive security work. Launch partners include AWS, Apple, Microsoft, Google, Cisco, CrowdStrike, JPMorganChase, Nvidia, and Palo Alto Networks. Anthropic’s thesis: AI models have reached a level of coding capability “where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” Anthropic says Mythos Preview has already autonomously identified thousands of zero-day vulnerabilities, flaws previously unknown to vendors, across all major operating systems and browsers. That includes a 16-year-old bug in FFmpeg, an open-source project for encoding and decoding video, that survived five million automated tests.

Why this matters: Frontier models are now so adept at finding software vulnerabilities that access to them has become a critical security concern. That may be why Anthropic is sharing Mythos Preview with a small group of cybersecurity-centric companies. Palo Alto Networks CEO Nikesh Arora framed it this way: “the barrier to entry for sophisticated cyberattacks drops to near zero, while the speed of those attacks accelerates.” His solution: put the same models on the defense.

2. OpenAI leadership diverges on IPO timing as spending commitments and Anthropic rivalry loom

OpenAI CEO Sam Altman and CFO Sarah Friar can’t agree on when to IPO, The Information reports. Altman wants to go public as soon as Q4 of this year – before Anthropic, which is said to be targeting its IPO in the same window. But Friar has told colleagues she doesn’t think OpenAI will be ready, citing procedural requirements and risks from its spending commitments. That includes $600 billion in server spending over five years and expectations that the company will burn more than $200 billion before generating cash. Despite the internal differences, OpenAI is laying IPO groundwork, tapping law firms and holding informal conversations with bankers at Goldman Sachs and Morgan Stanley, per The Information.

Why this matters: For OpenAI, going public is a strategic trade of control for scale. It would give the company a fresh capital engine to fund its long-horizon (very pricey) compute buildout. The IPO drive is clear; the internal disagreement is the interesting part: Altman’s push to list sooner reads as a bid to lock in momentum and narrative leadership ahead of the next wave of frontier competition. Friar’s caution reflects the risk of entering public markets (and being subject to quarterly financial scrutiny) while in peak-spend mode.

3. Anthropic signs its largest-ever compute deal, with Google and Broadcom

Anthropic signed its largest compute commitment ever with Google and Broadcom. For Anthropic, the deal secures multiple gigawatts of next-generation TPU capacity, expected to come online in 2027. The company cited accelerating Claude demand and said that its run-rate revenue has surpassed $30 billion. Claude remains the only frontier model available on all three major clouds: AWS Bedrock, Google Vertex AI, and Microsoft Azure Foundry.

Why this matters: Anthropic’s compute deal is ultimately a revenue story. Reserving next-generation capacity ensures supply keeps pace with enterprise demand for Claude. It also helps Anthropic avoid the margin hit of surge pricing later. The deal reflects Anthropic’s multi-hardware strategy: it trains across AWS Trainium, Google TPUs, and Nvidia GPUs to match workloads to the best-suited chips. Combined with Claude’s presence on all three major clouds, this gives Anthropic two levers: it captures enterprise demand wherever it lives, without fighting procurement inertia, and it avoids margin compression by routing compute to the most cost-efficient hardware available.

Trending: MCP, demystified (and slightly deflated)

Anthropic’s Model Context Protocol (MCP) has been hailed as the connective tissue for AI agents.

The latest piece from AI infrastructure startup Bem argues that the concept behind MCP isn’t new: systems describe what they can do in a structured way, and other programs invoke those capabilities. This has been the APIs-and-CLIs playbook for years.

So why the hype around MCP? The consumer is now an agent, not a human. Tool descriptions used to be written for humans to interpret and wire up. Now they’re written for models that can read a schema and act on it. MCP simply packages that job cleanly for agent tool use.
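To make the shift concrete, here is a minimal sketch of a machine-readable tool description in the spirit of MCP. The field names follow MCP’s published tool shape (`name`, `description`, `inputSchema` with JSON Schema), but the tool itself and the `validate_call` helper are invented for illustration, not part of any SDK:

```python
# Illustrative sketch: a structured tool description an agent can read,
# in the spirit of MCP's tool shape. The "get_invoice" tool and the
# validate_call helper are hypothetical, not a real MCP implementation.

get_invoice_tool = {
    "name": "get_invoice",               # hypothetical tool name
    "description": "Fetch an invoice by its ID.",
    "inputSchema": {                     # standard JSON Schema
        "type": "object",
        "properties": {
            "invoice_id": {"type": "string"},
        },
        "required": ["invoice_id"],
    },
}

def validate_call(tool: dict, arguments: dict) -> bool:
    """Check that a proposed call supplies every required argument."""
    schema = tool["inputSchema"]
    return all(key in arguments for key in schema.get("required", []))

# The agent reads the schema and constructs a call; the host validates it.
assert validate_call(get_invoice_tool, {"invoice_id": "inv_123"})
assert not validate_call(get_invoice_tool, {})
```

The point of the sketch: nothing here is exotic. It is the same capability-description pattern APIs have used for years; the difference is that the reader of the schema is now a model.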

Nonetheless, the piece concludes that even if MCP becomes the default, it doesn’t address the variables that decide adoption. Think: security, governance, audit trails, and reliability. MCP can standardize the interface, but trust still has to be built into the product.

Bonus: The skill takeover

Right on cue, the counter-trend to MCP hype is here: skills.

In one builder’s framing, a skill is the unit that ships: code and prompts combined into a single file in your repo, version-controlled and ready to deploy alongside the app.

The argument against MCP: it treats AI integration as an API problem. As a result, you get what the server author decided to expose. Skills, on the other hand, teach an agent how to use a capability — what to call, when to call it, how to interpret results, and what failure modes to avoid. Here, the agent decides dynamically.

And the game-changing twist: AI can write skills. The same model that uses a skill can also draft, debug, and improve it.
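To ground the “single shippable file” framing, here is a toy sketch: one version-controlled text file carrying both the prompt (when and how to use a capability) and the code that implements it. The file format, section markers, and `load_skill` helper are all invented for illustration; this is not an actual skills spec:

```python
# Toy sketch of a "skill as a single shippable file": prompt + code in
# one version-controlled file. The format here is invented, not a spec.

SKILL = """\
---
name: summarize_invoice
description: Summarize an invoice and flag line items over a threshold.
---
## Instructions (read by the model)
Call summarize(items, threshold) when the user asks for an invoice
summary. Flag any line item above the threshold.

## Code (executed by the host)
def summarize(items, threshold):
    total = sum(amount for _, amount in items)
    flagged = [name for name, amount in items if amount > threshold]
    return {"total": total, "flagged": flagged}
"""

def load_skill(text: str):
    """Split a skill file into metadata, instructions, and executable code."""
    _, frontmatter, body = text.split("---", 2)
    meta = dict(line.split(": ", 1) for line in frontmatter.strip().splitlines())
    instructions, code = body.split("## Code (executed by the host)\n")
    namespace: dict = {}
    exec(code, namespace)        # the same file ships runnable code
    return meta, instructions.strip(), namespace["summarize"]

meta, instructions, summarize = load_skill(SKILL)
result = summarize([("GPUs", 120_000), ("Snacks", 300)], threshold=10_000)
```

The design point the piece makes: because the prompt and the code travel together, the agent learns not just *what* to call but *when* and *how* — and a model can regenerate either half of the file.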

In other news

  1. Anthropic plans $200M investment in new PE venture (The Wall Street Journal)
  2. Meta’s Superintelligence team set to launch first AI models, with plans to open source some versions (Axios)
  3. Iran names $30B Stargate data center in Abu Dhabi as a target in new statement (Tom's Hardware)
  4. Anthropic ends Claude subscription coverage for third-party tools like OpenClaw (Boris Cherny, Anthropic)
  5. Anthropic acquires AI biotech startup Coefficient Bio for about $400M (The Information)

4 new AI tools to try

  1. AI Edge Eloquent, offline-first dictation app – download here (Google)
  2. MAI-Image-2, image generation model with enhanced photorealism – try here (Microsoft)
  3. Cursor 3, interface to run agents in parallel across repos – download here (Cursor)
  4. ChatGPT voice mode on CarPlay, access to ChatGPT’s voice mode in supported vehicles – get started here (OpenAI)
3 reads worth your time

  1. How I built a chief of staff on OpenClaw that's better than any human I've hired (Ryan Sarver)
  2. 532 years (Eric Glyman)
  3. Emotion concepts and their function in a Large Language Model (Anthropic)

See you next week!

Get The AI Digest delivered straight to your inbox each week.
Unsubscribe anytime.
Gayatri Sabharwal, Content Marketing
Gayatri covers the latest trends shaping finance and AI to help businesses move faster and work smarter. A New Delhi native, she previously worked in policy and strategy at the World Bank and UN Women.
Ramp is dedicated to helping businesses of all sizes make informed decisions. We adhere to strict editorial guidelines to ensure that our content meets and maintains our high standards.