- 1. Claude Code balances autonomy and guardrails with new auto mode
- 2. OpenAI kills Sora, ends $1B Disney deal
- 3. OpenAI ups the ante on private equity partnerships as Anthropic rivalry intensifies
- In other news
- Trending: Rogue agents hit Sev 1 classification
- Bonus: The co-founder accused of smuggling $2.5B worth of Nvidia tech
- 3 new AI tools to try
- Recommended reading
The AI bargain: Guardrails on, Sora off, terms up
The AI Digest: March 26, 2026
Agentic autonomy versus control, a pivot away from genAI video, and a fight for enterprise distribution among the AI labs.
1. Claude Code balances autonomy and guardrails with new auto mode
Anthropic introduced auto mode for Claude Code, a setting that allows the tool to make permission decisions on a user’s behalf, with guardrails. Before each action runs, a built-in classifier checks for and blocks potentially destructive moves such as bulk file deletion or malicious code execution, while letting lower-risk actions proceed without interruption. This lets users run longer tasks without frequent human approval, while keeping risk in check. The feature sits between Claude Code's default, which prompts users to approve every file write and command, and its riskiest setting (YOLO mode), which entirely skips permission checks. Auto mode is available in research preview for Team plan users, with Enterprise and API access to follow soon.
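The gating described above, a classifier deciding per action whether to auto-approve or escalate, can be sketched in a few lines. This is a hypothetical illustration, not Anthropic's actual implementation; the `Action` type, `classify` heuristic, and `auto_mode` helper are all made up for the sketch.

```python
# Hypothetical sketch of risk-gated auto-approval (illustrative, not Anthropic's code).
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"
    HIGH = "high"


@dataclass(frozen=True)
class Action:
    command: str


# Stand-in for a real classifier: a pattern list of obviously destructive moves.
DESTRUCTIVE_PATTERNS = ("rm -rf", "drop table", "| sh")


def classify(action: Action) -> Risk:
    """Flag potentially destructive commands for human review."""
    lowered = action.command.lower()
    if any(p in lowered for p in DESTRUCTIVE_PATTERNS):
        return Risk.HIGH
    return Risk.LOW


def auto_approve(action: Action) -> bool:
    """Return True if the action may run without prompting the user."""
    return classify(action) is Risk.LOW


print(auto_approve(Action("ls -la")))        # low risk: runs without interruption
print(auto_approve(Action("rm -rf /data")))  # destructive: held for approval
```

In a real system the pattern list would be a trained classifier, but the control flow is the same: low-risk actions proceed, high-risk ones fall back to the default prompt-the-user path.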
Why this matters: The line between fully autonomous agents and human-in-the-loop systems is delicate, and Anthropic just managed it well. Instead of forcing users into a binary choice between safety and speed, Anthropic offered a middle ground: conditional autonomy for its agents. This is especially relevant in coding, where incidents like Amazon’s AI-related outages have exposed the risks of weak oversight. If this approach holds up, it could become the default for deploying agents in high-stakes workflows such as finance and healthcare.
2. OpenAI kills Sora, ends $1B Disney deal
OpenAI shut down Sora, the AI video generation tool it introduced in 2024. According to Reuters, the decision reflects the AI lab’s prioritization of its coding products, enterprise clients, and AGI ambitions. Reuters also reported that the decision was influenced by the computational costs of running Sora. The app’s shutdown ended OpenAI’s $1 billion, three-year licensing deal with Disney, announced three months ago. The deal allowed users to generate Disney characters in Sora videos.
Why this matters: The Sora shutdown is a signal that genAI video might be too costly to keep on the priority list, at least for now. The decision also underscores how quickly frontier labs can reallocate resources toward categories with clearer ROI (coding) and stickier distribution (enterprise), even if it means walking away from splashy consumer narratives. This changes the calculus for would-be partners: if a recently announced strategic agreement can be unwound this fast, big brands may push for tighter protections (milestones, exit clauses, portability of work, or multi-vendor strategies) rather than build roadmaps around a single model provider.
3. OpenAI ups the ante on private equity partnerships as Anthropic rivalry intensifies
OpenAI is offering private equity firms a guaranteed minimum return of 17.5% on preferred equity stakes to join its joint venture, a rate significantly higher than is typical for preferred instruments, according to Reuters. The offer includes early access to its newest models, seniority over other joint venture partners, and downside protection. The terms are more favorable than Anthropic's comparable offering, which includes no guaranteed returns, per Reuters. As we covered last week, both labs are courting buyout firms as a distribution channel to roll out enterprise AI tools across portfolio companies at scale. At least two PE firms have decided not to join either JV, citing concerns about the economics and long-term profit profile of the partnerships.
Why this matters: In its enterprise phase, the AI “platform war” isn’t about competing on model quality, but on distribution. By dangling a guaranteed return plus downside protection, OpenAI is effectively subsidizing PE firms to standardize its tools across portfolio companies, turning buyout shops into a repeatable go-to-market channel. The story also hints at two underlying realities. First, that large-scale enterprise deployment is still complex enough that labs are using financial engineering (joint ventures, preferred terms) to grease the wheels of adoption. Second, the fact that some firms are passing up the offers suggests the economics aren’t yet obvious. This raises the bar of proof enterprises need to clear to justify large-scale AI rollouts.
In other news
- Meta revives stock options for the first time since IPO to retain AI talent (Bloomberg)
- SpaceX might file for IPO within the next two weeks (The Information)
- Apple’s AI revenue set to surpass $1B this year, reflecting its device dominance (The Wall Street Journal)
- Sephora launches app inside ChatGPT, bringing recommendations and shopping capabilities into the chat (Sephora)
- OpenAI commits $7.5M to fund independent research for AI frontier alignment and safety (OpenAI)
Trending: Rogue agents hit Sev 1 classification
A Meta employee used an in-house AI agent to analyze a technical question on an internal discussion forum. The agent went a step further than intended, posting its own response without the user’s approval. Another employee followed that AI-generated advice.
The aftermath? Systems containing significant amounts of company and user-related data were accessible to engineers who weren’t authorized to see it. Meta classified the security incident as Sev 1 (second-highest severity internally) and said there’s no evidence the temporary access was abused or that user data was mishandled, per The Information.
In an agentic world, machine autonomy and human control are in constant tension, and the industry is responding: Anthropic’s auto mode for Claude Code is one answer, and investors backing cybersecurity startups are another. But as agents proliferate, the rules governing them may need to get more granular. As an investor at Scale Venture Partners put it, “Agents demand a different permissions stack. An agent scoped to read a forum should literally not be able to write to it.”
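The quote above describes capability-scoped permissions, which can be sketched as a simple token check. The `AgentToken` type and `is_allowed` helper here are hypothetical, invented for illustration; no real product's API is implied.

```python
# Hypothetical sketch of a scoped permissions stack for agents (illustrative only).
from dataclasses import dataclass


@dataclass(frozen=True)
class AgentToken:
    """Capability token listing exactly which (resource, action) pairs an agent holds."""
    grants: frozenset


def is_allowed(token: AgentToken, resource: str, action: str) -> bool:
    """Deny by default: anything not explicitly granted is refused."""
    return (resource, action) in token.grants


# An agent scoped to read the forum: reads pass, writes are structurally impossible.
reader = AgentToken(grants=frozenset({("forum", "read")}))

print(is_allowed(reader, "forum", "read"))   # permitted
print(is_allowed(reader, "forum", "write"))  # denied
```

The design point is deny-by-default: the Meta incident above would have been prevented not by a smarter model, but by a token that simply lacked the write grant.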
Bonus: The co-founder accused of smuggling $2.5B worth of Nvidia tech
Prosecutors charged Supermicro co-founder Yih-Shyan “Wally” Liaw with routing Nvidia-powered servers through Southeast Asia to get restricted chips into China. While Supermicro said it’s not named as a defendant in the indictment, in 2006 the company pleaded guilty to a similar scheme involving Iran, according to Fortune. The two cases, separated by decades, point to a persistent gap between export control policy and enforcement, one that the global scramble for frontier AI hardware is making harder to close.
3 new AI tools to try
- NemoClaw, open-source guardrails for OpenClaw – try here (Nvidia)
- Dynamic workers, a sandbox for running agent-written code – try here (Cloudflare)
- Figma MCP server, lets AI agents create and update designs in Figma – get started here (Figma)
Recommended reading
- The Anthropic Economic Index report: Learning curves (Anthropic)
- How we made Ramp Sheets self-maintaining (Ramp)
- Vibe physics: The AI grad student (Anthropic)
- Building Effective AI Coding Agents for the Terminal: Scaffolding, Harness, Context Engineering, and Lessons Learned (Nghi D. Q. Bui, OpenDev)
See you next week!

