- 1. Claude Code can now control your Mac
- 2. Anthropic accidentally leaks Claude Code's source code
- 3. OpenAI's ad business hits $100M ARR in under two months
- Trending: SpaceX’s trillion-dollar IPO is on the horizon
- Bonus: Working hard or hardly working? A(I)’m just here to help
- In other news
- 3 new AI tools to try
- Recommended reading

The agent is in control, but who controls the agent?
The AI Digest: April 1, 2026
Agents control your desktop, the security focus moves from avoiding bad answers to avoiding bad actions, and Anthropic inadvertently leaks Claude Code’s source code.
1. Claude Code can now control your Mac
Anthropic launched computer use for Claude Code, allowing the CLI to open apps, click, type, and take screenshots without leaving the terminal. The feature is designed for tasks that require direct screen interaction, such as testing a mobile app flow, reproducing a visual bug, or automating a tool with no API access. It's available as a research preview on macOS for Pro and Max subscribers.
Why this matters: This launch is part of a broader industry push to build “computer-use” agents: models that can directly navigate your machine and operate the software on it (consider Perplexity’s computer-use work and OpenAI’s native computer use in GPT-5.4). The trend raises the stakes for security: the big risk is no longer a bad answer but a bad action. Computer-use models may warrant tighter permissions (restricting agent access to certain apps), clear action limits (read vs. write vs. execute), and audit logs (records of what the agent did, when, and why). Such controls matter more as agents integrate with workplace tools like Slack, Notion, and Gmail, expanding both the data they can access and the range of actions they can take.
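To make the permissions-plus-audit idea concrete, here is a minimal sketch of an "action gate" a computer-use agent could be routed through. Everything here is hypothetical (the policy shape, the `request_action` function, the app names); no vendor ships this exact API.

```python
import time

# Hypothetical per-app policy: which action classes the agent may take.
# "read" = look (screenshots), "write" = click/type, "execute" = run commands.
POLICY = {
    "Terminal": {"read", "write", "execute"},
    "Safari": {"read"},  # the agent may look, but not click
    "Mail": set(),       # no access at all
}

AUDIT_LOG = []

def request_action(app: str, action: str, detail: str) -> bool:
    """Allow the action only if the policy grants it; log either way."""
    allowed = action in POLICY.get(app, set())
    AUDIT_LOG.append({
        "ts": time.time(),
        "app": app,
        "action": action,
        "detail": detail,
        "allowed": allowed,
    })
    return allowed

# A screenshot of Safari is a read; composing mail would be a write.
print(request_action("Safari", "read", "screenshot of front tab"))
print(request_action("Mail", "write", "compose message"))
```

The point of the sketch: every action, allowed or denied, leaves an audit record, so "what did the agent do, when, and why" is answerable after the fact.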
2. Anthropic accidentally leaks Claude Code's source code
Anthropic mistakenly published a large debugging source map file in an npm release, exposing internal Claude Code source code, according to VentureBeat. Within hours of the release, developers had analyzed the TypeScript codebase and mirrored it across GitHub. In a statement to VentureBeat, Anthropic attributed the mistake to human error rather than a security breach, and said no customer data or credentials were involved. The leak may have given competitors an unusually detailed look at how Claude Code works, including its memory architecture and autonomous agent logic.
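Why does shipping a source map leak source code? A JavaScript source map is plain JSON, and its optional `sourcesContent` field can embed the original source files verbatim. The sketch below shows how trivially those files can be pulled out; the map contents here are invented for illustration, not taken from the actual leak.

```python
import json

# A toy source map (Source Map v3 shape). "sourcesContent" optionally
# carries the full original files, verbatim. Paths/contents are made up.
source_map_json = json.dumps({
    "version": 3,
    "file": "cli.js",
    "sources": ["src/agent.ts", "src/memory.ts"],
    "sourcesContent": [
        "export function runAgent() { /* ... */ }",
        "export class Memory { /* ... */ }",
    ],
})

def embedded_sources(raw: str) -> dict:
    """Pair each source path with its embedded original content, if any."""
    sm = json.loads(raw)
    return dict(zip(sm.get("sources", []), sm.get("sourcesContent", [])))

for path, code in embedded_sources(source_map_json).items():
    print(f"{path}: {len(code)} chars of original source recovered")
```

If a `.map` file like this lands in a published npm tarball, anyone who installs the package gets the original sources, no reverse engineering required; release hygiene (e.g., excluding map files from the published artifact) is the guardrail.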
Why this matters: Agentic security is as much about disciplined DevOps (packaging, configs, and release hygiene) as it is about model behavior. In last week’s Meta Sev 1 incident, an agent took an action it shouldn’t have, reinforcing the importance of guardrails like least-privilege access and read-vs-write controls. This week, Anthropic’s Claude Code story is the flip side: a human release mistake exposed some of the code that enforces those guardrails. The reminder? The safety bar for operator agents includes how reliably we ship and maintain the guardrails that contain them.
3. OpenAI's ad business hits $100M ARR in under two months
OpenAI crossed $100 million in annualized ad revenue less than two months after launching its ChatGPT ad pilot in the U.S., CNBC reported. The company said it is now working with more than 600 advertisers and has seen no impact on privacy-related trust metrics. To double down on its ads push, OpenAI has brought on Dave Dugan, former VP of global clients and agencies at Meta, as VP of global ad solutions, the Wall Street Journal reported. About 85% of free-tier and ChatGPT Go users in the U.S. are eligible to see ads, though fewer than 20% see them on any given day, an OpenAI spokesperson told CNBC.
Why this matters: The “ads in ChatGPT” experiment is turning into a notable revenue stream, even without OpenAI showing ads to all users at once. Hitting a $100M ARR run rate in two months with under 20% daily exposure suggests advertisers believe ChatGPT is a valuable marketing surface (e.g., users have high intent when researching specific topics). This strengthens the case for the hybrid model we flagged earlier: ad-supported access at the low end, enterprise subscriptions at the high end. The Dave Dugan hire is also telling: OpenAI is doubling down on building an ad sales platform. Advertisers will need to rethink measurement as conversions increasingly happen within ChatGPT rather than on external landing pages.
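The back-of-envelope math behind the "even without full exposure" point, using only the figures quoted in the story. The linear extrapolation is a naive upper bound, not a forecast.

```python
# Figures from the story (USD); the extrapolation below is naive and
# assumes revenue scales linearly with daily exposure, which it may not.
arr = 100_000_000        # annualized ad revenue run rate
daily_exposure = 0.20    # share of eligible users who see ads per day (upper bound)

implied_daily_revenue = arr / 365
naive_full_exposure_arr = arr / daily_exposure

print(f"Implied daily ad revenue today: ${implied_daily_revenue:,.0f}")
print(f"Naive ARR if all eligible users saw ads daily: ${naive_full_exposure_arr:,.0f}")
```

Even on this crude sketch, the headroom is roughly 5x before OpenAI touches paid tiers, which is why the hybrid-model thesis looks stronger this week.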
Trending: SpaceX’s trillion-dollar IPO is on the horizon
SpaceX is preparing for its mega-IPO, reportedly targeting a $75 billion raise at a $1.75 trillion valuation, with up to 30% of shares reserved for individual investors (about 3x the typical retail allocation).
The Information points to the structural pressure: even fund managers who think SpaceX is wildly overpriced may feel they can’t sit it out because benchmark pressure is asymmetric (if it rips and you missed it, you look uniquely wrong; if it drops and everyone bought, you’re wrong with the crowd). Heightening the FOMO, Nasdaq just approved a rule change that could allow big companies to enter its indexes as soon as 15 days after listing, even if the stock is thinly traded, per The Information.
Now layer in the AI angle: in February, SpaceX acquired Musk’s AI lab xAI, which Axios says is burning significant capital. This IPO could thus become the first major public-market reality check on whether investors will tolerate frontier-model economics (GPU burn now, “trust Elon” later) when the valuation is already trillion-plus.
Bonus: Working hard or hardly working? A(I)’m just here to help
AI is speeding people up, but not yet freeing up their days. A viral X post framed this trend as “workload creep”: you save time on a task with AI, and that time is immediately refilled with more work. The lingering question: as AI expands what’s possible, does the bar for good work rise, demanding more work in turn?
The data tells an interesting story. An NBER working paper based on a survey of nearly 750 corporate executives highlights a “productivity paradox,” where perceived productivity gains from AI exceed measured productivity gains, often because revenue effects take time to materialize. The paper finds that productivity gains from AI are positive, though uneven across sectors, with the largest gains concentrated in high-skill services and finance. These effects are expected to strengthen in 2026.
That gap between workers’ felt sense of increased productivity and the hard numbers to back it up might be where workload creep lives. While we wait for productivity gains to show up in revenue lines, our instinct may be to reinvest the time we save.
In other news
- OpenAI closes latest funding round with $122B at $852B post-money valuation (OpenAI)
- France’s Mistral AI raises $830 million in debt to fund AI data center near Paris (Reuters)
- Meta to launch Ray-Ban smart glasses models for prescription wearers (Bloomberg)
- AI startups are increasing base pay and offering more liquid equity to attract top talent (The Wall Street Journal)
- Ramp launches stablecoin accounts in public beta (Ramp)
3 new AI tools to try
- Qwen3.5-Omni, a fully omnimodal LLM that understands text, images, audio, and audio-visual content – try here (Alibaba)
- Critique and Council, a multi-model deep research system that combines AI models for best-in-class research quality – available in Microsoft 365 Copilot's Researcher (Microsoft)
- Transcribe, an automatic speech recognition model – download here (Cohere)
Recommended reading
- Sycophantic AI decreases prosocial intentions and promotes dependence (Myra Cheng, Cinoo Lee, Pranav Khadpe, Sunny Yu, Dyllan Han, and Dan Jurafsky)
- Hyperagents (Jenny Zhang, Bingchen Zhao, Wannan Yang, Jakob Foerster, Jeff Clune, Minqi Jiang, Sam Devlin, Tatiana Shavrina)
- Claudini: Autoresearch Discovers State-of-the-Art Adversarial Attack Algorithms for LLMs (Alexander Panfilov, Peter Romov, Igor Shilov, Yves-Alexandre de Montjoye, Jonas Geiping, Maksym Andriushchenko)
See you next week!
