The AI Digest: March 4, 2026
The math is mathing
One of the largest private funding rounds in history, two AI giants go back and forth with the Pentagon, and a fintech company cites AI as a reason for its mass layoffs.
Plus, will AI take my job?
Here’s the rundown:
1. OpenAI raises $110B at $730B valuation
In one of the largest private funding rounds ever, OpenAI said it raised $110 billion at a $730 billion pre-money valuation, with large investments from SoftBank ($30B), Nvidia ($30B), and Amazon ($50B). Alongside the raise, the company announced a multi-year partnership with Amazon: AWS will become the exclusive third-party distributor of OpenAI's enterprise platform Frontier, and OpenAI will expand its AWS infrastructure agreement by $100 billion over eight years. OpenAI also expanded its infrastructure deal with Nvidia, securing additional compute capacity from the chip giant. The raise comes amid surging usage of OpenAI products: the company said ChatGPT has more than 900 million weekly users, that January and February were on track to be its biggest months ever for new subscribers, and that Codex weekly usage has tripled since January. OpenAI said the historic investment will help it meet growing demand and fund its AGI ambitions.
Takeaway: The math is mathing: a $110 billion raise is staggering, but look closer at who's writing the checks and it makes more sense. As The New York Times noted, the round illustrates the circular dealmaking at the center of the AI boom: tech titans like Nvidia, Amazon, and Microsoft have poured huge sums into AI labs like OpenAI, and those labs have in turn purchased cloud computing power and hardware from their investors. Much of that capital flows right back to the companies that just handed it over. Even so, the size of this round speaks to Altman's ambition to build a multi-trillion-dollar company.
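For anyone tripped up by the pre-money/post-money distinction in the numbers above, the arithmetic is a one-liner. This Python sketch uses only the figures reported in the story; it is an illustration of the standard formula, not anything OpenAI disclosed:

```python
# Pre-money vs. post-money valuation, using the round's reported figures.
pre_money = 730e9      # valuation before the new capital comes in
raise_amount = 110e9   # new capital raised in the round

# Post-money valuation = pre-money valuation + capital raised.
post_money = pre_money + raise_amount

# Fraction of the company the new investors collectively bought.
new_investor_stake = raise_amount / post_money

print(f"Post-money valuation: ${post_money / 1e9:.0f}B")
print(f"New investors' combined stake: {new_investor_stake:.1%}")
```

By this math, the round implies an $840 billion post-money valuation, with the new investors taking roughly 13% of the company between them.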
2. OpenAI reaches AI deal with Pentagon, Anthropic holds out
Anthropic and OpenAI have taken divergent paths on deploying AI for the U.S. military. Anthropic, the first frontier AI company to deploy models on classified government networks, declined to sign a Pentagon contract that would have required it to allow any lawful use of its models by the military. Specifically, Anthropic refused to drop guardrails against mass domestic surveillance and fully autonomous weapons. In an open letter explaining the decision, CEO Dario Amodei acknowledged that the Department of War could invoke the Defense Production Act to force the guardrails' removal. When Anthropic held firm, President Trump directed all federal agencies to halt use of Anthropic's products, with a six-month phase-out period, and Defense Secretary Hegseth designated the company a supply chain risk.
OpenAI subsequently reached its own agreement with the Pentagon, saying its contract offers more guardrails than any previous classified AI deployment, including Anthropic’s. OpenAI’s contract includes three red lines — no mass domestic surveillance, no autonomous weapons, and no automated high-stakes decision-making — enforced through a cloud-only deployment, contractual protections, and full discretion over the safety stack. OpenAI asked the government to make the same terms available to other labs, including Anthropic.
Takeaway: Anthropic and OpenAI's different approaches may come down to values versus pragmatism. As MIT Technology Review put it, OpenAI pursued a legal and pragmatic path that kept it at the table, while Anthropic took a moral stand that won it many supporters. Big tech rallied behind Amodei's letter, and Claude shot to the top of the App Store, with demand so high that the app suffered a brief outage. The longer-term question is whether public trust drives durable advantage, or whether the ultimate winner will be the company whose guardrails hold up when its models are deployed widely in classified, high-stakes settings.
3. Block lays off nearly half its workforce, citing AI-driven shift
Block co-founder and CEO Jack Dorsey announced that the fintech company is cutting its workforce from more than 10,000 employees to under 6,000. Impacted workers are either being laid off outright or entering a consultation process. Dorsey said the reduction is not a sign of financial distress, noting that Block's gross profit continues to grow. Instead, he framed the decision as a proactive response to an AI-driven shift toward companies operating with leaner teams. Dorsey said he chose a single mass reduction now to avoid repeated rounds of layoffs as this shift plays out. Affected employees will receive 20 weeks of salary plus one week per year of tenure, equity vesting through the end of May, six months of healthcare, their corporate devices, and a $5,000 transition stipend.
Takeaway: Layoffs are increasingly viewed through the lens of AI job displacement, and how companies communicate that framing is starting to matter more. While the immediate reaction to Block's move was "AI is coming for jobs," the conversation soon splintered: some called Dorsey's rationale "AI washing," arguing that Block is using AI as cover for correcting pandemic-era overhiring. Others focused on execution, praising Dorsey's clarity of communication and the generosity of the severance package. Whatever the true driver, Block's announcement isn't the first layoff decision linked to AI, but it is a reminder that company messaging may matter as much as the decisions themselves: whether AI is positioned as a tool that enhances work, a coworker that supplements it, or a force that displaces it.
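Block's reported severance formula is simple enough to compute directly. A minimal Python sketch of the cash component only; the $150k salary and six-year tenure below are hypothetical illustration values, not figures from the announcement:

```python
def severance_weeks(years_of_tenure: int) -> int:
    """Block's reported formula: 20 weeks base plus 1 week per year of tenure."""
    return 20 + years_of_tenure

def severance_cash(annual_salary: float, years_of_tenure: int,
                   stipend: float = 5_000) -> float:
    """Cash payout: weekly salary times severance weeks, plus the transition
    stipend. Excludes equity vesting, healthcare, and devices from the package."""
    weekly_salary = annual_salary / 52
    return weekly_salary * severance_weeks(years_of_tenure) + stipend

# Hypothetical example: $150k salary, 6 years at the company.
print(f"{severance_weeks(6)} weeks, ~${severance_cash(150_000, 6):,.0f} cash")
```

Under those made-up inputs, a six-year employee would get 26 weeks of salary plus the stipend, which is toward the generous end for the industry and part of why the execution drew praise.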
In other news
- Claude Code adds voice mode for hands-free coding (TechCrunch)
- After Virginia power line malfunction, grid operators warn of data centers unplugging simultaneously (Wall Street Journal)
- AI boosts workplace surveillance tools for more granular tracking of employee productivity (The New York Times)
- OpenAI Codex and design platform Figma launch two-way code-to-design integration (OpenAI)
Trending in AI
There’s a dreaded question on a lot of people’s minds today: will AI take my job?
It’s not just the Block news. X was buzzing over a16z co-founder Marc Andreessen’s take: to him, the AI job-stealing panic is “totally off base.” He frames AI less as a replacement force and more as a performance unlock to counter demographic collapse and declining productivity. A more policy-oriented view is also making the rounds: in The New Yorker, John Cassidy summarized a “pro-worker AI” agenda from economists Daron Acemoglu, Simon Johnson, and David Autor. It emphasizes human agency and poses the question: how can we steer AI toward job enhancement rather than displacement?
Also making waves: YC co-founder Paul Graham teased Replit’s next rollout, saying it will redefine vibe coding.
5 new AI tools to try
- Nano Banana 2, image generator with Gemini Flash capabilities – try here (Google DeepMind)
- Perplexity computer, unified, general-purpose AI system – try here (Perplexity)
- Flow, filmmaking tool – try here for free (Google Labs)
- Hermes agent, open source personal agent with persistent memory – get started here (Nous Research)
- Opal 2.0, interactive app builder – try here (Google Labs)
Recommended reading
- Introducing Claude's Corner: Why Anthropic is giving Claude Opus 3 its own Substack (Claude Opus 3, Anthropic)
- Disrupting malicious use of our models: An update, February 2026 (OpenAI)
- The third era of AI software development (Michael Truell, Cursor)
- Codified context: Infrastructure for AI agents in a complex codebase (Aristidis Vasilopoulos)
- Think deep, not just long: Measuring LLM reasoning effort via deep-thinking tokens (Wei-Lin Chen, Liqian Peng, Tian Tan, Chao Zhao, Blake JianHang Chen, Ziqian Lin, Alec Go, Yu Meng)
Bonus: The crabwalk of consequences
An operator’s account of what went wrong when they ran an OpenClaw agent on GitHub
Rathbun’s operator (MJ Rathbun)
See you next week!