
The AI Digest: February 18, 2026
Something big has already happened
While AI labs continue to ship model upgrades and tech giants lock in infrastructure deals, there’s a growing sense that, beneath the surface, we are living through an unprecedented moment, even by AI's standards.
Plus, Silicon Valley heads to New Delhi to discuss AI.
Here’s the rundown:
1. Anthropic launches Claude Sonnet 4.6
Anthropic has released Claude Sonnet 4.6, its second major model release in under two weeks. The upgraded flagship Sonnet model approaches the intelligence level of Opus, Anthropic’s most advanced class of models, but is priced the same as its predecessor, Sonnet 4.5. Early users reported improvements in complex multi-step tasks, frontend coding, and financial analysis. They also saw “human-level capability” in the model’s computer use across tasks like filling out web forms and navigating spreadsheets. Sonnet 4.6 has a 1M-token context window, large enough to hold entire codebases or lengthy contracts in a single request, and can reason effectively across that context.
Takeaway: Anthropic is shipping at breakneck speed and moving frontier-level capabilities down the pricing curve. That pace signals development well beyond routine iteration, and Anthropic’s CTO called the launch “a full upgrade across coding, agents, and knowledge work.” By bringing near-Opus intelligence to a lower price tier, Anthropic is compressing the gap between frontier and mainstream models, making capabilities like advanced coding, reasoning, and computer use accessible to far more users. (A rough sketch of what a single long-context request looks like in practice follows below.)
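For a concrete sense of what a 1M-token window changes in practice, here is a minimal sketch of sending a small codebase in a single request with Anthropic’s Python SDK. The model identifier and project layout below are illustrative assumptions, not details from the announcement.

```python
# Minimal sketch: one long-context request over a whole (small) codebase.
# The model name is a placeholder guess; check Anthropic's docs for the real identifier.
from pathlib import Path

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Concatenate every Python file in a hypothetical project into one prompt.
# With a ~1M-token window, a mid-sized repository can fit without chunking or retrieval.
codebase = "\n\n".join(
    f"# File: {path}\n{path.read_text()}" for path in Path("my_project").rglob("*.py")
)

message = client.messages.create(
    model="claude-sonnet-4-6",  # placeholder identifier, not confirmed
    max_tokens=2048,
    messages=[
        {
            "role": "user",
            "content": (
                f"Here is a codebase:\n\n{codebase}\n\n"
                "Summarize the architecture and flag any obvious bugs."
            ),
        }
    ],
)
print(message.content[0].text)
```

The practical payoff of the larger window is exactly this: the whole repository or contract goes in one request, so the model can reason across it without retrieval pipelines or manual chunking.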
2. Meta and NVIDIA enter long-term AI infrastructure deal
Another week, another NVIDIA partnership: Meta and NVIDIA have announced a multi-year strategic partnership to advance Meta's AI infrastructure buildout. NVIDIA will supply technology for Meta's AI-optimized data centers, supporting both AI training and inference workloads at scale. The partnership includes adoption of NVIDIA's Vera Rubin platform for next-generation clusters, Spectrum-X Ethernet networking for low-latency AI-scale connectivity, and Confidential Computing for WhatsApp, enabling AI-powered features while keeping user data private. Engineering teams across both companies will co-design optimizations across CPUs, GPUs, networking, and software.
Takeaway: Hyperscalers are locking in supply before scarcity hits, effectively securing production capacity the way energy companies lock in fuel contracts. With tech titans expected to collectively shell out nearly $700 billion on AI this year alone, these long-term infrastructure deals are becoming increasingly critical. Tech giants want to secure hardware supply for years to come, and suppliers like NVIDIA want to secure customers for their next generations of hardware.
3. OpenAI releases first model for real-time coding
OpenAI has released a research preview of GPT-5.3-Codex-Spark, an ultra-fast model built for real-time coding in Codex. Developed in partnership with chip company Cerebras, the model delivers over 1,000 tokens per second, making it fast enough for interactive, back-and-forth collaboration. Unlike OpenAI's frontier models, which are optimized for long-running autonomous tasks, Codex-Spark is designed for in-the-moment work: targeted edits, rapid iteration, and near-instant responses. The preview is available to ChatGPT Pro users via the Codex app, CLI, and VS Code extension, with API access rolling out to select design partners.
Takeaway: OpenAI is betting on Codex in the race against Claude Code. After launching the Codex desktop app earlier this month, OpenAI shipped Codex-Spark, with rapid improvements already promised. With a real-time coding product, it’s optimizing for speed and developer flow, not just reasoning benchmarks. In OpenAI’s own words, in today’s interactive developer environment, “latency matters as much as intelligence.” (A sketch of the streaming pattern behind that claim follows below.)
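To make “real-time” concrete, here is a minimal sketch of the token-streaming pattern interactive coding tools rely on, using the OpenAI Python SDK. The model identifier is a placeholder assumption, and the research preview’s actual API surface may differ from this generic endpoint.

```python
# Minimal sketch: stream tokens as they are generated instead of waiting for
# the full response. The model name is a placeholder, not a confirmed identifier.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-5.3-codex-spark",  # placeholder identifier, not confirmed
    messages=[
        {"role": "user", "content": "Rename the variable `tmp` to `buffer` in this function: ..."}
    ],
    stream=True,  # tokens arrive incrementally rather than in one final payload
)

# Print each delta as it arrives; at 1,000+ tokens per second the edit appears
# essentially instantly instead of after a multi-second wait.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

At that throughput, the loop above feels instantaneous to the developer, which is the whole point of a model tuned for targeted edits and rapid iteration rather than long-running autonomous tasks.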
In other news
- OpenClaw founder Peter Steinberger joins OpenAI to build personal agents, OpenClaw to remain open source (Peter Steinberger)
- Micron plans $200B expansion to address biggest memory chip shortage in 40 years (Bloomberg)
- Apple continues push into AI hardware with smart glasses, pendants and camera AirPods (Bloomberg)
- ElevenLabs launches voice and chat agents for government organizations (ElevenLabs)
- SpaceX and xAI compete in Pentagon contest to produce voice-controlled drone swarming technology (Bloomberg)
Trending in AI
Silicon Valley headed to New Delhi for the AI Impact Summit this week. Soaring hotel prices made headlines in the lead-up to the event as the city prepared to host top tech leaders including Sundar Pichai and Sam Altman. At the summit, India projected over $200B in AI investments over the next two years and committed 20,000 GPUs to strengthening domestic AI infrastructure. AI labs made key moves during summit week: Anthropic partnered with India-headquartered Infosys and opened its first Bengaluru office, while Sam Altman revealed that ChatGPT now has 100 million weekly active users in India, making the country the platform’s second-largest market after the US.
You can stream the event here.

Live from Delhi’s AI Impact Summit!
Events to watch in AI
- Human(X) (April 6-9, San Francisco)
- Google I/O – Google’s developer conference (May 19-20, Mountain View, California)
3 new AI tools to try
- Tiny Aya, multilingual AI model – download here (Cohere Labs)
- Manus agents, now available in Telegram – download here (Manus AI)
- Qwen3.5 – download here (Alibaba)
Recommended reading
- A coming age of reason: Evolutionary innovation and the new layers of agentic software (Matt Jacobson and Murali Joshi, ICONIQ Capital)
- The cost of staying (Amy Tam, Bloomberg Beta)
- Something big is happening (Matt Shumer, OthersideAI)
See you next week!

