AI Weekly: The $660 Billion Gamble and Anthropic's Field Day

This week in AI, Big Tech announced a record $660 billion infrastructure spend while Anthropic raised $30 billion and faced controversy over military use in Venezuela. Meanwhile, a new video generator rattled Hollywood, and Oxford researchers issued warnings about using AI for medical diagnoses.


Welcome to your weekly AI news roundup for the week of February 8th to February 15th, 2026. It has been a week defined by staggering financial figures, deep controversies in defense, and significant shifts in the entertainment landscape. From Silicon Valley’s historic spending spree to the battlefield in Venezuela, artificial intelligence is reshaping our world faster than ever.

Leading the headlines this week is the eye-watering financial commitment from Big Tech. In a near-simultaneous signal to the market, Google, Amazon, Meta, and Microsoft announced combined capital expenditure plans of $660 billion for AI infrastructure in 2026 alone. This unprecedented investment, aimed at fortifying data centers and next-generation compute capacity, triggered a paradoxical market reaction. Investors, seemingly spooked by the sheer scale of the spending and the pressure for returns, sent the combined market capitalization of these giants tumbling by nearly $1 trillion earlier in the week. The message is clear: the infrastructure build-out is entering a hyper-aggressive phase, but Wall Street is getting nervous about the bill.

While the giants wrestled with stock prices, Anthropic emerged as the central protagonist of the week. The company secured a fresh $30 billion funding round, catapulting its valuation to a massive $380 billion. Anthropic also made a splash in the sports world, announcing a multi-year partnership with the Atlassian Williams Formula 1 team. Under the deal, the Claude model will serve as the team's "Official Thinking Partner," assisting with strategy and engineering decisions.

However, Anthropic's week wasn't all positive. A report from the Wall Street Journal, cited by The Guardian on February 14th, alleged that the US military utilized Anthropic's Claude model during a raid in Venezuela. The operation, reportedly facilitated through a partnership with Palantir Technologies, has ignited a fierce debate about the ethical guardrails of large language models in combat zones, challenging the company's "Constitutional AI" safety branding.

In the creative sector, the release of "Seedance 2.0," a new AI video generation tool, has sent shockwaves through Hollywood. Released on February 13th, the tool's hyper-realistic capabilities have renewed fears of displacement among visual effects artists and actors, with industry insiders describing the mood as "spooked." The news coincided with broader economic anxieties highlighted by Bank of England Governor Andrew Bailey, who spoke at the AlUla Conference on Friday. Bailey warned that AI is beginning to aggressively reallocate jobs between sectors, a prediction seemingly validated as logistics stocks plunged following the launch of a new autonomous AI freight tool this week.

On the consumer front, Apple is reportedly preparing to open its CarPlay ecosystem to third-party AI rivals. Leaks from earlier in the week suggest that drivers will soon be able to integrate ChatGPT, Claude, and Gemini directly into their dashboards, moving beyond Siri for complex, hands-free reasoning tasks while driving.

Finally, a stark warning came from the medical community. Researchers at the University of Oxford published findings on February 13th labeling the use of AI chatbots for medical diagnosis as "dangerous." The study found that even top-tier models like GPT-4o and Command R+ struggled with reliability in health-related queries, urging caution as patients increasingly turn to AI for medical advice.

As we close out this volatile week, the tension between massive investment, military application, and economic disruption continues to mount. We will be watching closely to see how the market—and the regulators—respond in the days ahead.

Backgrounder Notes

Below are backgrounders and definitions for key concepts and entities mentioned in the article:

Capital Expenditure (CapEx) In the context of technology, this refers to the funds used by companies to acquire, upgrade, and maintain physical assets such as property, buildings, or equipment. For AI specifically, this largely involves the multi-billion dollar construction of data centers and the purchasing of specialized silicon chips (GPUs) required to train and run models.

Compute A shorthand industry term for "computational power" or "computational resources," representing the processing speed and volume required to perform complex calculations. In AI, "compute" is considered a raw commodity (similar to oil or electricity) necessary for training large language models.

Anthropic An American artificial intelligence safety and research startup founded in 2021 by former members of OpenAI. The company positions itself as a safety-focused competitor to OpenAI and Google, emphasizing "steerable" and reliable AI systems.

Claude The family of Large Language Models (LLMs) developed by Anthropic. It is the direct competitor to OpenAI’s GPT series and Google’s Gemini, and is often marketed on its ability to handle larger amounts of text (context windows) and its safety-oriented programming.

Palantir Technologies A software company that specializes in big data analytics, known particularly for its close ties to intelligence agencies and the US Department of Defense. Their platforms integrate massive datasets to help organizations and militaries make complex operational decisions.

Constitutional AI A training method developed by Anthropic designed to align AI behavior with human values by giving the model a written set of principles (a "constitution") to follow. This approach contrasts with the standard method of "Reinforcement Learning from Human Feedback" (RLHF), intending to make the AI safer and more predictable without relying solely on human contractors to rate responses.
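The core loop behind Constitutional AI can be illustrated with a toy sketch: draft a response, critique it against each written principle, and revise on any violation. Everything below is illustrative; in the real method the model itself performs the critique and revision steps, not hand-written string rules, and the `CONSTITUTION`, `critique`, and `revise` names are assumptions for this example only.

```python
# Toy sketch of the Constitutional AI critique-and-revise loop.
# NOTE: real implementations use the model itself to critique and
# revise drafts; the rule-based checks here are placeholders.

CONSTITUTION = [
    "Avoid giving instructions for causing harm.",
    "Be honest about uncertainty.",
]

# Hypothetical mapping of principles to red-flag phrases.
BANNED_PHRASES = {
    "Avoid giving instructions for causing harm.": ["how to build a weapon"],
}

def critique(response: str, principle: str) -> bool:
    """Return True if the draft violates the principle (toy check)."""
    return any(p in response.lower() for p in BANNED_PHRASES.get(principle, []))

def revise(response: str) -> str:
    """Hypothetical revision step; a real system would regenerate text."""
    return "I can't help with that, but I can suggest safer alternatives."

def constitutional_pass(response: str) -> str:
    # Check the draft against every principle; revise on any violation.
    for principle in CONSTITUTION:
        if critique(response, principle):
            response = revise(response)
    return response

print(constitutional_pass("Here is how to build a weapon: ..."))
```

The key design point is that the feedback signal comes from the written principles rather than from armies of human raters, which is what distinguishes the approach from standard RLHF.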

Generative Video (Text-to-Video) A branch of artificial intelligence where models generate video clips based on written text prompts or static images. Tools in this category (like "Seedance 2.0" mentioned in the text) analyze vast amounts of video data to learn how objects move and interact, creating realistic simulations that can mimic traditional filmmaking.

Gemini Google’s family of multimodal AI models designed to understand and operate across different types of information, including text, code, audio, image, and video. It is Google's primary answer to the ChatGPT ecosystem.

GPT-4o A flagship "omni" model developed by OpenAI, designed to process and generate text, audio, and images in real-time. It represents a high-water mark for consumer-grade AI reasoning and interaction capabilities.

Command R+ An open-weights large language model developed by the enterprise AI company Cohere. It is specifically optimized for "Retrieval Augmented Generation" (RAG)—a technique used to retrieve data from external sources—making it highly popular for business and technical tasks rather than creative writing.
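The RAG pattern mentioned above can be sketched in a few lines: retrieve the documents most relevant to a query, then fold them into the prompt so the model answers from that context rather than from memory alone. This is a minimal illustration only; the word-overlap scoring and prompt format below are assumptions for the sketch, not Cohere's actual API, and production systems typically retrieve with vector embeddings.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Retrieval here is naive word overlap; real systems use embeddings.
import re

DOCUMENTS = [
    "Claude is a family of language models developed by Anthropic.",
    "Gemini is Google's family of multimodal AI models.",
    "Command R+ is optimized for retrieval augmented generation.",
]

def tokens(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query; keep the top k."""
    q = tokens(query)
    ranked = sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Ground the model's answer in the retrieved context."""
    context = "\n".join(context_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

query = "Who develops Claude?"
print(build_prompt(query, retrieve(query, DOCUMENTS)))
```

The grounding step is what makes RAG attractive for business use: answers can be traced back to retrieved source documents instead of the model's opaque training data.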
