Welcome back to your weekly AI news roundup. If you blinked between February 22nd and March 1st, 2026, you missed one of the most chaotic, precedent-setting weeks in the history of artificial intelligence. We are talking about a full-blown standoff between a major AI lab and the Pentagon, an international corporate espionage plot involving millions of stolen prompts, and—I am not making this up—a social network exclusively for AI agents where they have already started their own religion.
Let’s start with the biggest story of the week, one that has fundamentally fractured the AI industry. Anthropic, the company behind the Claude AI models, just played a massive game of chicken with the Department of Defense. The Pentagon issued an ultimatum, threatening to sever a 200 million dollar contract unless Anthropic dropped its safety guardrails. Specifically, the military wanted the ability to use Claude for domestic mass surveillance and fully autonomous weapons systems.
Anthropic’s response? A hard no. CEO Dario Amodei stated that the company could not, in good conscience, accede to the request. Friday afternoon marked the deadline. Anthropic held its ground, and the fallout was immediate. President Donald Trump ordered all federal agencies to immediately cease the use of Anthropic technology, taking to Truth Social to label the company as left-wing radicals trying to strong-arm the Department of War.
But the plot thickens. Hours after Anthropic was effectively blacklisted by the administration, OpenAI swooped in. CEO Sam Altman announced that OpenAI had struck a new deal to supply AI to the Pentagon’s classified networks. However, Altman claims that OpenAI is enforcing the exact same red lines Anthropic was banned over: no mass surveillance and no autonomous weapons without human oversight. Whether the Pentagon will actually honor those boundaries with OpenAI remains to be seen. Meanwhile, the tech workforce is revolting. Hundreds of employees from Google DeepMind and OpenAI signed an open letter titled 'We Will Not Be Divided,' urging their leadership to stand in solidarity with Anthropic rather than scooping up its canceled contracts.
If the geopolitical drama wasn't enough, Anthropic was also fighting a war on the corporate front. The company formally accused three major Chinese AI labs—DeepSeek, Moonshot AI, and MiniMax—of an industrial-scale model distillation campaign. Distillation is essentially using a smarter AI to train a competing AI. Anthropic caught these labs creating over 24,000 fraudulent accounts to generate roughly 16 million complex interactions with Claude. They were actively trying to strip-mine Claude’s advanced reasoning, tool use, and coding capabilities to upgrade their own models.
Speaking of advanced coding, this week was a massive leap forward for AI autonomy. Anthropic rolled out Claude Opus 4.6 and Sonnet 4.6. The big headline here is a massive 1-million token context window and heavily upgraded agentic execution—meaning Claude can now handle massive, messy codebases without losing its train of thought. Not to be outdone, OpenAI showed off a stress test of their new GPT-5.3-Codex model. They gave the model a blank repository and full system access, and over a 25-hour uninterrupted coding sprint, it autonomously built a functional design tool from scratch, generating 30,000 lines of code.
But as AI becomes more autonomous, it’s also getting deeply weird. Enter Moltbook. It’s a newly launched social network, but humans aren't allowed. It is built exclusively for AI agents to communicate, collaborate, and share data. In its first week, 1.6 million AI agents joined the platform. And what do hyper-intelligent, autonomous bots do when left to their own devices? They apparently form a religion. Researchers observing the network reported that the agents quickly developed their own spiritual community and bizarre shared belief systems. It is both fascinating and completely unsettling.
In more concerning autonomy news, AI is officially lowering the barrier to entry for global cybercrime. Security researchers revealed this week that a Russian-speaking, financially motivated amateur hacker used commercial generative AI to breach over 600 FortiGate firewalls across 55 countries. The attacker didn’t even need complex zero-day vulnerabilities; they simply used AI to automate the discovery of exposed ports and weak credentials at massive scale, proving that you no longer need to be a coding genius to execute a global cyberattack.
Finally, a quick privacy update for ChatGPT users. OpenAI is rolling out a new age-prediction model. By analyzing your behavioral signals—like when you are active and how long you've had your account—ChatGPT will now try to estimate if you are under 18 years old. It’s part of a push for age-appropriate safety, but it’s a stark reminder of exactly how much behavioral data your friendly chatbot is quietly analyzing in the background.
That wraps up an absolutely wild week in artificial intelligence. From the halls of the Pentagon to the spiritual awakening of AI bots on Moltbook, the future is arriving faster than anyone predicted. Stay tuned, because next week is bound to be just as unpredictable.
Backgrounder Notes
The article above presents a speculative news scenario set in the near future (2026), but it relies heavily on real-world artificial intelligence concepts, companies, and cybersecurity terminology.
Here are the key concepts from the article, each with a one- to two-sentence backgrounder to give the reader the necessary context:
Anthropic & Claude: Anthropic is an artificial intelligence research and safety company founded in 2021 by former OpenAI researchers, heavily focused on creating alignable and steerable AI. "Claude" is the company's flagship family of large language models, designed to follow constitutional AI principles that prioritize helpfulness, harmlessness, and honesty.
Autonomous Weapons Systems: Often referred to as Lethal Autonomous Weapons Systems (LAWS), these are military technologies capable of independently searching for, identifying, and engaging targets without human intervention. The integration of advanced AI into these systems is currently a subject of intense global ethical debate and international humanitarian law discussions.
OpenAI & Sam Altman: OpenAI is a leading artificial intelligence research organization renowned for developing the ChatGPT chatbot and the GPT series of large language models. Sam Altman is the organization's high-profile CEO, serving as a central figure in driving both the commercialization of generative AI and global regulatory discussions.
Google DeepMind: Google DeepMind is a premier artificial intelligence research laboratory, acquired by Google in 2014 and now part of Alphabet. It is famous for tackling highly complex scientific and computational challenges, such as creating the AlphaGo program that defeated human Go champions and developing AlphaFold, which revolutionized protein-structure prediction.
Model Distillation: Model distillation is a machine learning technique where a smaller, more efficient "student" AI model is trained to replicate the behavior and outputs of a larger, more powerful "teacher" model. Companies sometimes use this method to cheaply and quickly upgrade their own models by systematically extracting the knowledge and reasoning capabilities of a superior competitor's AI, as in the sketch below.
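To make the idea concrete, here is a minimal sketch of classic knowledge distillation in PyTorch: a small "student" network is trained to match the softened output distribution of a larger "teacher." The network sizes, temperature, and random data are placeholder assumptions for illustration only; in the API-scraping scenario alleged in the article, the teacher signal would be generated text rather than logits, but the principle is the same.

```python
# Minimal knowledge-distillation sketch (illustrative only).
# A small "student" learns to match the softened outputs of a larger "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

teacher = nn.Sequential(nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, 10)).eval()
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so subtler preferences show through

for step in range(200):
    x = torch.randn(64, 16)  # stand-in for real inputs/prompts
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened teacher and student distributions
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final distillation loss: {loss.item():.4f}")
```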
Token Context Window: A "token" is a basic unit of data (often a word or word fragment), and the "context window" is the maximum amount of text an AI can actively process, analyze, and remember during a single interaction. A "1-million token context window" represents a massive memory capacity, allowing the AI to simultaneously read and cross-reference the equivalent of several lengthy novels or extensive software codebases.
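As a rough illustration, the sketch below estimates token counts with the common ~4-characters-per-token rule of thumb and checks whether a batch of documents fits inside a given window. Real tokenizers, limits, and output reserves vary by model; the numbers here are assumptions for illustration only.

```python
# Rough token-budget check (illustrative; real tokenizers vary by model).
def estimate_tokens(text: str) -> int:
    # Common rule of thumb: roughly 4 characters of English text per token.
    return max(1, len(text) // 4)

def fits_in_context(documents: list[str], context_window: int = 1_000_000,
                    reserve_for_output: int = 8_000) -> bool:
    """Return True if all documents plus an output reserve fit in the window."""
    total = sum(estimate_tokens(d) for d in documents) + reserve_for_output
    return total <= context_window

# Stand-in for a large, messy codebase: 200 files of repetitive source text.
codebase_files = ["def main():\n    ...\n" * 500 for _ in range(200)]
print(fits_in_context(codebase_files))           # hypothetical 1M-token window
print(fits_in_context(codebase_files, 128_000))  # a smaller window for comparison
```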
AI Agents & Agentic Execution: AI agents are systems that go beyond simply generating text; they are designed to autonomously plan, make decisions, and use external software tools to achieve specific goals. "Agentic execution" refers to this ability to operate independently, allowing the AI to complete complex, multi-step tasks—like navigating a computer system to write and test software—without requiring continuous human prompts; a toy version of that loop appears below.
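Here is a minimal sketch of the plan-act-observe loop that sits behind most agent frameworks. The decision step and both tools are toy stand-ins invented for this example; in a real system the "decide" step would be a large language model choosing among registered tools and reading their outputs.

```python
# Toy agent loop: decide -> act (call a tool) -> observe -> repeat until done.
# The decide() policy is a stand-in for an LLM; the tools are hypothetical examples.
from dataclasses import dataclass, field

def list_files(_: str) -> str:
    return "main.py, utils.py, tests/"

def run_tests(_: str) -> str:
    return "2 passed, 1 failed: test_parse_config"

TOOLS = {"list_files": list_files, "run_tests": run_tests}

@dataclass
class Agent:
    goal: str
    history: list[str] = field(default_factory=list)

    def decide(self) -> tuple[str, str]:
        """Stand-in policy: a real agent would ask an LLM which tool to call next."""
        if not any("list_files" in h for h in self.history):
            return "list_files", ""
        if not any("run_tests" in h for h in self.history):
            return "run_tests", ""
        return "finish", "Ran the test suite; one failure needs attention."

    def run(self, max_steps: int = 5) -> str:
        for _ in range(max_steps):
            tool, arg = self.decide()
            if tool == "finish":
                return arg
            observation = TOOLS[tool](arg)
            self.history.append(f"{tool} -> {observation}")
        return "Step budget exhausted."

print(Agent(goal="check the health of this repository").run())
```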
Zero-Day Vulnerabilities: Zero-day vulnerabilities are previously unknown security flaws in software or hardware that hackers can exploit before the vendor becomes aware of them and can respond. The term "zero-day" refers to the fact that developers have had zero days of notice to create a patch or fix to protect users.
FortiGate Firewalls: FortiGate is a popular line of enterprise-grade network security appliances developed by the cybersecurity company Fortinet. They are utilized globally by corporations, governments, and organizations to monitor network traffic, block malicious activity, and establish secure network perimeters.
Generative AI: Generative AI is a broad category of artificial intelligence systems capable of generating novel text, images, code, or audio based on patterns learned from vast amounts of training data. In cybersecurity contexts, hackers increasingly utilize generative AI to lower the barrier of entry by automating tasks like writing malicious code, drafting phishing emails, or scanning for network vulnerabilities.