Deep Dive
Anthropic's Leaked Mythos Model Brings New Security Headaches
Anthropic accidentally left nearly 3,000 assets in a publicly accessible data cache, including draft documentation for a new model called Claude Mythos, referred to internally as Capiara. It sits in a new tier above the company's current flagship Opus model. The leaked material indicates the model is already trained and being tested with early enterprise customers, and Anthropic confirmed it is working on a major step change in reasoning, coding, and cybersecurity capabilities. The company is being unusually cautious about the rollout because internal testing shows Mythos is far ahead of competing models at finding and exploiting vulnerabilities. Anthropic has prior experience with this problem: a Chinese state-linked group once used Claude Code in real attacks against 30 organizations, including tech companies and financial institutions. Because of this risk, Anthropic is limiting initial access to enterprise customers and even planning an invite-only CEO retreat in the UK to give business leaders early exposure to unreleased capabilities.
Meta's Brain-Reading AI Predicts How Humans Respond to Content
Meta's FAIR team introduced Tribe V2, an AI system that predicts how the human brain responds to video, audio, and language by combining Llama 3.2, V-JEPA 2, and wav2vec 2.0 into a shared transformer model. The system was trained on 451.6 hours of actual fMRI brain-scan data from 25 people, then evaluated on 1,117.7 hours of data from 720 people, predicting activity across more than 20,000 cortical points with remarkable precision. What's striking is that Tribe V2 can make zero-shot predictions for new people without additional training, and in some cases those predictions capture average group brain responses better than real individual recordings do. Given just one hour of new data, the model improves significantly and beats older linear encoding methods by two to four times. Meta says the model can run brain experiments in silico, and that it naturally organized itself around five major brain networks: auditory, language, motion, default mode, and visual processing.
Gwen Claw Builds AI Agents That Actually Finish Tasks
Gwen Claw attacks one of AI's core problems: most agents sound competent in conversation but lose track halfway through real work when priorities shift or tasks change. Gwen Claw uses a three-layer memory system, with a stable identity layer, a long-term background layer, and a dynamic trajectory layer, to keep broader context, working history, and current task state available simultaneously. It also uses context slimming to cut junk data while preserving important details, preventing the system from drowning in its own context or running up massive token costs. Unlike most agents that work in isolated demo environments, Gwen Claw operates directly in your local browser, where it can use real login states, cookies, and cached information like a normal user would. The killer feature is self-evolution: instead of staying frozen after launch, Gwen Claw logs failures and negative feedback, analyzes root causes, and turns those insights into targeted improvements, so the agent gets smarter through repeated real use. It integrates with Huawei, ChatGPT, Telegram, WhatsApp, and web access, and supports private deployment for companies worried about data control.
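The three-layer memory plus context-slimming idea can be sketched in a few lines. This is a hypothetical illustration, not Gwen Claw's actual implementation; the class, method names, and character-count budget are assumptions standing in for whatever token accounting the real system uses:

```python
from dataclasses import dataclass, field

@dataclass
class ThreeLayerMemory:
    identity: str                                    # stable layer: who the agent is, fixed goals
    background: list = field(default_factory=list)   # long-term layer: durable user facts
    trajectory: list = field(default_factory=list)   # dynamic layer: current task state

    def remember(self, fact: str, durable: bool = False):
        (self.background if durable else self.trajectory).append(fact)

    def slim(self, budget: int) -> list:
        """Context slimming: identity and background always survive;
        trajectory entries are dropped oldest-first when over budget."""
        keep = [self.identity, *self.background]
        room = budget - sum(len(s) for s in keep)
        kept = []
        for entry in reversed(self.trajectory):      # newest entries first
            if len(entry) <= room:
                kept.append(entry)
                room -= len(entry)
        return keep + list(reversed(kept))

mem = ThreeLayerMemory(identity="Browser agent for travel booking")
mem.remember("User prefers aisle seats", durable=True)
mem.remember("Step 1: opened airline site")
mem.remember("Step 2: logged in with saved cookies")
context = mem.slim(budget=120)   # everything fits at this budget
print(context)
```

The design choice being illustrated: when the budget shrinks, only the dynamic trajectory layer is trimmed, so the agent can forget intermediate steps without forgetting who it is or what the user cares about.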
Alibaba's New RISC-V Chip Sidesteps US Export Controls
Alibaba revealed the Schwantc950, a CPU designed specifically for AI agent workloads. While most attention goes to GPUs for training, Alibaba is betting that CPUs matter more than people think, especially for inference, where agents run multi-step actions. The chip claims a performance improvement of more than 30% over mainstream competitors because it can be customized for specific inference patterns. The strategic move is that the Schwantc950 uses the open RISC-V architecture instead of proprietary Arm designs, which means Alibaba avoids licensing fees and stays independent of Western chip designs. This matters because Chinese companies face heavy US export restrictions on advanced Nvidia GPUs. Alibaba doesn't sell these chips to other companies; instead it uses them internally to strengthen its own cloud AI services. Analysts say the real value isn't immediate revenue but building supply chain resilience, reducing costs, and maintaining control over AI computing power at a time when access is becoming harder to secure.