Deep Dive
Why chips matter more than oil
InvestAnswers frames semiconductors as a zero-sum geopolitical struggle, not a mere technology race. Chips power everything in the 21st century — electrical grids, military systems, autonomous vehicles, even pet ID tags — making control over chip supply and design the clearest predictor of which nation wins future conflicts. The speaker emphasizes that Taiwan and TSMC represent a single point of failure for the entire global AI infrastructure. The current administration understands this vulnerability, hence the push toward domestic chip manufacturing. But the West is losing ground. China is aggressively building its own stack at massive scale, driven by energy abundance and manufacturing capability. Without a Manhattan Project-level commitment from the US and Europe, the speaker warns we'll all end up dependent on Chinese AI systems within five years. This isn't hyperbole — it's the calculus reshaping trillion-dollar government spending and corporate R&D priorities.
Nvidia's dominance and the challengers emerging
Jensen Huang and Nvidia hold the throne right now with their Blackwell chip, which the speaker calls the universal brain powering every major AI model, hyperscaler, and sovereign AI deployment globally. Blackwell's real advantage isn't just hardware but CUDA: hundreds of billions in cumulative software investment across libraries, frameworks, and ecosystems that took 15 years to build and would take competitors five to ten years to replicate even with AI assistance. However, rivals are striking back. Google has TPUs, Amazon has Trainium, and Anthropic just tripled its valuation to $183 billion by making a 3.5-gigawatt TPU bet for its LLM backbone. Anthropic is the exception to Nvidia's dominance. But Google's TPUs and Amazon's Trainium still can't match Nvidia head-to-head on benchmarks. The real problem for everyone, including Nvidia, is TSMC, the Taiwanese semiconductor foundry that manufactures these designs. Nvidia has locked up over 50% of TSMC's future capacity, leaving competitors fighting over scraps.
Tesla's chip strategy and edge inference dominance
Elon Musk has been designing custom chips for years because Nvidia's offerings were too expensive, consumed too much power, and didn't fit Tesla's software stack. Now Tesla has taped out the AI5, a single-chip system-on-chip (SoC) with onboard memory that is far more powerful than the two separate AI4 chips currently in vehicles like the Cybertruck and Model Y. The kicker: AI5 uses one-third the power of Blackwell at 10% of the cost, and it's optimized for inference at the edge, inside cars and robots, not cloud data centers. Tesla is already working on AI6, which will go into more advanced Optimus robots. The timeline matters: AI5 in 2027, AI6 in 2028, followed by D3 (radiation-hardened space chips) in 2029 for Starlink and space-based data centers. Tesla and Intel are partnering to ramp manufacturing. But the real wild card is Terrafab, a 100-million-square-foot chip fab under construction in Texas, 10 times larger than any existing Tesla Gigafactory. The speaker quips it will be visible from the moon; it's funded by Tesla, SpaceX, and xAI combined.
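The power and cost claims can be sanity-checked with simple arithmetic. A minimal sketch, assuming only the summary's ratios (one-third the power, 10% of the cost) applied to a hypothetical Blackwell baseline; the baseline wattage and price below are illustrative placeholders, not figures from the source:

```python
# Back-of-envelope comparison of the claimed AI5 vs. Blackwell figures.
# Baseline numbers are hypothetical placeholders, not official specs.

BLACKWELL_POWER_W = 1000.0    # assumed per-chip power draw (illustrative)
BLACKWELL_COST_USD = 30000.0  # assumed per-chip cost (illustrative)

# Ratios quoted in the summary: one-third the power, 10% of the cost.
ai5_power_w = BLACKWELL_POWER_W / 3
ai5_cost_usd = BLACKWELL_COST_USD * 0.10

# How many times more AI5 silicon fits in one Blackwell's power/cost budget.
power_advantage = BLACKWELL_POWER_W / ai5_power_w   # ~3x
cost_advantage = BLACKWELL_COST_USD / ai5_cost_usd  # ~10x

print(f"AI5 power: {ai5_power_w:.0f} W ({power_advantage:.0f}x less)")
print(f"AI5 cost: ${ai5_cost_usd:,.0f} ({cost_advantage:.0f}x cheaper)")
```

Whatever the true baseline, the ratios compound: a fleet operator buying for a fixed power and dollar budget could deploy far more edge inference per rack (or per vehicle) than with data-center parts, which is the economic core of the speaker's argument.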
The inference-first future reshaping compute
The speaker has been hammering on inference for two years, and now the entire market is catching up. Today's data centers are built for training massive models in the cloud. But the world is moving to physical AGI, meaning inference has to happen inside the machine, whether that's a robot, car, drone, or spacecraft. Reaction time and decision-making must be instantaneous and local. Nvidia dominates training in cloud data centers, but Tesla is betting the future is 99% inference and 1% learning. McKinsey recently validated this thesis: AI inference will see the most explosive growth over the next four to five years. Here's the wild card: Tesla owns a fleet of millions of vehicles, each with AI4 and soon AI5 chips. Cars sit idle most of the day. What if Tesla turns those chips into a mobile distributed supercomputer, allowing car owners to earn money by renting their vehicle's processing power when parked? Wall Street hasn't priced this in, but the speaker has modeled it into his price predictions for three years. Tesla could own the largest distributed supercomputer on Earth and in space.
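The distributed-fleet idea can be sized with a rough calculation. Every input below is a hypothetical assumption chosen for illustration (fleet size, per-vehicle throughput, idle time, opt-in rate); none comes from the source:

```python
# Rough sizing of a hypothetical Tesla fleet as a distributed inference pool.
# All inputs are illustrative assumptions, not figures from the video.

fleet_vehicles = 7_000_000   # assumed connected vehicles
tops_per_vehicle = 500       # assumed per-vehicle inference throughput (TOPS)
idle_fraction = 0.9          # cars parked ~90% of the day (assumption)
participation = 0.25         # share of owners opting in (assumption)

effective_vehicles = fleet_vehicles * participation
aggregate_tops = effective_vehicles * tops_per_vehicle * idle_fraction

# Convert to exa-ops/s: 1 TOPS = 1e12 ops/s, 1 exa-op/s = 1e18 ops/s.
aggregate_exaops = aggregate_tops * 1e12 / 1e18

print(f"Participating vehicles: {effective_vehicles:,.0f}")
print(f"Aggregate idle compute: ~{aggregate_exaops:,.1f} exa-ops/s")
```

The point of the sketch is not the specific total but the scaling: aggregate capacity grows linearly with fleet size and opt-in rate, which is why a vehicle fleet could plausibly rival purpose-built clusters, setting aside the unmodeled hard parts (bandwidth, latency, scheduling, and battery wear).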
The semiconductor ecosystem and investment thesis
The speaker reveals a chart showing the top earnings growers over the next three years: Tesla, Palantir, Broadcom, Micron, SanDisk, Amazon, Nvidia, ASML, Meta, and others. He owns the top four. Crucially, nine of the top 13 fastest-growing companies are semiconductor or chip-adjacent. This is no coincidence: chips are the new oil. There are six categories to own: chip designers (Tesla, AMD, Broadcom, Nvidia, Apple), foundries (TSMC, Samsung, Intel), memory specialists (Micron, Samsung), manufacturing equipment (ASML), integrated device manufacturers (Intel, Texas Instruments, Samsung), and critical infrastructure players (ARM). The semiconductor market is projected to reach $1.6 trillion by 2030 at 55% compound annual growth, driven by physical AI. The speaker's core message: position yourself in this sector before valuations explode. By 2028, chip stocks will be too expensive. Massive disruption and casualties will happen along the way, but those who own exposure to edge inference and distributed compute will see generational returns. He predicts the first hundred-trillion-dollar company will emerge from this space if execution succeeds.
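The growth figures can be cross-checked with the standard compound-growth formula. Assuming a roughly $630 billion market in 2024 as the base year (an assumption for illustration, not from the source), the growth rate implied by a $1.6 trillion 2030 target comes out well below 55%, so the two quoted numbers should be treated with care:

```python
# Implied CAGR from an assumed 2024 base to the quoted 2030 projection.
# The $630B base-year figure is an assumption for illustration.

base_2024_usd_bn = 630.0      # assumed 2024 semiconductor market size ($B)
target_2030_usd_bn = 1600.0   # projection quoted in the summary ($B)
years = 2030 - 2024

implied_cagr = (target_2030_usd_bn / base_2024_usd_bn) ** (1 / years) - 1
print(f"Implied CAGR to hit $1.6T by 2030: {implied_cagr:.1%}")

# Conversely, 55% annual growth from the same base overshoots badly.
at_55_pct = base_2024_usd_bn * 1.55 ** years
print(f"$630B compounding at 55%/yr for {years} years: ${at_55_pct:,.0f}B")
```

Under this assumed base, the implied growth rate is in the high teens per year, which is still dramatic for a trillion-dollar industry, and arguably a stronger version of the thesis than an arithmetic that does not hold together.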