Dwarkesh Patel × Jensen Huang Interview — Key Questions & Timestamps https://www.youtube.com/watch?v=Hrbq66XqtCo
-
Commoditization (00:06) — If software gets commoditized by AI, does Nvidia's hardware business get commoditized as well, since TSMC and others handle the actual manufacturing?
-
Scarce Components (04:28) — With over $100B in purchase commitments, is Nvidia's true moat simply locking up years of scarce components (logic and memory) so competitors can't physically build their accelerators?
-
Scaling Upstream (08:32) — How do you keep doubling production year over year when you already consume the vast majority of TSMC's advanced-node capacity? Are we entering a regime where AI compute growth has to slow because of the upstream supply chain?
-
EUV Bottlenecks (13:58) — How do you physically manufacture twice the logic when you are fundamentally bottlenecked by the production of EUV machines? Do you go directly to ASML to push for more capacity?
-
The Threat of TPUs (16:25) — Given that frontier models like Claude and Gemini were trained on Google's TPUs, what does that mean for Nvidia going forward?
-
Specialized vs. Flexible Architectures (20:01) — If AI workloads are primarily highly predictable matrix multiplies, isn't a TPU—which is optimized purely for that without giving up die area for flexible thread scheduling—better suited for the bulk of this growth than a GPU?
-
Hyperscalers Building Replacements (24:50) — If your biggest hyperscaler customers (Google, Amazon, etc.) have the resources to write their own custom kernels instead of using CUDA, to what extent is CUDA really the moat keeping frontier AI on Nvidia hardware?
-
Sustaining Margins (29:18) — Can Nvidia sustain its massive 70%+ gross margins if hyperscalers can afford to build their own hardware and software stacks, and the buying decision just comes down to raw price-to-performance?
-
Defecting to Custom Silicon (36:26) — If Nvidia's performance per watt is indeed the best, why are companies like Anthropic and Google signing multi-gigawatt deals for TPUs and other custom accelerators?
-
Missed Early Investments (41:07) — Nvidia had the cash, and it gave OpenAI and Anthropic compute when they were worth a fraction of their current valuations. Why didn't Nvidia invest much earlier, or simply become a foundation-model lab itself?
-
Why Not Build a Cloud? (43:21) — Given Nvidia's cash reserves and the heavy upfront capex that AI chips require, why doesn't Nvidia just become a hyperscaler itself instead of backing "neo-clouds" like CoreWeave?
-
Picking Winners (47:23) — You stated that you don't pick winners among AI startups, but you also noted that certain neo-clouds wouldn't exist without Nvidia's backing. How are those two stances compatible?
-
GPU Allocation (54:07) — When deciding who gets scarce GPUs, why do you allocate them based on who is ready rather than simply selling to the highest bidder?
(Dwarkesh plays devil's advocate here and spends a large portion of the interview pressing Jensen on this topic.)
-
The Security Threat (57:39) — If Chinese labs get enough chips to train highly capable cyber-offensive models (like Anthropic's Claude Mythos), wouldn't that be a massive threat to American security?
-
The FLOP Disparity (1:03:41) — Doesn't it matter that the US reaches frontier capabilities first because of its massive compute advantage, giving it time to patch vulnerabilities before China catches up?
-
Manufacturing Constraints (1:05:45) — Can China actually manufacture enough chips on 7nm nodes to keep up, and does simply throwing an abundance of energy at older nodes really make up the difference?
-
The Boeing Analogy (1:17:19) — Quoting Dario Amodei, Dwarkesh asks: if the compute powers a dangerous, weapon-like capability, isn't selling China the chips analogous to selling them the missile casings?
-
Winning in the Long Term (1:19:13) — How does selling chips to China now actually help the US win the technology race in the long term?
-
Contradictory Statements (1:24:19) — Dwarkesh asks Jensen to clarify a perceived contradiction: how can the US win because its chips are vastly better, while Jensen simultaneously argues that China will easily build the same capabilities itself if Nvidia leaves the market?
-
Acknowledging the Cost (1:25:52) — Dwarkesh repeatedly pushes Jensen to simply acknowledge that giving an adversary the compute to train offensive models carries a severe potential cost.
-
Going Backwards to Meet Demand (1:35:07) — If Nvidia runs out of advanced N3 capacity, could you ever see a world before 2030 where you go back to older N7 nodes just to pump out more volume to meet insatiable AI demand?
-
Parallel Architectures (1:36:43) — Why doesn't Nvidia take its massive engineering resources and run totally different chip architectures in parallel (like a wafer-scale chip, or one without CUDA) to hedge against AI paradigms shifting?
-
The "No Deep Learning" Scenario (1:39:39) — If the deep learning revolution had never happened, what would Nvidia be doing today besides just powering video games?