How Will We Cooperate with AIs? (with Allison Duettmann)
On this episode, Allison Duettmann joins me to discuss centralized versus decentralized AI, how international governance could shape AI’s trajectory, how we might cooperate with future AIs, and the role of AI in improving human decision-making. We also explore which lessons from history apply to AI, the future of space law and property rights, whether technology is invented or discovered, and how AI will impact children. You can learn more about Allison's work at: https://foresight.org Timestamps: 00:00:00 Preview 00:01:07 Centralized AI versus decentralized AI 00:13:02 Risks from decentralized AI 00:25:39 International AI governance 00:39:52 Cooperation with future AIs 00:53:51 AI for decision-making 01:05:58 Capital intensity of AI 01:09:11 Lessons from history 01:15:50 Future space law and property rights 01:27:28 Is technology invented or discovered? 01:32:34 Children in the age of AI
--------
1:36:02
Brain-Like AGI and Why It's Dangerous (with Steven Byrnes)
On this episode, Steven Byrnes joins me to discuss brain-like AGI safety. We discuss learning versus steering systems in the brain, the distinction between controlled AGI and social-instinct AGI, why brain-inspired approaches might be our most plausible route to AGI, and honesty in AI models. We also talk about how people can contribute to brain-like AGI safety and compare various AI safety strategies. You can learn more about Steven's work at: https://sjbyrnes.com/agi.html Timestamps: 00:00 Preview 00:54 Brain-like AGI Safety 13:16 Controlled AGI versus Social-instinct AGI 19:12 Learning from the brain 28:36 Why is brain-like AI the most likely path to AGI? 39:23 Honesty in AI models 44:02 How to help with brain-like AGI safety 53:36 AI traits with both positive and negative effects 01:02:44 Different AI safety strategies
--------
1:13:13
How Close Are We to AGI? Inside Epoch's GATE Model (with Ege Erdil)
On this episode, Ege Erdil from Epoch AI joins me to discuss their new GATE model of AI development, what evolution and brain efficiency tell us about AGI requirements, how AI might impact wages and labor markets, and what it takes to train models with long-term planning. Toward the end, we dig into Moravec’s Paradox, which jobs are most at risk of automation, and what could change Ege's current AI timelines. You can learn more about Ege's work at https://epoch.ai Timestamps: 00:00:00 Preview and introduction 00:02:59 Compute scaling and automation: the GATE model 00:13:12 Evolution, brain efficiency, and AGI compute requirements 00:29:49 Broad automation vs. R&D-focused AI deployment 00:47:19 AI, wages, and labor market transitions 00:59:54 Training agentic models and long-term planning capabilities 01:06:56 Moravec’s Paradox and automation of human skills 01:13:59 Which jobs are most vulnerable to AI? 01:33:00 Timeline extremes: what could change AI forecasts?
--------
1:34:33
Special: Defeating AI Defenses (with Nicholas Carlini and Nathan Labenz)
In this special episode, we feature Nathan Labenz interviewing Nicholas Carlini on the Cognitive Revolution podcast. Nicholas Carlini is a security researcher at Google DeepMind who has published extensively on adversarial machine learning and cybersecurity. Carlini discusses his pioneering work on adversarial attacks against image classifiers and the challenges of ensuring neural network robustness. He examines the difficulties of defending against such attacks, the role of human intuition in his approach, open-source AI, and the potential for scaling AI security research. Timestamps: 00:00 Nicholas Carlini's contributions to cybersecurity 08:19 Understanding attack strategies 29:39 High-dimensional spaces and attack intuitions 51:00 Challenges in open-source model safety 01:00:11 Unlearning and fact editing in models 01:10:55 Adversarial examples and human robustness 01:37:03 Cryptography and AI robustness 01:55:51 Scaling AI security research
--------
2:23:12
Keep the Future Human (with Anthony Aguirre)
On this episode, I interview Anthony Aguirre, Executive Director of the Future of Life Institute, about his new essay Keep the Future Human: https://keepthefuturehuman.ai AI companies are explicitly working toward AGI and are likely to succeed soon, possibly within years. Keep the Future Human explains how unchecked development of smarter-than-human, autonomous, general-purpose AI systems will almost inevitably lead to human replacement. But it doesn't have to. Learn how we can keep the future human and experience the extraordinary benefits of Tool AI... Timestamps: 00:00 What situation is humanity in? 05:00 Why AI progress is fast 09:56 Tool AI instead of AGI 15:56 The incentives of AI companies 19:13 Governments can coordinate a slowdown 25:20 The need for international coordination 31:59 Monitoring training runs 39:10 Do reasoning models undermine compute governance? 49:09 Why isn't alignment enough? 59:42 How do we decide if we want AGI? 01:02:18 Disagreement about AI 01:11:12 The early days of AI risk
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change.
The Institute's work is made up of three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, US government, and European Union institutions.
FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.