
The Gradient: Perspectives on AI

Daniel Bashir
Deeply researched, technical interviews with experts thinking about AI and technology. thegradientpub.substack.com

Available episodes

5 of 147
  • 2024 in AI, with Nathan Benaich
    Episode 142

    Happy holidays! This is one of my favorite episodes of the year — for the third time, Nathan Benaich and I did our yearly roundup of all the AI news and advancements you need to know. This includes selections from this year’s State of AI Report, some early takes on o3, and a few minutes LARPing as China Guys…

    If you’ve stuck around and continue to listen, I’m really thankful you’re here. I love hearing from you.

    You can find Nathan and Air Street Press here on Substack and on Twitter, LinkedIn, and his personal site. Check out his writing at press.airstreet.com. Find me on Twitter (or LinkedIn if you want…) for updates on new episodes, and reach me at [email protected] for feedback, ideas, and guest suggestions.

    Outline:
    * (00:00) Intro
    * (01:00) o3 and model capabilities + “reasoning” capabilities
    * (05:30) Economics of frontier models
    * (09:24) Air Street’s year and industry shifts: product-market fit in AI, major developments in science/biology, “vibe shifts” in defense and robotics
    * (16:00) Investment strategies in generative AI: how to evaluate and invest in AI companies
    * (19:00) The future of BioML and scientific progress: on AlphaFold 3, evaluation challenges, and the need for cross-disciplinary collaboration
    * (32:00) The “AGI” question and technology diffusion: Nathan’s take on “AGI” and timelines, technology adoption, and the gap between capabilities and real-world impact
    * (39:00) Differential economic impacts from AI, tech diffusion
    * (43:00) Market dynamics and competition
    * (50:00) DeepSeek and global AI innovation
    * (59:50) A robotics renaissance? Robotics coming back into focus, plus advances in vision-language models and real-world applications
    * (1:05:00) Compute infrastructure: NVIDIA’s dominance, GPU availability, and the competitive landscape in AI compute
    * (1:12:00) Industry consolidation: partnerships, acquisitions, and regulatory concerns in AI
    * (1:27:00) Global AI politics and regulation: international AI governance and varying approaches
    * (1:35:00) The regulatory landscape
    * (1:43:00) 2025 predictions
    * (1:48:00) Closing

    Links and Resources

    From Air Street Press:
    * The State of AI Report
    * The State of Chinese AI
    * Open-endedness is all we’ll need
    * There is no scaling wall: in discussion with Eiso Kant (Poolside)
    * Alchemy doesn’t scale: the economics of general intelligence
    * Chips all the way down
    * The AI energy wars will get worse before they get better

    Other highlights/resources:
    * Deepseek: The Quiet Giant Leading China’s AI Race — an interview with DeepSeek CEO Liang Wenfeng via ChinaTalk, translated by Jordan Schneider, Angela Shen, Irene Zhang, and others
    * A great position paper on open-endedness by Minqi Jiang, Tim Rocktäschel, and Ed Grefenstette — Minqi also wrote a blog post on this for us!
    * For China Guys only: China’s AI Regulations and How They Get Made by Matt Sheehan (plus an interview I did with Matt in 2022!)
    * The Simple Macroeconomics of AI by Daron Acemoglu, plus a critique by Maxwell Tabarrok (more links in the Report)
    * AI Nationalism by Ian Hogarth (from 2018)
    * Some analysis on the EU AI Act and regulation from Lawfare

    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:48:43
  • Philip Goff: Panpsychism as a Theory of Consciousness
    Episode 141

    I spoke with Professor Philip Goff about:
    * What a “post-Galilean” science of consciousness looks like
    * How panpsychism helps explain consciousness, and the hybrid cosmopsychist view

    Enjoy!

    Philip Goff is a British author, idealist philosopher, and professor at Durham University whose research focuses on philosophy of mind and consciousness. Specifically, his work examines how consciousness can be part of the scientific worldview. He is the author of multiple books, including Consciousness and Fundamental Reality; Galileo’s Error: Foundations for a New Science of Consciousness; and Why? The Purpose of the Universe.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, and guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (01:05) Goff vs. Carroll on the Knowledge Arguments and explanation
    * (08:00) Preferences for theories
    * (12:55) Curiosity (Grounding, Essence) and the Knowledge Argument
    * (14:40) Phenomenal transparency and physicalism vs. anti-physicalism
    * (29:00) How exactly does panpsychism help explain consciousness?
    * (30:05) The argument for hybrid cosmopsychism
    * (36:35) “Bare” subjects / subjects before inheriting phenomenal properties
    * (40:35) Bundle theories of the self
    * (43:35) Fundamental properties and new subjects as causal powers
    * (50:00) Integrated Information Theory
    * (55:00) Fundamental assumptions in hybrid cosmopsychism
    * (1:00:00) Outro

    Links:
    * Philip’s homepage and Twitter
    * Papers:
      * Putting Consciousness First
      * Curiosity (Grounding, Essence) and the Knowledge Argument

    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:00:04
  • Some Changes at The Gradient
    Hi everyone!

    If you’re a new subscriber or listener, welcome. If you’re not new, you’ve probably noticed that things have slowed down from us a bit recently. Hugh Zhang, Andrey Kurenkov, and I sat down to recap some of The Gradient’s history, where we are now, and how things will look going forward. To summarize and give some context:

    The Gradient has been around for about six years now. We began as an online magazine, and started producing our own newsletter and podcast about four years ago. With a team of volunteers — we take in a bit of money through Substack that we use for subscriptions to tools we need, and we try to pay ourselves a bit — we’ve been able to keep this going for quite some time. Our team has less bandwidth than we’d like right now (and I’ll admit that at least some of us are running on fumes…), so we’ll be making a few changes:

    * Magazine: We’re going to scale down our editing work on the magazine. While we won’t be accepting pitches for unwritten drafts for now, if you have a full piece you’d like to pitch to us, we’ll consider posting it. If you’ve reached out about writing and haven’t heard from us, we’re really sorry. We’ve tried a few different arrangements to manage the pipeline of articles we have, but it’s been difficult to make it work. We still want this to be a place to promote good work and writing from the ML community, so we intend to continue using this Substack for that purpose. If we have more editing bandwidth on our team in the future, we want to continue doing that work.
    * Newsletter: We’ll aim to continue the newsletter as before, but with a “Best from the Community” section highlighting posts. We’ll have a way for you to send articles you want featured; for now, you can reach us at [email protected].
    * Podcast: I’ll be continuing this (at a slower pace), but will eventually transition it away from The Gradient given its expanded range. If you’re interested in following along, it might be worth subscribing on another player like Apple Podcasts or Spotify, or using the RSS feed.
    * Sigmoid Social: We’ll keep this alive as long as there’s financial support for it.

    If you like what we do and/or want to help us out in any way, do reach out to [email protected]. We love hearing from you.

    Timestamps:
    * (0:00) Intro
    * (01:55) How The Gradient began
    * (03:23) Changes and announcements
    * (10:10) More Gradient history! On our involvement, favorite articles, and some plugs

    Some of our favorite articles! There are so many, so this is very much a non-exhaustive list:
    * NLP’s ImageNet moment has arrived
    * The State of Machine Learning Frameworks in 2019
    * Why transformative artificial intelligence is really, really hard to achieve
    * An Introduction to AI Story Generation
    * The Artificiality of Alignment (I didn’t mention this one in the episode, but it should be here)

    Places you can find us!

    Hugh:
    * Twitter
    * Personal site
    * Papers/things mentioned:
      * A Careful Examination of LLM Performance on Grade School Arithmetic (GSM1k)
      * Planning in Natural Language Improves LLM Search for Code Generation
      * Humanity’s Last Exam

    Andrey:
    * Twitter
    * Personal site
    * Last Week in AI Podcast

    Daniel:
    * Twitter
    * Substack blog
    * Personal site (under construction)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    34:25
  • Jacob Andreas: Language, Grounding, and World Models
    Episode 140

    I spoke with Professor Jacob Andreas about:
    * Language and the world
    * World models
    * How he’s developed as a scientist

    Enjoy!

    Jacob is an associate professor at MIT in the Department of Electrical Engineering and Computer Science as well as the Computer Science and Artificial Intelligence Laboratory. His research aims to understand the computational foundations of language learning, and to build intelligent systems that can learn from human guidance. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has received a Sloan fellowship, an NSF CAREER award, MIT’s Junior Bose and Kolokotrones teaching awards, and paper awards at ACL, ICML and NAACL.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (00:40) Jacob’s relationship with grounding fundamentalism
    * (05:21) Jacob’s reaction to LLMs
    * (11:24) Grounding language — is there a philosophical problem?
    * (15:54) Grounding and language modeling
    * (24:00) Analogies between humans and LMs
    * (30:46) Grounding language with points and paths in continuous spaces
    * (32:00) Neo-Davidsonian formal semantics
    * (36:27) Evolving assumptions about structure prediction
    * (40:14) Segmentation and event structure
    * (42:33) How much do word embeddings encode about syntax?
    * (43:10) Jacob’s process for studying scientific questions
    * (45:38) Experiments and hypotheses
    * (53:01) Calibrating assumptions as a researcher
    * (54:08) Flexibility in research
    * (56:09) Measuring Compositionality in Representation Learning
    * (56:50) Developing an independent research agenda and developing a lab culture
    * (1:03:25) Language Models as Agent Models
    * (1:04:30) Background
    * (1:08:33) Toy experiments and interpretability research
    * (1:13:30) Developing effective toy experiments
    * (1:15:25) Language Models, World Models, and Human Model-Building
    * (1:15:56) OthelloGPT’s bag of heuristics and multiple “world models”
    * (1:21:32) What is a world model?
    * (1:23:45) The Big Question — from meaning to world models
    * (1:28:21) From “meaning” to precise questions about LMs
    * (1:32:01) Mechanistic interpretability and reading tea leaves
    * (1:35:38) Language and the world
    * (1:38:07) Towards better language models
    * (1:43:45) Model editing
    * (1:45:50) On academia’s role in NLP research
    * (1:49:13) On good science
    * (1:52:36) Outro

    Links:
    * Jacob’s homepage and Twitter
    * Language Models, World Models, and Human Model-Building
    * Papers:
      * Semantic Parsing as Machine Translation (2013)
      * Grounding language with points and paths in continuous spaces (2014)
      * How much do word embeddings encode about syntax? (2014)
      * Translating neuralese (2017)
      * Analogs of linguistic structure in deep representations (2017)
      * Learning with latent language (2018)
      * Learning from Language (2018)
      * Measuring Compositionality in Representation Learning (2019)
      * Experience grounds language (2020)
      * Language Models as Agent Models (2022)

    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:52:43
  • Evan Ratliff: Our Future with Voice Agents
    Episode 139

    I spoke with Evan Ratliff about:
    * Shell Game, Evan’s new podcast, where he creates an AI voice clone of himself and sets it loose
    * The end of the Longform Podcast and his thoughts on the state of journalism

    Enjoy!

    Evan is an award-winning investigative journalist, bestselling author, podcast host, and entrepreneur. He’s the author of The Mastermind: A True Story of Murder, Empire, and a New Kind of Crime Lord; the writer and host of the hit podcasts Shell Game and Persona: The French Deception; and the cofounder of The Atavist Magazine, Pop-Up Magazine, and the Longform Podcast. As a writer, he’s a two-time National Magazine Award finalist. As an editor and producer, he’s a two-time Emmy nominee and National Magazine Award winner.

    Find me on Twitter for updates on new episodes, and reach me at [email protected] for feedback, ideas, and guest suggestions.

    Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
    Follow The Gradient on Twitter

    Outline:
    * (00:00) Intro
    * (01:05) Evan’s ambitious and risky projects
    * (04:45) Wearing different personas as a journalist
    * (08:31) Boundaries and acceptability in using voice agents
    * (11:42) Impacts on other people
    * (13:12) “The kids these days” — how will new technologies impact younger people?
    * (17:12) Evan’s approach to children’s technology use
    * (20:05) Techno-solutionism and improvements in medicine and childcare
    * (24:15) Evan’s perspective on simulations of people
    * (27:05) On motivations for building tech startups
    * (30:42) Evan’s outlook for Shell Game’s impact and motivations for his work
    * (36:05) How Evan decided to write for a career
    * (40:02) How voice agents might impact our conversations
    * (43:52) Evan’s experience with Longform and podcasting
    * (47:15) Perspectives on doing good interviews
    * (52:11) Mimicking and inspiration, developing style
    * (57:15) Writers and their motivations, the state of longform journalism
    * (1:06:15) The internet and writing
    * (1:09:41) On the ending of Longform
    * (1:19:48) Outro

    Links:
    * Evan’s homepage and Twitter
    * Shell Game, Evan’s new podcast
    * Longform Podcast

    Get full access to The Gradient at thegradientpub.substack.com/subscribe
    --------  
    1:19:59


About The Gradient: Perspectives on AI

Deeply researched, technical interviews with experts thinking about AI and technology. thegradientpub.substack.com
Podcast website
