Interconnects

Nathan Lambert
Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories.

Available episodes

5 of 69
  • (Voiceover) OpenAI's o3: The grand finale of AI in 2024
    Original post: https://www.interconnects.ai/p/openais-o3-the-2024-finale-of-ai
    Chapters:
    00:00 Introduction
    02:51 o3 overview
    05:57 Solving the Abstraction and Reasoning Corpus (ARC)
    10:41 o3's architecture, cost, and training (hint: still no tree search)
    16:36 2024: RL returns
    Figures:
    Fig 1, Frontier Math results
    Fig 2, Coding results
    Fig 3, ARC AGI results
    Fig 4, ARC AGI result details
    Fig 5, ARC AGI example 1
    Fig 6, ARC AGI example in text
    Fig 7, ARC AGI example "easy"
    Get full access to Interconnects at www.interconnects.ai/subscribe
    --------  
    17:58
  • (Voiceover) The AI agent spectrum
    Original post: https://www.interconnects.ai/p/the-ai-agent-spectrum
    Chapters:
    00:00 Introduction
    03:24 Agent cartography
    08:02 Questions for the near future
    Figures:
    Fig 1. multiple feedbacks diagram
    Get full access to Interconnects at www.interconnects.ai/subscribe
    --------  
    11:00
  • (Voiceover) OpenAI's Reinforcement Finetuning and RL for the masses
    Original post: https://www.interconnects.ai/p/openais-reinforcement-finetuning
    Chapters:
    00:00 Introduction
    04:19 The impact of reinforcement finetuning's existence
    07:29 Hypotheses on reinforcement finetuning's implementation
    Figures:
    Fig. 1, Yann's Cake
    Fig. 2, Grader config
    Fig. 3, RLVR learning curves
    Get full access to Interconnects at www.interconnects.ai/subscribe
    --------  
    12:40
  • Interviewing Finbarr Timbers on the "We are So Back" Era of Reinforcement Learning
    Finbarr Timbers is an AI researcher who writes Artificial Fintelligence, one of the technical AI blogs I have been recommending for a long time, and he has experience at top AI labs including DeepMind and Midjourney. The goal of this interview was to do a few things:
    * Revisit what reinforcement learning (RL) actually is, its origins, and its motivations.
    * Contextualize the major breakthroughs of deep RL in the last decade, from DQN for Atari to AlphaZero to ChatGPT. How could we have seen the resurgence coming? (See the timeline below for the major events we cover.)
    * Modern uses for RL, o1, RLHF, and the future of finetuning all ML models.
    * Address some of the critiques like "RL doesn't work yet."
    It was a fun one. Listen on Apple Podcasts, Spotify, YouTube, and wherever you get your podcasts. For other Interconnects interviews, go here.
    Timeline of RL and what was happening at the time
    In the last decade of deep RL, there have been a few phases:
    * Era 1: Deep RL fundamentals, when modern algorithms were designed and proven.
    * Era 2: Major projects: AlphaZero, OpenAI Five, and all the projects that put RL on the map.
    * Era 3: Slowdown, when DeepMind and OpenAI no longer had the major RL projects and cultural relevance declined.
    * Era 4: RLHF and widening success: RL's new life post ChatGPT.
    The following events cover these eras. The list is incomplete, but enough to inspire a conversation.
    * Early era: TD-Gammon, REINFORCE, etc.
    * 2013: Deep Q-Learning (Atari)
    * 2014: Google acquires DeepMind
    * 2016: AlphaGo defeats Lee Sedol
    * 2017: PPO paper, AlphaZero (no human data)
    * 2018: OpenAI Five, GPT-2
    * 2019: AlphaStar, early papers on robotic sim2real with RL (see blog post)
    * 2020: MuZero
    * 2021: Decision Transformer
    * 2022: ChatGPT, sim2real continues
    * 2023: Scaling laws for RL (blog post), doubt of RL
    * 2024: o1, post-training, RL's bloom
    Interconnects is a reader-supported publication. Consider becoming a subscriber.
    Chapters:
    [00:00:00] Introduction
    [00:02:14] Reinforcement Learning Fundamentals
    [00:09:03] The Bitter Lesson
    [00:12:07] Reward Modeling and Its Challenges in RL
    [00:16:03] Historical Milestones in Deep RL
    [00:21:18] OpenAI Five and Challenges in Complex RL Environments
    [00:25:24] Recent-ish Developments in RL: MuZero, Decision Transformer, and RLHF
    [00:30:29] OpenAI's o1 and Exploration in Language Models
    [00:40:00] Tülu 3 and Challenges in RL Training for Language Models
    [00:46:48] Comparing Different AI Assistants
    [00:49:44] Management in AI Research
    [00:55:30] Building Effective AI Teams
    [01:01:55] The Need for Personal Branding
    We mention:
    * o1 (OpenAI model)
    * Rich Sutton
    * University of Alberta
    * London School of Economics
    * IBM's Deep Blue
    * Alberta Machine Intelligence Institute (AMII)
    * John Schulman
    * Claude (Anthropic's AI assistant)
    * Logan Kilpatrick
    * Bard (Google's AI assistant)
    * DeepSeek R1 Lite
    * Scale AI
    * OLMo (AI2's language model)
    * Golden Gate Claude
    Get full access to Interconnects at www.interconnects.ai/subscribe
    --------  
    1:08:33
  • (Voiceover) OpenAI's o1 using "search" was a PSYOP
    Original post: https://www.interconnects.ai/p/openais-o1-using-search-was-a-psyop
    Figures:
    Figure 0: OpenAI's seminal test-time compute plot
    Figure 1: Setup for bucketed evals
    Figure 2: Evals with correctness labels
    Figure 3: Grouped evals
    Figure 4: Hypothetical inference scaling law
    Get full access to Interconnects at www.interconnects.ai/subscribe
    --------  
    12:13


About Interconnects

Audio essays about the latest developments in AI and interviews with leading scientists in the field. Breaking the hype, understanding what's under the hood, and telling stories. www.interconnects.ai
Podcast website
