
Vanishing Gradients

Hugo Bowne-Anderson
A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson. It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space!

Available episodes

5 of 44
  • Episode 44: The Future of AI Coding Assistants: Who’s Really in Control?
    AI coding assistants are reshaping how developers write, debug, and maintain code—but who's really in control? Hugo speaks with Tyler Dunn, CEO and co-founder of Continue, an open-source AI-powered code assistant that gives developers more customization and flexibility in their workflows. In this episode, we dive into:
    - The trade-offs between proprietary and open-source AI coding assistants, and why open-source might be the future.
    - How structured workflows, modular AI, and customization help developers maintain control over their tools.
    - The evolution of AI-powered coding, from autocomplete to intelligent code suggestions and beyond.
    - Why the best developer experiences come from sensible defaults with room for deeper configuration.
    - The future of LLM-based software engineering, where fine-tuning models on personal and team-level data could make AI coding assistants even more effective.
    With companies increasingly integrating AI into development workflows, this conversation explores the real impact of these tools—and the importance of keeping developers in the driver's seat.
    LINKS
    The podcast livestream on YouTube (https://youtube.com/live/8QEgVCzm46U?feature=share)
    Continue's website (https://www.continue.dev/)
    Continue is hiring! (https://www.continue.dev/about-us)
    amplified.dev: We believe in a future where developers are amplified, not automated (https://amplified.dev/)
    Beyond Prompt and Pray: Building Reliable LLM-Powered Software in an Agentic World (https://www.oreilly.com/radar/beyond-prompt-and-pray/)
    LLMOps Lessons Learned: Navigating the Wild West of Production LLMs 🚀 (https://www.zenml.io/blog/llmops-lessons-learned-navigating-the-wild-west-of-production-llms)
    Building effective agents by Erik Schluntz and Barry Zhang, Anthropic (https://www.anthropic.com/research/building-effective-agents)
    Tyler on LinkedIn (https://www.linkedin.com/in/tylerjdunn/)
    Hugo on Twitter (https://x.com/hugobowne)
    Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    Vanishing Gradients on Lu.ma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    --------  
    1:34:11
  • Episode 43: Tales from 400+ LLM Deployments: Building Reliable AI Agents in Production
    Hugo speaks with Alex Strick van Linschoten, Machine Learning Engineer at ZenML and creator of a comprehensive LLMOps database documenting over 400 deployments. Alex's extensive research into real-world LLM implementations gives him unique insight into what actually works—and what doesn't—when deploying AI agents in production. In this episode, we dive into:
    - The current state of AI agents in production, from successes to common failure modes
    - Practical lessons learned from analyzing hundreds of real-world LLM deployments
    - How companies like Anthropic, Klarna, and Dropbox are using patterns like ReAct, RAG, and microservices to build reliable systems
    - The evolution of LLM capabilities, from expanding context windows to multimodal applications
    - Why most companies still prefer structured workflows over fully autonomous agents
    We also explore real-world case studies of production hurdles, including cascading failures, API misfires, and hallucination challenges. Alex shares concrete strategies for integrating LLMs into your pipelines while maintaining reliability and control. Whether you're scaling agents or building LLM-powered systems, this episode offers practical insights for navigating the complex landscape of LLMOps in 2025.
    LINKS
    The podcast livestream on YouTube (https://youtube.com/live/-8Gr9fVVX9g?feature=share)
    The LLMOps database (https://www.zenml.io/llmops-database)
    All blog posts about the database (https://www.zenml.io/category/llmops)
    Anthropic's Building effective agents essay (https://www.anthropic.com/research/building-effective-agents)
    Alex on LinkedIn (https://www.linkedin.com/in/strickvl/)
    Hugo on Twitter (https://x.com/hugobowne)
    Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    Vanishing Gradients on YouTube (https://www.youtube.com/channel/UC_NafIo-Ku2loOLrzm45ABA)
    Vanishing Gradients on Lu.ma (https://lu.ma/calendar/cal-8ImWFDQ3IEIxNWk)
    --------  
    1:01:03
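    The contrast Alex draws between structured workflows and fully autonomous agents can be sketched in a few lines of Python. This is a hypothetical toy, not ZenML's API or any pattern from the episode: `retrieve` and `generate` are stand-ins for real retrieval and LLM calls, the workflow runs a fixed sequence of steps, and the agent loop lets the "model" decide the next action under a step cap.

```python
# Toy contrast: a structured workflow (fixed steps, predictable cost)
# vs. an autonomous agent loop (the model picks actions until it stops).
# All functions here are hypothetical stand-ins for real LLM calls.

def retrieve(query: str) -> str:
    # Stand-in for a RAG retrieval step.
    return f"docs about {query}"

def generate(query: str, context: str) -> str:
    # Stand-in for an LLM completion grounded in retrieved context.
    return f"answer to '{query}' using {context}"

def structured_workflow(query: str) -> str:
    # Every run executes the same steps in the same order.
    context = retrieve(query)
    return generate(query, context)

def agent_loop(query: str, max_steps: int = 3) -> str:
    # The "model" chooses the next action each turn; a step cap
    # bounds cost and guards against runaway loops.
    history: list[str] = []
    for _ in range(max_steps):
        action = "retrieve" if not history else "answer"
        if action == "retrieve":
            history.append(retrieve(query))
        else:
            return generate(query, history[-1])
    return generate(query, history[-1] if history else "")

print(structured_workflow("LLMOps"))
print(agent_loop("LLMOps"))
```

    Both paths reach the same answer here, but only the workflow's behavior is fixed in advance, which is one reason production teams in the database tend to favor it.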
  • Episode 42: Learning, Teaching, and Building in the Age of AI
    In this episode of Vanishing Gradients, the tables turn as Hugo sits down with Alex Andorra, host of Learning Bayesian Statistics. Hugo shares his journey from mathematics to AI, reflecting on how Bayesian inference shapes his approach to data science, teaching, and building AI-powered applications. They dive into the realities of deploying LLM applications, overcoming “proof-of-concept purgatory,” and why first principles and iteration are critical for success in AI. Whether you’re an educator, software engineer, or data scientist, this episode offers valuable insights into the intersection of AI, product development, and real-world deployment.
    LINKS
    The podcast on YouTube (https://www.youtube.com/watch?v=BRIYytbqtP0)
    The original podcast episode (https://learnbayesstats.com/episode/122-learning-and-teaching-in-the-age-of-ai-hugo-bowne-anderson)
    Alex Andorra on LinkedIn (https://www.linkedin.com/in/alex-andorra/)
    Hugo on LinkedIn (https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/)
    Hugo on Twitter (https://x.com/hugobowne)
    Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    Hugo's "Building LLM Applications for Data Scientists and Software Engineers" course (https://maven.com/s/course/d56067f338)
    --------  
    1:20:03
  • Episode 41: Beyond Prompt Engineering: Can AI Learn to Set Its Own Goals?
    Hugo Bowne-Anderson hosts a panel discussion from the MLOps World and Generative AI Summit in Austin, exploring the long-term growth of AI by distinguishing real problem-solving from trend-based solutions. If you're navigating the evolving landscape of generative AI, productionizing models, or questioning the hype, this episode dives into the tough questions shaping the field.
    The panel features:
    - Ben Taylor (Jepson) (https://www.linkedin.com/in/jepsontaylor/) – CEO and Founder at VEOX Inc., with experience in AI exploration, genetic programming, and deep learning.
    - Joe Reis (https://www.linkedin.com/in/josephreis/) – Co-founder of Ternary Data and author of Fundamentals of Data Engineering.
    - Juan Sequeda (https://www.linkedin.com/in/juansequeda/) – Principal Scientist and Head of AI Lab at Data.World, known for his expertise in knowledge graphs and the semantic web.
    The discussion unpacks essential topics such as:
    - The shift from prompt engineering to goal engineering—letting AI iterate toward well-defined objectives.
    - Whether generative AI is having an electricity moment or more of a blockchain trajectory.
    - The combinatorial power of AI to explore new solutions, drawing parallels to AlphaZero redefining strategy games.
    - The POC-to-production gap and why AI projects stall.
    - Failure modes, hallucinations, and governance risks—and how to mitigate them.
    - The disconnect between executive optimism and employee workload.
    Hugo also mentions his upcoming workshop on escaping Proof-of-Concept Purgatory, which has evolved into the Maven course "Building LLM Applications for Data Scientists and Software Engineers", launching in January (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0&utm_medium=partner&utm_source=instructor). Vanishing Gradients listeners can get 25% off the course (use the code VG25), with $1,000 in Modal compute credits included.
    A huge thanks to Dave Scharbach and the Toronto Machine Learning Society for organizing the conference and to the audience for their thoughtful questions. As we head into the new year, this conversation offers a reality check amidst the growing AI agent hype.
    LINKS
    Hugo on Twitter (https://x.com/hugobowne)
    Hugo on LinkedIn (https://www.linkedin.com/in/hugo-bowne-anderson-045939a5/)
    Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    "Building LLM Applications for Data Scientists and Software Engineers" course (https://maven.com/hugo-stefan/building-llm-apps-ds-and-swe-from-first-principles?utm_campaign=8123d0&utm_medium=partner&utm_source=instructor)
    --------  
    43:51
  • Episode 40: What Every LLM Developer Needs to Know About GPUs
    Hugo speaks with Charles Frye, Developer Advocate at Modal and someone who really knows GPUs inside and out. If you're a data scientist, machine learning engineer, AI researcher, or just someone trying to make sense of hardware for LLMs and AI workflows, this episode is for you. Charles and Hugo dive into the practical side of GPUs—from running inference on large models, to fine-tuning and even training from scratch. They unpack the real pain points developers face, like figuring out:
    - How much VRAM you actually need.
    - Why memory—not compute—ends up being the bottleneck.
    - How to make quick, back-of-the-envelope calculations to size up hardware for your tasks.
    - And where things like fine-tuning, quantization, and retrieval-augmented generation (RAG) fit into the mix.
    One thing Hugo really appreciates is that Charles and the Modal team recently put together the GPU Glossary—a resource that breaks down GPU internals in a way that's actually useful for developers. We reference it a few times throughout the episode, so check it out in the show notes below.
    🔧 Charles also does a demo during the episode—some of it is visual, but we talk through the key points so you'll still get value from the audio. If you'd like to see the demo in action, check out the livestream linked below.
    Hugo is teaching the "Building LLM Applications for Data Scientists and Software Engineers" course with Stefan Krawczyk (ex-StitchFix) in January (https://maven.com/s/course/d56067f338). Charles is giving a guest lecture on hardware for LLMs, and Modal is giving all students $1K worth of compute credits (use the code VG25 for $200 off).
    LINKS
    The livestream on YouTube (https://www.youtube.com/live/INryb8Hjk3c?si=0cbb0-Nxem1P987d)
    The GPU Glossary (https://modal.com/gpu-glossary) by the Modal team
    What We've Learned From A Year of Building with LLMs (https://applied-llms.org/) by Charles and friends
    Charles on Twitter (https://x.com/charles_irl)
    Hugo on Twitter (https://x.com/hugobowne)
    Vanishing Gradients on Twitter (https://x.com/vanishingdata)
    --------  
    1:43:34
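    The kind of back-of-the-envelope VRAM sizing discussed above can be sketched in a few lines of Python. This is a minimal sketch of a common rule of thumb (weights take parameters × bytes per parameter, plus roughly 20% extra for activations, KV cache, and framework overhead), not a formula from the episode; the function name and overhead factor are illustrative assumptions.

```python
def estimate_vram_gb(params_billion: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Rough inference-time VRAM estimate in GB.

    Weights alone take params_billion * bytes_per_param GB
    (1e9 params * bytes / 1e9 bytes per GB); the overhead factor
    adds a ballpark ~20% for activations, KV cache, and runtime.
    """
    weights_gb = params_billion * bytes_per_param
    return weights_gb * overhead

# A 7B model in fp16 (2 bytes/param): weights ~14 GB, total ~16.8 GB,
# which is why it won't fit on a 16 GB consumer card without tricks.
print(estimate_vram_gb(7, 2))

# The same model quantized to 4-bit (0.5 bytes/param): ~4.2 GB.
print(estimate_vram_gb(7, 0.5))
```

    The memory-not-compute point falls out of this arithmetic: the weights alone fix a hard floor on VRAM before a single token is generated, so quantization (shrinking bytes per parameter) often matters more than a faster chip.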


About Vanishing Gradients

A podcast about all things data, brought to you by data scientist Hugo Bowne-Anderson. It's time for more critical conversations about the challenges in our industry in order to build better compasses for the solution space! To this end, this podcast will consist of long-format conversations between Hugo and other people who work broadly in the data science, machine learning, and AI spaces. We'll dive deep into all the moving parts of the data world, so if you're new to the space, you'll have an opportunity to learn from the experts. And if you've been around for a while, you'll find out what's happening in many other parts of the data world.
Podcast website

Generated: 2/5/2025 - 9:39:39 AM