ChatGPT is completely changing how we learn programming. Instead of getting bogged down in coding theory, even beginners can jump right into building projects from day one. Quite the difference compared to university!
With tools as simple as ChatGPT, you can experiment with building real applications from the start without understanding much. This hands-on approach lets you learn by doing, offering instant feedback and a practical, exciting way to explore coding. But there's a right way and a wrong way to approach this. Relying solely on copy-pasting code won't make you a programmer.
When ChatGPT gives you a code snippet, say, a script that processes data or handles user login, use it as a starting point. TAKE THE TIME to UNDERSTAND why the code works, experiment with modifications, and see how changes affect the outcome. True mastery comes from engaging with the code, troubleshooting errors, and making it your own.
If you can't explain any of it, your app may run, but you won't become a better programmer or land a good job. It also leaves you with a fragile app: one day you'll have too much code to follow what's happening, and ChatGPT will be stuck in an endless debugging loop.
Yes, do embrace the power of AI to kickstart your projects, but keep in mind that real growth (and value) happens when you build things and learn the logic behind every line. We've built a whole course around that principle to learn Python: https://academy.towardsai.net/courses/python-for-genai?ref=1f9b29
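To make that concrete, here is a hypothetical example (the file name and column are made up) of the kind of snippet ChatGPT might hand you, along with the kind of questions worth asking before moving on:
```python
# Hypothetical ChatGPT-style snippet: load a CSV and average one numeric column.
import csv

def average_column(path: str, column: str) -> float:
    """Return the mean of a numeric column in a CSV file."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    values = [float(row[column]) for row in rows]
    return sum(values) / len(values)

# Assumes a sales.csv file with an "amount" column exists.
print(average_column("sales.csv", "amount"))

# Don't just run it; poke at it:
# - What happens if sales.csv is empty? (ZeroDivisionError)
# - What if a cell isn't a number? (float() raises ValueError)
# - How would you handle a missing column name?
# Answering those yourself is where the real learning happens.
```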
--------
16:33
How LLMs Will Impact Your Job (And How to Stay Ahead)
Here's an overview of the impact of LLMs on human work, which is complex and varied across different job categories...
--------
12:32
The Future of AI Development: The Need for LLM Developers
Software engineers vs. ML engineers vs. prompt engineers vs. LLM developers... all explained
The rise of LLMs isn’t just about technology; it’s also about people. To unlock their full potential, we need a workforce with new skills and roles. This includes LLM Developers, who bridge the gap between software development, machine learning engineering, and prompt engineering.
Let’s compare these roles...
Master, Use and Build with LLMs in this Programming Language Agnostic Course: https://academy.towardsai.net/courses/8-hour-genai-primer?ref=1f9b29
Master LLMs and Get Industry-ready Now: https://academy.towardsai.net/?ref=1f9b29
Our ebook: https://academy.towardsai.net/courses/buildingllmsforproduction?ref=1f9b29
Episode 2/6 of the "From Beginner to Advanced LLM Developer" course by Towards AI (linked above).
This course is specifically designed as a one-day bootcamp for software professionals (language agnostic). It is an efficient introduction to the generative AI field. We teach the core LLM skills and techniques together with practical tips, preparing you either to use LLMs via natural language or to explore the documentation for LLM model platforms and frameworks in the programming language of your choice and start developing your own customised LLM projects.
--------
8:07
AI Agents vs. Workflows: How to Spot Hype from Real "Agents"?
What most people call agents aren't agents. I never really liked the term "agent" until I saw this recent article by Anthropic, which I fully agree with and which clarified when we can actually call something an agent. The vast majority of "agents" out there are simply an API call to a language model: a few lines of code and a prompt.
Such a system cannot act independently, make decisions, or do anything on its own. It simply replies to your users. Still, we call them agents. But this isn't what we need. We need real agents. So what is a real agent?
That's what we dive into in this episode...
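Here is a rough, hypothetical sketch of the gap the episode is about; call_llm() and the tool-calling convention are made-up placeholders, not Anthropic's code or any specific API:
```python
def call_llm(prompt: str) -> str:
    """Placeholder for a single LLM API call (e.g., a chat completion)."""
    raise NotImplementedError

# What most products call an "agent": one prompt, one call, one reply.
def chatbot(user_message: str) -> str:
    return call_llm(f"You are a helpful assistant.\nUser: {user_message}")

# Closer to a real agent: a loop where the model decides which tool to use,
# observes the result, and keeps going until it judges the task is done.
def agent(task: str, tools: dict, max_steps: int = 10) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        decision = call_llm(
            "\n".join(history)
            + "\nReply with 'tool_name: input' to act, or 'FINAL: answer' to stop."
        )
        if decision.startswith("FINAL"):
            return decision
        tool_name, _, tool_input = decision.partition(":")
        tool = tools.get(tool_name.strip(), lambda x: "unknown tool")
        history.append(f"Action: {decision}\nObservation: {tool(tool_input.strip())}")
    return "Stopped after max_steps without a final answer."
```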
Links:
Anthropic’s blog on agents: https://www.anthropic.com/research/building-effective-agents
Anthropic’s computer use: https://www.anthropic.com/news/3-5-models-and-computer-use
Hamel Husain’s blog post on Devin: https://www.answer.ai/posts/2025-01-08-devin.html
--------
11:36
CAG vs RAG: Which One is Right for You?
In the early days of LLMs, context windows (the amount of text we can send to the model) were small, often capped at just 4,000 tokens (roughly 3,000 words), making it impossible to load all the relevant context.
This limitation gave rise to approaches like Retrieval-Augmented Generation (RAG) in 2023, which dynamically fetches the necessary context.
As LLMs evolved to support much larger context windows, up to 100k or even millions of tokens, new approaches like Cache-Augmented Generation (CAG) began to emerge, offering a true alternative to RAG...
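Here is a rough sketch of the two patterns; retrieve() and call_llm() are hypothetical stand-ins for your vector store and LLM API, not a specific library:
```python
def rag_answer(question: str, retrieve, call_llm, k: int = 5) -> str:
    """RAG: fetch only the k most relevant chunks at query time."""
    chunks = retrieve(question, k)  # e.g., vector-similarity search over your documents
    context = "\n\n".join(chunks)
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")

def cag_answer(question: str, all_documents: list[str], call_llm) -> str:
    """CAG: preload the whole (small enough) knowledge base into the long
    context window; the static prefix can be cached by the provider so
    repeated questions don't re-pay the cost of processing it."""
    context = "\n\n".join(all_documents)
    return call_llm(f"Context:\n{context}\n\nQuestion: {question}")
```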
►Full article and references: https://www.louisbouchard.ai/cag-vs-rag/
►Build Your First Scalable Product with LLMs: https://academy.towardsai.net/courses/beginner-to-advanced-llm-dev?ref=1f9b29
►Master LLMs and Get Industry-ready Now: https://academy.towardsai.net/?ref=1f9b29
►Our ebook: https://academy.towardsai.net/courses/buildingllmsforproduction?ref=1f9b29
►Twitter: https://twitter.com/Whats_AI
►My Newsletter (My AI updates and news clearly explained): https://louisbouchard.substack.com/
►Join Our AI Discord: https://discord.gg/learnaitogether
About What's AI Podcast by Louis-François Bouchard
Learn more about AI and how to better leverage it.
This podcast aims to share exciting discussions with AI experts to demystify what they do and what they work on. We will cover specific AI-related topics (e.g., ChatGPT, DALLE...) and different roles related to artificial intelligence to share knowledge from the people who worked hard to gather it.
I also want to showcase the unique paths these people took to get where they are as AI builders, experts, and users, from building AI technologies to leveraging them.
Owner of the What's AI channel on YouTube, co-founder of Towards AI, and ex-PhD at Mila.