Introducing Chain of Thought, the podcast for software engineers and leaders that demystifies artificial intelligence.
Join us each week as we tell the stories of the people building the AI revolution, unravel actionable strategies and share practical techniques for building effective generative AI applications.
Using AI to Modernize Your Legacy Applications | MongoDB’s Rachelle Palmer
Imagine cutting your legacy code modernization timeline from years to months. It's no longer science fiction, and this week's guest is here to tell us how. Rachelle Palmer, Director of Product Management at MongoDB, joins hosts Conor Bronsdon and Atindriyo Sanyal for a discussion on the groundbreaking ways AI is modernizing legacy applications. At MongoDB, Rachelle's forward-deployed AI engineering team is tackling the challenge of transforming complex, outdated codebases, freeing developers from technical debt. She details how LLMs are automating tasks like improving documentation, generating tests, and even converting business logic, dramatically reducing modernization timelines from years to months. What once demanded teams of dozens can now be achieved with a small, highly efficient team.
Chapters:
00:00 Introduction and Host Welcome
00:58 Challenges in Modernizing Legacy Applications
02:52 Real-World Examples of Code Modernization
04:00 The Role of LLMs in Code Modernization
08:01 Measuring Success in AI-Powered Modernization
12:28 The Future of AI in Engineering
16:17 Evaluating Modernization Success
21:12 Returning to Your Startup Roots
29:07 Forward Deployed AI Engineers
35:36 Importance of Academic Research in AI
42:10 Conclusion and Farewell
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Today's Guest(s)
Rachelle Palmer
MongoDB
Application Modernization Factory
Check out Galileo
Try Galileo
--------
43:37
Can Your AI Strategy Be Future-Proof? | Galileo’s Vikram Chatterji
This week, we're sharing a special episode courtesy of Dev Interrupted. Our co-host, Galileo CEO Vikram Chatterji, recently joined the Dev Interrupted team for an engaging discussion on AI strategy. We were so impressed by the conversation that we wanted to share it with our audience, and they were kind enough to let us. We hope you enjoy it!
From Dev Interrupted:
"Vikram Chatterji joins Dev Interrupted's Andrew Zigler to discuss how engineering leaders can future-proof their AI strategy and navigate an emerging dilemma: the pressure to adopt AI to stay competitive while justifying AI spending and avoiding risky investments.
To accomplish this, Vikram emphasizes the importance of establishing clear evaluation frameworks, prioritizing AI use cases based on business needs, and understanding your company's unique cultural context when deploying AI."
Chapters:
00:00 Introduction and Special Announcement
01:14 Welcome to Dev Interrupted
01:42 Challenges in AI Adoption
03:16 Balancing Business Needs and AI
06:15 Crawl, Walk, Run Approach
10:52 Building Trust and Prototyping
13:07 AI Agents as Smart Routers
13:50 Galileo's Role in AI Development
16:25 Evaluating AI Systems
25:36 Skills for Engineering Leaders
27:35 Conclusion
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Dev Interrupted
Podcast
Substack
LinkedIn
Follow Dev Interrupted Hosts
Andrew
Ben
Check out Galileo
Try Galileo
--------
29:16
The Making of Gemini 2.0: DeepMind's Approach to AI Development and Deployment | Logan Kilpatrick
Google's strength in AI has often seemed to get lost amid OpenAI announcements or DeepSeek fervor, yet Gemini 2.0 is more than good for many tasks; it's the model to beat, and we have the research to back it up. This week, Logan Kilpatrick, Senior Product Manager at Google DeepMind, joins us to discuss Gemini's creation story, its emergence as the premier model in the AI race, and why the launch of Gemini 2.0 is great news for developers.
During the conversation, Conor and Logan explore the exciting world of multimodal AI, Gemini's strengths in agentic use cases, and its unique approach to function calling, compositional function calling, and the seamless integration of tools like search and code execution.
They also chat about Logan's vision for a future where AI interacts with the world more naturally, offering a view of the potential of vision-first AI agents, and why Google's hardware advantage is enabling Gemini's impressive performance and long-context capabilities.
Follow along with the discussion using Galileo's AI Agent Leaderboard: https://huggingface.co/spaces/galileo-ai/agent-leaderboard
Chapters:
00:00 DeepMind's Role in Gemini's Development
03:49 Gemini 2.0 Updates and Developer Highlights
06:08 Agentic Use Cases and Function Calling
11:29 Multimodal Capabilities
16:15 Putting AI in Production
21:06 Gemini's Differentiation and Hardware
31:22 Future Vision for Gemini and G Suite Integration
35:23 Gemini for Developers
39:02 Conclusion and Farewell
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Follow Logan
Twitter: @OfficialLoganK
LinkedIn: https://www.linkedin.com/in/logankilpatrick/
Show Notes
Try Gemini for yourself: gemini.google.com
Gemini for Developers: aistudio.google.com
Check out Galileo
Try Galileo
--------
40:32
DeepSeek Fallout, Export Controls & Agentic Evals
This week, hosts Conor Bronsdon and Atindriyo Sanyal discuss the fallout from DeepSeek's groundbreaking R1 model, its impact on the open-source AI landscape, and how its release will shape model development moving forward. They also discuss what effect (if any) export controls have had on AI innovation and whether we're witnessing the rise of "Agents as a Service."
To tackle the increasing complexity of agentic systems, Conor and Atin highlight the need for robust evaluation frameworks, discussing the challenges of measuring agent performance and how the recent launch of Galileo's agentic evaluations is empowering developers to build safer and more effective AI agents.
Chapters:
00:00 Introduction
02:09 DeepSeek's Impact and Innovations
03:43 Open Source AI and Industry Implications
13:44 Export Controls and Global AI Competition
18:55 Software as a Service
19:29 Agentic Evaluations
25:14 Metrics for Success
31:34 Conclusion and Farewell
Follow the hosts
Follow Atin
Follow Conor
Follow Vikram
Follow Yash
Check out Galileo
Try Galileo
Show Notes
On DeepSeek and Export Controls
Introducing Agentic Evaluations
--------
32:41
AI, Open Source & Developer Safety | Block’s Rizel Scarlett
As DeepSeek so aptly demonstrated, AI doesn’t need to be closed source to be successful.
This week, Rizel Scarlett, a Staff Developer Advocate at Block, joins Conor Bronsdon to discuss the intersections between AI, open source, and developer advocacy. Rizel shares her journey into the world of AI, her passion for empowering developers, and her work on Block's new AI initiative, Goose, an on-machine developer agent designed to automate engineering tasks and enhance productivity.
Conor and Rizel also explore how AI can enable psychological safety, especially for junior developers. They then dive into responsible AI development, ethical considerations in AI, and the impact of community involvement in building open source developer tools.
Chapters:
00:00 Rizel's Role at Block
02:41 Introducing Goose: Block's AI Agent
06:30 Psychological Safety and AI for Developers
11:24 AI Tools and Team Dynamics
17:28 Open Source AI and Community Involvement
25:29 Future of AI in Developer Communities
27:47 Responsible and Ethical Use of AI
31:34 Conclusion
Follow
Conor Bronsdon: https://www.linkedin.com/in/conorbronsdon/
Rizel Scarlett
LinkedIn: https://www.linkedin.com/in/rizel-bobb-semple/
Website: https://blackgirlbytes.dev/
Show Notes
Learn more about Goose: https://block.github.io/goose/