-
Key Lessons in Building Production-Grade AI Agents
These key takeaways are distilled from an in-depth Product Bulb Podcast interview with expert AI engineer Rajaswa Patil, who has been a key contributor to major projects like GitHub Copilot and Postman's AI assistant, Postbot. This document serves as a practical guide for aspiring AI developers, offering insights into the real-world challenges and solutions involved in…
-
The Product Bulb Podcast Ep. 07:
Lessons from Building AI Copilots – Secrets to Scaling AI Agents | with Rajaswa Patil
Here's a power-packed episode – hear from a foundational AI engineer on scaling AI agents. The world is captivated by the seemingly magical capabilities of large language models. But behind the curtain of every seamless AI assistant lies a…
-
The Product Bulb Podcast Ep. 06: Production Generative AI – Practical Use Cases and Challenges | with Apurva Misra
Join us on this Product Bulb Podcast for a deep dive into the practical applications and challenges of production generative AI with expert AI consultant and machine learning engineer, Apurva Misra. Learn about real-world use cases like customer Q&A systems that reduced human support queries by 21% and innovative applications like allergen detection. Apurva shares…
-
Understanding Production RAG Systems (Retrieval Augmented Generation)
1. What is RAG? Retrieval Augmented Generation (RAG) is a method where you have a foundation model and a library of personal documents – this can be unstructured data in any format. Your goal is to answer questions from your personal library of documents with the help of an LLM. Enter…
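The excerpt above describes the core RAG loop: retrieve the most relevant documents from a personal library, then have the LLM answer using that context. Below is a minimal, hypothetical sketch of that loop; the document list, retrieval by keyword overlap, and prompt format are illustrative assumptions, not details from the article. A production system would use embeddings, a vector store, and a real LLM call.

```python
# Hypothetical minimal RAG sketch: keyword-overlap retrieval over an
# in-memory document list, then a prompt that grounds the LLM's answer
# in the retrieved context.
import re

DOCUMENTS = [
    "Postbot is Postman's AI assistant for writing API tests.",
    "Retrieval Augmented Generation (RAG) pairs a foundation model with search over private documents.",
    "LLM observability tracks prompts, latency, and cost in production.",
]

def tokenize(text: str) -> set[str]:
    """Lowercase and split on non-word characters."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = tokenize(question)
    ranked = sorted(docs, key=lambda d: len(q_words & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Ask the model to answer using only the retrieved documents."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

if __name__ == "__main__":
    question = "What is Retrieval Augmented Generation?"
    prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(prompt)  # In a real system, this prompt would be sent to the LLM.
```

The design point the excerpt makes is that retrieval narrows the LLM's input to your own documents, so the model answers from your data rather than from its training set alone.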
-
What's LLM Observability? Latest tools to look out for
2024 is shaping up to be the year when many applied Large Language Model (LLM) products from enterprise companies, other than the creators of the foundation LLMs, move out of the Proof of Concept (POC) phase and into actual use by their customers. It's going to be a year of trial and error,…