29 May 2025 09:00 - 09:30
Data-driven LLMOps: Ensuring quality, performance, and continuous improvement in generative AI
The success of Generative AI applications depends on the quality of data and the continuous monitoring of model performance. This session will focus on the critical role of LLMOps in managing data pipelines, ensuring data integrity, and implementing effective monitoring strategies for Large Language Models (LLMs).
Real-world examples, such as customer feedback analysis, will be highlighted to explore advanced techniques in data processing and analysis.
These include leveraging Retrieval-Augmented Generation (RAG) and Cache-Augmented Generation (CAG) workflows with tailored filters, utilizing visual analytics for exploratory data analysis, and tracking functional measurements through Function Calling Agents to evaluate model performance.
The session will also cover the importance of establishing Human-in-the-Loop (HITL) processes to
ensure continuous validation and improvement.
Attendees will gain actionable insights into building a data-driven LLMOps framework, empowering their generative AI initiatives to deliver sustained value through continuous improvement.