29 May 2025 09:45 - 10:30
Tackling hallucinations, drift & data decay at scale
LLMs don’t fail overnight; they degrade quietly.
From prompt fragility to distributional drift, maintaining reliability in production demands more than one-time testing.
This session explores how engineering teams are:
→ Detecting and mitigating drift across prompts, embeddings, and outputs
→ Designing feedback loops that surface hallucinations in real-world use
You'll leave with proven strategies to reduce failure rates, preserve model accuracy, and ensure your LLMs stay robust as data, users, and use cases evolve.
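As a concrete illustration of the first point above, one simple way to detect embedding drift is to compare the centroid of a reference embedding batch against the centroid of current traffic. This is a minimal sketch, not a production detector; the vectors, dimensions, and threshold are all hypothetical, and real systems typically use richer statistics than a single centroid distance.

```python
import math
import random

def centroid(vectors):
    """Element-wise mean of a list of embedding vectors."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def drift_score(reference, current):
    """1 - cosine similarity between reference and current centroids.
    Near 0 means the batches point the same way; larger values
    suggest the embedding distribution has shifted."""
    return 1.0 - cosine(centroid(reference), centroid(current))

# Hypothetical data: a stable batch drawn like the reference,
# and a batch whose distribution has moved.
random.seed(0)
ref = [[random.gauss(1, 1) for _ in range(8)] for _ in range(200)]
same = [[random.gauss(1, 1) for _ in range(8)] for _ in range(200)]
shifted = [[random.gauss(-1, 1) for _ in range(8)] for _ in range(200)]

THRESHOLD = 0.5  # assumption: tune on your own traffic
print(drift_score(ref, same) < THRESHOLD)      # stable batch
print(drift_score(ref, shifted) > THRESHOLD)   # drifted batch
```

In practice this kind of check runs on a schedule over recent production embeddings, with the threshold calibrated against normal day-to-day variance rather than fixed by hand.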