Srinivasa (Sunny) Maringanti
Senior Machine Learning Engineer
T-Mobile
Srinivasa (Sunny) is a Senior Machine Learning Engineer at T-Mobile with nine years of experience spanning software engineering and AI/ML. He specializes in building and scaling distributed systems, combining strong foundations in C/C++, Python, and Java with modern ML frameworks such as TensorFlow and PyTorch. His work focuses on deploying cloud-native AI solutions that improve system performance, reduce latency, and drive efficiency across telecom environments. With a dual background in Computer Engineering (MS) and an MBA, Sunny brings both technical depth and commercial awareness, enabling him to translate complex business requirements into robust, production-ready architectures. He has delivered solutions across messaging platforms, security systems, and cloud infrastructure for global telecom operators, while also mentoring teams and driving best practices in scalable AI development.
20 May 2026 12:15 - 12:45
Platform engineering for ML teams: How to build internal developer platforms that let ML engineers move fast without breaking the data org
Most data infrastructure conversations start at the bottom of the stack. This one starts with the engineers who break it. ML teams move fast by default, and when the platform doesn't give them safe abstractions, they build unsafe ones. The result isn't a people problem; it's a missing infrastructure layer.

This session makes the architectural case for internal developer platforms designed specifically for ML workloads: what to expose, what to hide, and where the guardrails need to live in code rather than in a wiki. We'll dig into the concrete decisions that separate platforms that scale from platforms that become support tickets: compute isolation boundaries, data contract enforcement at the platform layer, and the abstraction patterns that let ML engineers self-serve without reaching past the guardrails into shared infrastructure.

Key takeaways:
→ A framework for where platform ownership ends and ML engineer ownership begins, and why most teams draw that line in the wrong place
→ The infrastructure primitives that matter most for ML workloads, and how to expose them without exposing everything underneath
→ What "paved roads" actually need to enforce to avoid becoming optional
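To make "guardrails live in code rather than a wiki" concrete, here is a minimal sketch of data contract enforcement at the platform layer. All names here (Field, DataContract, publish, ml_features_v1) are hypothetical illustrations, not part of any specific platform:

```python
# Hypothetical sketch: the platform validates every record against a
# declared contract before it reaches shared infrastructure, so a
# contract-breaking write fails at the boundary instead of downstream.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Field:
    name: str
    dtype: type
    required: bool = True


@dataclass(frozen=True)
class DataContract:
    name: str
    fields: tuple = field(default_factory=tuple)

    def validate(self, record: dict) -> list:
        """Return a list of violations; an empty list means the record passes."""
        violations = []
        for f in self.fields:
            if f.name not in record:
                if f.required:
                    violations.append(f"missing required field: {f.name}")
                continue
            if not isinstance(record[f.name], f.dtype):
                violations.append(
                    f"{f.name}: expected {f.dtype.__name__}, "
                    f"got {type(record[f.name]).__name__}"
                )
        return violations


def publish(contract: DataContract, record: dict) -> bool:
    """Platform-owned publish path: the guardrail is enforced here, in code."""
    violations = contract.validate(record)
    if violations:
        raise ValueError(f"contract '{contract.name}' violated: {violations}")
    return True


# ML engineers self-serve against the contract, not the raw storage layer.
events = DataContract(
    "ml_features_v1",
    (Field("user_id", str), Field("score", float)),
)
```

The design point is that `publish` is the only way onto shared infrastructure, so the paved road is enforced rather than optional.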