MLOps Beyond LLMs

Companies and organizations know they shouldn’t build for Google, but they also don’t know how NOT to build for Google scale. The MLOps tooling ecosystem is fragmented, and companies that are just starting on their journey to becoming ML-native or ML-fluent are confused by MLOps maturity models that don’t account for their particular organizational goals or trajectory, especially if they’re not “on the road” to Google-level maturity. Toss in the emergence and (seemingly widespread) adoption of LLMs, and companies (and teams) are lost and looking for clarity on:

- How can existing ML platforms be extended to account for new use cases involving LLMs?
- Does the team composition change? Do we now need to start hiring “prompt engineers”?
- Should we stop existing initiatives? Do we need to pivot?

My goal in this session is to help cut through the noise and cover:

- What are the main problems MLOps tries to solve?
- What does the archetypal MLOps platform look like?
- What are the most common components of an MLOps platform?
- Where do LLM-based applications fit in?