In the BC (Before-ChatGPT!) era, MLOps primarily encompassed the processes related to model serving, inference, and monitoring. During that time, considerable effort was dedicated to data collection and model training, with MLOps serving as the means to operationalize and manage those models effectively. The landscape has shifted since then, however: the traditional practices of model training and fine-tuning have become less common, as extended context windows let teams adapt models through prompting and in-context learning instead. While these core processes all remain crucial, in this talk we dive deep into the emerging dynamics of operationalizing LLMs at scale, along with invaluable insights gleaned from real-world industrial applications.