PyData Seattle 2023

Being well informed: Building a ML Model Observability pipeline
04-26, 13:30–15:00 (America/Los_Angeles), Hood

Model Observability is often neglected but plays a critical role in the ML model lifecycle. Observability not only helps you understand an ML model better, it also removes uncertainty and speculation, giving deeper insight into aspects that are easily overlooked during model development. It helps answer the "why" behind an observed outcome. In this tutorial, we will build a production-quality Model Observability pipeline with an open-source Python stack. ML engineers, data scientists, and researchers can use this framework to extend it further and develop a comprehensive Model Observability platform.


The performance of an ML model can vary due to several factors. It is critically important to analyze and reason about deviations in performance, trigger the necessary alerts, and take appropriate action. Good observability helps narrow down the problem scope quickly and take appropriate actions such as retraining the model or improving feature selection. In this tutorial we will dive deep into building an observability pipeline. We will walk through a use case of deploying an ML model, collecting critical information for observability, building anomaly detection on that data, and displaying the results on a dashboard. We will use a few time-series and model interpretability tools to build a well-informed dashboard: MSTL (Multiple Seasonal-Trend decomposition using LOESS), AutoARIMA, and SHAP (SHapley Additive exPlanations). We will also use a few open-source tools and distributed computing frameworks that run on the Python stack to build the pipeline.
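
To make the anomaly-detection step concrete, here is a minimal, hypothetical sketch (not the tutorial's actual pipeline): it decomposes a synthetic hourly model-performance metric with statsmodels' MSTL and flags large residuals as anomalies. The data, thresholds, and alerting used in the tutorial will differ.

    # Minimal sketch: MSTL-based anomaly detection on a synthetic performance metric.
    # Assumptions: statsmodels >= 0.14 (for MSTL), hourly data with daily and weekly
    # seasonality, and a simple 3-sigma rule on the residual. Illustrative only.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import MSTL

    # Synthetic hourly accuracy metric with daily (24h) and weekly (168h) cycles.
    rng = np.random.default_rng(42)
    idx = pd.date_range("2023-01-01", periods=24 * 7 * 4, freq="H")
    t = np.arange(len(idx))
    metric = (
        0.80
        + 0.05 * np.sin(2 * np.pi * t / 24)    # daily seasonality
        + 0.03 * np.sin(2 * np.pi * t / 168)   # weekly seasonality
        + rng.normal(0, 0.01, len(idx))        # noise
    )
    metric[500] -= 0.15                        # inject an accuracy drop to detect
    series = pd.Series(metric, index=idx, name="model_accuracy")

    # Strip the trend and both seasonal components, then score the residual.
    result = MSTL(series, periods=(24, 168)).fit()
    resid = result.resid
    z = (resid - resid.mean()) / resid.std()

    # Flag points whose residual is more than 3 standard deviations from the mean;
    # in a real pipeline these would feed an alerting system and the dashboard.
    anomalies = series[np.abs(z) > 3]
    print(anomalies)

A residual-based rule like this is deliberately simple; the same residual series could instead be scored against AutoARIMA forecasts or any other detector before surfacing alerts on the dashboard.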

You will learn how to apply these techniques in Python to real-world problems and reason about observed behaviors. The session will be mostly a live demo; the code and instructions will be shared as a GitHub repository and an instruction document. We will run a custom Docker image that contains pre-built code and data sets. Please have the Docker daemon or Docker Desktop running on your laptop, and allocate at least 4 CPUs and 16 GB of RAM to the Docker Desktop instance. GitHub repo: https://github.com/anindya-saha/pydata-seattle.
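
For the interpretability piece, the following is a small, hypothetical sketch of how SHAP feature contributions might be computed for a dashboard. It assumes a scikit-learn tree model trained on synthetic data; the models and data sets in the tutorial's Docker image are different.

    # Minimal sketch: per-feature SHAP contributions for an observability dashboard.
    # Assumptions: shap and scikit-learn are installed; a tree-based regressor; synthetic data.
    import numpy as np
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor

    # Stand-in training data and model.
    X, y = make_regression(n_samples=2000, n_features=8, random_state=0)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer gives per-feature contributions for each prediction.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:200])   # shape: (200, 8)

    # Mean absolute contribution per feature: a simple global importance view
    # that can be tracked over time on the dashboard to spot shifts.
    mean_abs = np.abs(shap_values).mean(axis=0)
    for i, value in enumerate(mean_abs):
        print(f"feature_{i}: mean |SHAP| = {value:.4f}")

Tracking such aggregated contributions over time is one way to connect a metric deviation back to the features that drive it.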


Prior Knowledge Expected

No previous knowledge expected

Rajeev is a senior software engineer at Lyft focused on building the ML observability platform. Prior to Lyft, Rajeev spent the last few years building ML platforms, enabling large-scale distributed computing on Kubernetes (k8s), and building real-time, ultra-low-latency systems.

Anindya Saha is a Staff Machine Learning Platform Engineer at Lyft, focusing on distributed computing solutions for machine learning and data engineering. He led and implemented Spark on Kubernetes support on the ML platform, enabling feature engineering at scale with ephemeral Spark clusters on k8s. He is currently working on enabling scalable distributed model training on the ML platform.