04-28, 14:15–15:00 (America/Los_Angeles), St. Helens
Over the past few years, Explainable AI has become one of the fastest-growing areas of research and tooling, driven by the proliferation of ML/AI models in critical systems. A few methods have emerged as clear favourites and are widely used in industry to make sense of complex models. However, they are not perfect and can lull practitioners into a false sense of security. In this talk, we look at the popular methods and illustrate when they fail, how they fail, and why they fail.
Outline
Explainable AI has progressed significantly in recent years in terms of both research and tooling. Popular Python packages such as SHAP and LIME are used in industry and academia alike to understand the decision-making of black-box models. However, they often provide a false sense of assurance. Through this talk, we want to help users become aware of these pitfalls, understand the reasoning behind the failure cases, and learn measures to overcome them. We will also give a brief introduction to these explainable AI methods and walk through code examples.
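To give a flavour of the SHAP portion of the walkthrough, here is a minimal sketch; the dataset, model, and plotting call are illustrative assumptions rather than the exact example used in the talk:

    # Minimal SHAP sketch (illustrative; dataset and model are assumptions)
    import shap
    import xgboost
    from sklearn.datasets import load_breast_cancer

    # Train a simple tree-ensemble "black box" on a toy dataset
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = xgboost.XGBClassifier(n_estimators=100).fit(X, y)

    # TreeExplainer computes per-feature SHAP values for tree ensembles
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Summary plot: how strongly each feature pushes predictions up or down
    shap.summary_plot(shap_values, X)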
Central Thesis
The central purpose of the proposed talk is to give attendees a zero-to-one understanding of how to responsibly use explainable AI techniques in their own use cases. Through a combination of hands-on coding examples, the necessary theoretical background, and intuitive examples of cases where these tools falter, they will develop the confidence, knowledge, and intuition to use these methods in their own ML/AI pipelines. We will show practical applications of explainable AI for model debugging and model understanding, while also showing how users can be misled by these explanations if they are not careful.
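As an example of the kind of hands-on snippet attendees will work through, here is a minimal LIME sketch; the dataset and model are assumptions chosen for illustration:

    # Minimal LIME sketch (illustrative; dataset and model are assumptions)
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )

    # Explain one prediction by fitting a local linear surrogate around it
    explanation = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
    print(explanation.as_list())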
Takeaways
At the end of the talk, attendees will have an intuitive and theoretical understanding of popular explainable AI techniques like LIME and SHAP. They will also see and use hands-on coding examples of applying these techniques through their Python packages. Further, they will observe and analyse the failure cases of these techniques and tools, and the reasoning behind the failures. This will give them the understanding and the awareness not to use these tools straight off the shelf without appreciating the nitty-gritty details, so they can use them responsibly and keep them from failing silently.
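One concrete failure mode we will dissect is instability: because LIME fits its local surrogate on a random sample of perturbations, the "top features" reported for the same prediction can change between runs. A hedged sketch of how this can be demonstrated (again, the dataset and model are assumptions):

    # Sketch of a LIME instability check (illustrative; dataset and model are assumptions)
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_wine
    from sklearn.ensemble import GradientBoostingClassifier

    data = load_wine()
    model = GradientBoostingClassifier(random_state=0).fit(data.data, data.target)

    def top_features(seed):
        # A fresh explainer (and hence a fresh perturbation sample) for each seed
        explainer = LimeTabularExplainer(
            data.data, feature_names=list(data.feature_names), random_state=seed
        )
        exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=3)
        return [name for name, _ in exp.as_list()]

    # The same instance can receive different "top features" across seeds
    print(top_features(0))
    print(top_features(1))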
No previous knowledge expected
I am currently finishing up my Master's degree at UC San Diego. I have worked with a great bunch of collaborators from UCSD, Stanford, IBM Research, and Purdue University! Prior to this, I worked at American Express AI Labs for two years as a Research Engineer. I completed my undergraduate studies in Computer Science at BITS Pilani in the beautiful state of Goa.