First in our series on ML interpretability, working through Christoph Molnar's interpretability book.
When we apply machine learning models, we often want to understand what is really going on in the world; we don't just want to get a prediction.
Sometimes we want an intuitive understanding of how the overall model works. But often we want to explain an individual prediction: maybe your application for a credit card was denied and you want to know why. Maybe you want to understand the uncertainty associated with a prediction. Maybe you're going to make a real-world decision based on your model.
That's where Shapley values come in!
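As a taste of what this looks like in practice, here is a minimal sketch (not from the video) of explaining a single prediction with the shap Python package. The model, the synthetic "credit approval" data, and the feature setup are all hypothetical, chosen only to illustrate the idea.

```python
# Minimal sketch: Shapley-value explanation of one prediction with the
# `shap` package. The data and model here are toy stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic "approved" label

model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer computes (exact, tree-specific) Shapley values efficiently
# for tree ensembles; KernelExplainer is the model-agnostic fallback.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])     # explain the first "applicant"

# Each value is that feature's contribution to this one prediction,
# measured relative to the model's average output (explainer.expected_value).
print(shap_values)
```

The key property: the feature contributions for an instance sum to the difference between that instance's prediction and the average prediction, so the explanation fully accounts for the model's output.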
With Connor Tann and Dr. Tim Scarfe
References:
Whimsical canvas we were using:
whimsical.com/12th-march-chri...
We were using Christoph's book as a guide:
christophm.github.io/interpre...
christophm.github.io/interpre...
christophm.github.io/interpre...
SHAPLEY VALUES
Shapley, Lloyd S. "A value for n-person games." Contributions to the Theory of Games 2.28 (1953): 307-317.
www.rand.org/content/dam/rand...
SHAP
Lundberg, Scott M., and Su-In Lee. "A unified approach to interpreting model predictions." Advances in Neural Information Processing Systems. 2017.
papers.nips.cc/paper/2017/has...
LIME
Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why should I trust you?: Explaining the predictions of any classifier." Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. ACM (2016).
arxiv.org/abs/1602.04938