How Do ML Models Use Their Features to Make Predictions (or SHAP Values for ML Explainability)

SHAP opens up the ML black box by providing feature attributions for every prediction of every model. Although a relatively new method (Lundberg & Lee, 2017), SHAP has gained popularity extremely quickly thanks to its user-friendly API and theoretical guarantees. In this talk I will guide your intuition through the theory SHAP is based on, and demonstrate how SHAP values can be aggregated to understand model behavior. Throughout the talk I will present real-life examples of using SHAP in the fraud-detection domain.
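For a feel of what the talk covers, here is a minimal sketch of computing per-prediction SHAP values and aggregating them into a global feature ranking, assuming the open-source shap and scikit-learn packages, with a synthetic dataset and model standing in for a real fraud model:

```python
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Train a model on synthetic data (a stand-in for a real dataset).
X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# Local explanation: one SHAP value per feature for every prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # array of shape (n_samples, n_features)

# Global view: aggregate per-prediction attributions into a feature ranking
# by averaging absolute SHAP values over the dataset.
mean_abs = np.abs(shap_values).mean(axis=0)
for i in np.argsort(mean_abs)[::-1]:
    print(f"feature {i}: mean |SHAP| = {mean_abs[i]:.4f}")
```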