[introductory/intermediate] Explainable Machine Learning
Machine learning (ML) methods have been remarkably successful at extracting essential information from data across a wide range of application areas. An exciting and relatively recent development is the uptake of ML in the natural sciences, where the major goal is to obtain novel scientific insights and discoveries from observational or simulated data that make sense to the specialized practitioner. In a different vein, the increasing pervasiveness of applied machine learning has drawn attention to safety and ethics in the ML decision-making process: can the practitioner be confident that the ML decisions will be safe for the user and free of ethical shortfalls? In this short course, we review explainable machine learning in view of these aspects and discuss three core elements that we identified as broadly relevant: transparency, interpretability, and explainability. With these core elements in mind, we survey recent scientific work that incorporates machine learning and examine how explainable machine learning is used in combination with domain knowledge and application requirements.
Outline:
- Introduction and Motivation: ML and Science, Safety, and Ethics
- Terminology and Definitions
- Explainability: ML Outputs
- Explainability: ML Model Structure and Design
- Explainability: ML Model Parameters
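As a small preview of the output-level explainability methods in the outline above, the sketch below implements permutation feature importance, a standard model-agnostic technique that scores a feature by how much prediction accuracy drops when that feature's values are shuffled. This is an illustrative example of the technique class, not an excerpt from the course materials; the names `permutation_importance` and `model_fn` are our own.

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    """Score each feature by the drop in accuracy when its column is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = np.mean(model_fn(X) == y)   # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # destroy feature j's information
            scores.append(np.mean(model_fn(Xp) == y))
        importances[j] = baseline - np.mean(scores)
    return importances

# Toy "model": thresholds feature 0 and ignores feature 1 entirely.
X = np.hstack([np.linspace(-1, 1, 200)[:, None],
               np.random.default_rng(1).normal(size=(200, 1))])
y = (X[:, 0] > 0).astype(int)
model_fn = lambda X: (X[:, 0] > 0).astype(int)

imp = permutation_importance(model_fn, X, y)
# Feature 0 should receive a large importance score; feature 1 near zero.
```

The appeal of such a method, and a theme the course's terminology section makes precise, is that it explains the model's outputs without requiring any transparency into the model's internal structure or parameters.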
References:
- F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,” 2017, arXiv:1702.08608.
- R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, and D. Pedreschi, “A survey of methods for explaining black box models,” ACM Comput. Surv., vol. 51, no. 5, pp. 1–42, Aug. 2018.
- Z. C. Lipton, “The mythos of model interpretability,” Commun. ACM, vol. 61, no. 10, pp. 36–43, Sep. 2018.
- G. Montavon, W. Samek, and K.-R. Müller, “Methods for interpreting and understanding deep neural networks,” Digit. Signal Process., vol. 73, pp. 1–15, Feb. 2018.
- W. J. Murdoch, C. Singh, K. Kumbier, R. Abbasi-Asl, and B. Yu, “Definitions, methods, and applications in interpretable machine learning,” Proc. Nat. Acad. Sci., vol. 116, no. 44, pp. 22071–22080, 2019.
Prerequisites: Familiarity with basic machine learning problems (detection, classification, estimation) and approaches (hypothesis testing, support vector machines, neural networks).
Marco F. Duarte received the B.Sc. degree (Hons.) in computer engineering and the M.Sc. degree in electrical engineering from the University of Wisconsin-Madison, Madison, WI, USA, in 2002 and 2004, respectively, and the Ph.D. degree in electrical and computer engineering from Rice University, Houston, TX, USA, in 2009.
He was an NSF/IPAM Mathematical Sciences Postdoctoral Research Fellow with the Program in Applied and Computational Mathematics, Princeton University, Princeton, NJ, USA, from 2009 to 2010, and with the Department of Computer Science, Duke University, Durham, NC, USA, from 2010 to 2011. He is currently an Associate Professor with the Department of Electrical and Computer Engineering, University of Massachusetts Amherst, Amherst, MA, USA. His research interests include machine learning, compressed sensing, sensor networks, and computational imaging.
Dr. Duarte received the Presidential Fellowship and the Texas Instruments Distinguished Fellowship in 2004 and the Hershel M. Rich Invention Award in 2007, all from Rice University. He was a recipient of the IEEE Signal Processing Society Overview Paper Award (with Y. C. Eldar) in 2017 and the IEEE Signal Processing Magazine Best Paper Award (with M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk) in 2020. He is also an Associate Editor of the IEEE Transactions on Signal Processing.