New Book Review: "An Introduction to Machine Learning Interpretability"

New book review for An Introduction to Machine Learning Interpretability, by Patrick Hall and Navdeep Gill, O'Reilly, 2018, reposted here:

Rating: 4 out of 5 stars

Copy provided by Dataiku.

This freely available book introduces the perceived need for machine learning interpretability and how that need might be met with practical methods. As the authors explain in their introduction, trusting models and their results is a hallmark of good science. "The inherent trade-off between accuracy and interpretability in predictive modeling can be a particularly vexing catch-22 for analysts and data scientists working in regulated industries. Due to strenuous regulatory and documentation requirements, data science professionals in the regulated verticals of banking, insurance, healthcare, and other industries often feel locked into using traditional, linear modeling techniques to create their predictive models. So, how can you use machine learning to improve the accuracy of your predictive models and increase the value they provide to your organization while still retaining some degree of interpretability?"

"This report provides some answers to this question by introducing interpretable machine learning techniques, algorithms, and models. It discusses predictive modeling and machine learning from an applied perspective and puts forward social and commercial motivations for interpretability, fairness, accountability, and transparency in machine learning. It defines interpretability, examines some of the major theoretical difficulties in the burgeoning field, and provides a taxonomy for classifying and describing interpretable machine learning techniques. We then discuss many credible and practical machine learning interpretability techniques, consider testing of these interpretability techniques themselves, and, finally, we present a set of open source code examples for interpretability techniques."

This concise 40-page work is broken down into the following sections: "Machine Learning and Predictive Modeling in Practice", "Social and Commercial Motivations for Machine Learning Interpretability", "The Multiplicity of Good Models and Model Locality", "Accurate Models with Approximate Explanations", "Defining Interpretability", "A Machine Learning Interpretability Taxonomy for Applied Practitioners", "Common Interpretability Techniques", "Testing Interpretability", and "Machine Learning Interpretability in Action". Additionally, links to the 15 papers that fed the content of this book are provided in the concluding "References" section, such as "The Evolution of Analytics: Opportunities and Challenges for Machine Learning in Business", "Why should I trust you?: Explaining the predictions of any classifier", "Towards a rigorous science of interpretable machine learning", "The Promise and Peril of Human Evaluation for Model Interpretability", and "Ideas on interpreting machine learning".

The bulk of the content rests in two sections: "A Machine Learning Interpretability Taxonomy for Applied Practitioners" and "Common Interpretability Techniques". The first section summarizes the taxonomy previously defined in the last paper listed above, "Ideas on interpreting machine learning". As the authors explain, the complexity of a machine learning model is directly related to its interpretability: "Generally, the more complex the model, the more difficult it is to interpret and explain. The number of weights or rules in a model—or its Vapnik–Chervonenkis dimension, a more formal measure—are good ways to quantify a model’s complexity. However, analyzing the functional form of a model is particularly useful for commercial applications such as credit scoring." Such functional forms can be broken down into various degrees of interpretability: (1) linear, monotonic functions provide high interpretability, (2) nonlinear, monotonic functions provide medium interpretability, and (3) nonlinear, nonmonotonic functions provide low interpretability.

The second section discusses what the authors describe as credible techniques for training interpretable models and for gaining insights into model behavior and mechanisms, in the context of the aforementioned taxonomy. While many of these techniques have existed for years, many others have been put forward only relatively recently in a flurry of research: "The section begins by discussing data visualization approaches because having a strong understanding of a dataset is a first step toward validating, explaining, and trusting models. We then present white-box modeling techniques, or models with directly transparent inner workings, followed by techniques that can generate explanations for the most complex types of predictive models such as model visualizations, reason codes, and variable importance measures. We conclude the section by discussing approaches for testing machine learning models for fairness, stability, and trustworthiness."

While this book is relatively short, this section provides dozens of links across the "References" and "OSS" (open source software) portions of the discussed techniques, including links to two additional freely available books, "The Elements of Statistical Learning (Second Edition)" and "Introduction to Data Mining (Second Edition)", as well as papers that, interestingly, are not included in the aforementioned "References" section at the end of this book, such as "Interpreting Blackbox Models via Model Extraction", "Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation", and "A Unified Approach to Interpreting Model Predictions".

Some potential readers may find these papers less than practical; they will instead be interested in the links provided to several GitHub repositories, including one specifically associated with this book, although I was disappointed to find that at least two of these repositories are associated with Oracle or IBM. Beyond these repositories, however, are links to several Jupyter notebooks that some will likely find of value, such as "Engineering Transparency into Your Machine Learning Model with Python and XGBoost", "Increase Transparency and Accountability in Your Machine Learning Project with Python and H2O", "Testing machine learning models for accuracy, trustworthiness, and stability with Python and H2O", and "Explain Your Predictive Models to Business Stakeholders using LIME with Python and H2O". Unfortunately, one link provided by the authors points directly to a tarball file; still, it makes for a decent list of follow-up reads.
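The LIME notebook listed above relies on the lime package with H2O; the core idea behind it, a local linear surrogate fitted to a black-box model's predictions, can be sketched in a few lines. This toy version is my own illustration using only NumPy and scikit-learn:

```python
# Toy version of the local-surrogate idea behind LIME: perturb an instance,
# query the black box, and fit a distance-weighted linear model whose
# coefficients serve as "reason codes" for that single prediction.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = X[:, 0] ** 2 + 2 * X[:, 1] + rng.normal(scale=0.1, size=300)

black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

x0 = np.array([1.0, 0.5, -0.2])                 # instance to explain
Z = x0 + rng.normal(scale=0.3, size=(200, 3))   # perturbed neighbors
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1))  # closer points count more

# Weighted linear surrogate approximating the black box near x0.
surrogate = Ridge().fit(Z, black_box.predict(Z), sample_weight=weights)
print("local coefficients:", np.round(surrogate.coef_, 2))
```

The surrogate is only locally faithful by design, which is exactly the trade-off the taxonomy's "approximate explanations" discussion anticipates.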
