Chloe McAree (McAteer)
What-If I could explain my ML models better?

When presenting AI solutions, the machine learning model can seem like a black box, and people often want to understand more about what is going on behind the scenes. How can we break down and explain the model without going too deep into model interpretation?

With this question in mind, I wanted to look into tools that could make machine learning processes more transparent: tools that can explain, for example, how interactions between features in a dataset affect a model’s overall predictions, and show how changes to individual values affect the accuracy of the predictions it makes.

Introducing the What-If-Tool 🔎

While looking for ways to better understand what’s happening in a model, I came across the What-If-Tool.

The What-If-Tool is a user interface that has been created by Google to allow people to visually interrogate machine learning models. It can run in TensorBoard, Jupyter and Colaboratory notebooks and can work on both classification and regression models.
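
As a rough sketch of how the tool is typically launched inside a Jupyter notebook (assuming a trained TensorFlow estimator, its feature_spec and a list of tf.Example protos called test_examples; the names here are illustrative):

```python
# Minimal sketch of embedding the What-If-Tool in a Jupyter notebook.
# Assumes `test_examples` is a list of tf.Example protos and `estimator`
# is a trained TensorFlow estimator with a matching `feature_spec`.
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

config_builder = (
    WitConfigBuilder(test_examples)
    .set_estimator_and_feature_spec(estimator, feature_spec)
    .set_label_vocab(['negative', 'positive'])  # illustrative class labels
)

# Renders the interactive UI directly in the notebook output cell.
WitWidget(config_builder, height=800)
```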

Throughout this blog I am going to take you through what this tool can offer and the different use cases I feel it has.

Viewing Features 👀

Within the What-If-Tool, there is a features tab where you can get great insight into both the numerical and categorical data that your model is working with.

This can give an understanding of the distribution of values in a dataset. It displays important information like the value count and the percentage of missing values. For numerical data the tool shows things like the mean, standard deviation, min and max, and for categorical data you can view the number of unique values per column, the top values and the frequency of those top values.
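
For a sense of what this summary covers, here is roughly the equivalent information computed by hand with pandas (a sketch assuming your dataset is already loaded in a DataFrame called df):

```python
import pandas as pd

# Assumes `df` is a pandas DataFrame holding the same data the model sees.
numeric_summary = df.describe()         # count, mean, std, min, max per numeric column
missing_pct = df.isna().mean() * 100    # percentage of missing values per column

# For categorical columns: number of unique values, most common value and its frequency.
categorical_summary = df.select_dtypes(include='object').describe()
```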

Having this breakdown of information is very handy, as it can flag unexpected values and clearly highlights obvious outliers in the data in red.

Datapoint Editor 📍

The tool also includes a datapoint editor, which is extremely useful for data analysis.

Inside the datapoint editor tab, all of the datapoints are displayed in a graph on the right-hand panel.

You are then able to dive into any particular data point and view all of the corresponding features for that data point, which are displayed on the left-hand side panel.

From here, you can change any of the feature values, re-run inference and see how that change affects the model’s prediction. Being able to tweak these values and show someone how they affect the results is a great way to demonstrate how the ML model works and how it responds to the data it receives.
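
Conceptually, editing a datapoint and re-running inference is the same as the small experiment below (a hedged sketch using a scikit-learn classifier; model, X_test and the feature index are assumed names for illustration):

```python
# Assumes `model` is a trained scikit-learn classifier and `X_test` is a
# NumPy array of feature values; the index and feature position are illustrative.
datapoint = X_test[0].copy()
print("Original prediction:", model.predict_proba([datapoint]))

# Tweak a single feature value, exactly like editing it in the datapoint editor.
datapoint[2] = datapoint[2] * 1.5
print("Prediction after edit:", model.predict_proba([datapoint]))
```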

To understand the model further, we can go one step deeper and look at what small set of changes would cause the model to alter its decision completely. This is done by viewing a datapoint’s “counterfactual”: the most similar datapoint that the model places in the other class. Highlighting the datapoint from the opposite class whose values are closest to the one you have selected is extremely beneficial, because it shows which features have the greatest impact on the model’s predictions.
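
A simplified version of that nearest-counterfactual search could look something like this (assuming a NumPy feature matrix X and the model’s predicted labels preds; the real tool handles distance and normalisation more carefully):

```python
import numpy as np

def find_counterfactual(X, preds, index):
    """Return the index of the most similar datapoint with a different predicted class.

    A simplified sketch using L1 distance over the raw feature values.
    """
    other_class = np.where(preds != preds[index])[0]      # datapoints the model classifies differently
    distances = np.abs(X[other_class] - X[index]).sum(axis=1)
    return other_class[np.argmin(distances)]
```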

Measuring Performance 📏

Within the “Performance &amp; Fairness” tab, the confusion matrix for the model is displayed, showing the number of true positives/negatives and false positives/negatives to describe the model’s performance. The What-If-Tool also provides a slider to adjust the classification threshold and see how it affects the confusion matrix values, and there is an option to change the cost ratio, to weigh the cost of false positives relative to false negatives.
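
To make concrete what the threshold slider is doing, here is a small scikit-learn sketch that recomputes the confusion matrix at a few thresholds (y_true and the predicted probabilities y_scores are assumed to exist already):

```python
from sklearn.metrics import confusion_matrix

# Assumes `y_true` holds the true binary labels and `y_scores` the model's
# predicted probabilities for the positive class.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (y_scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print(f"threshold={threshold}: TP={tp} FP={fp} TN={tn} FN={fn}")
```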

Being able to experiment with these values and view the outcomes instantly is very useful for a developer looking to fully analyse the model’s performance.

Fairness 👩‍⚖

We want to make sure the model we have created is fair. Any bias in the training data will be reflected in the trained model. The What-If-Tool allows us to check how fair the model is by slicing the data by one of the features, showing the model’s performance for each value of that feature.

For example, in the image below I have chosen to slice by gender and can see that the model is actually more accurate when predicting for females than for males.
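
Slicing like this boils down to computing the metric separately per group, roughly as in the sketch below (assuming a DataFrame results with illustrative gender, label and prediction columns):

```python
import pandas as pd

# Assumes `results` has one row per test datapoint with illustrative columns:
# 'gender', 'label' (true class) and 'prediction' (model output).
accuracy_by_gender = (
    results.assign(correct=results['label'] == results['prediction'])
           .groupby('gender')['correct']
           .mean()
)
print(accuracy_by_gender)
```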

Overall findings

Benefits to non-technical people:

I think the What-If-Tool is extremely beneficial for explaining models to non-technical people, which could include a wide range of stakeholders: you can show them the data that’s been used, demonstrate how changes to datapoints impact the output and, finally, give them a visual breakdown of the overall model’s performance.

Benefits to developer:

I feel this tool has a lot of value for developers as well: it allows us to visually compare multiple models, gives us a way to easily experiment with confusion matrices, makes it quick to try things out without writing a lot of code and, overall, acts as a good sanity check for the model we have created.

I find the features tab of the What-If-Tool particularly useful: it gives a full breakdown of the data we are working with and can quickly and easily flag any outliers in the dataset.

Drawbacks:

The one drawback I found when using this tool is that most of the documentation and tutorials for getting it set up in a notebook are written for TensorFlow models. However, it is possible to use the tool with any Python-accessible model; it just requires a little more of a workaround. An example of using a non-TensorFlow model can be found here.
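
For reference, that workaround usually means wrapping the model in a custom prediction function and handing it to the config builder; a hedged sketch for a scikit-learn classifier might look like this (clf, test_examples and feature_names are assumptions here):

```python
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# Assumes `clf` is any Python model (e.g. a scikit-learn classifier),
# `test_examples` is a list of tf.Example protos and `feature_names` lists
# the numeric features in the order the model expects. All names are illustrative.
def custom_predict(examples_to_infer):
    # Convert each tf.Example into a plain feature vector for the model.
    inputs = [
        [ex.features.feature[name].float_list.value[0] for name in feature_names]
        for ex in examples_to_infer
    ]
    # Return one list of class probabilities per example.
    return clf.predict_proba(inputs)

config_builder = WitConfigBuilder(test_examples).set_custom_predict_fn(custom_predict)
WitWidget(config_builder, height=800)
```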

Overall, I think this is a very useful tool and would recommend everyone check it out; there are loads of useful examples of its implementation in the official What-If-Tool documentation.