Wednesday, June 13, 2018

The Confusion Matrix, Cost-Benefit Analysis, and Decision Making

The confusion matrix is aptly named.  Even though I have studied and used it many times, I still get confused and turned around trying to remember the meaning of sensitivity vs. specificity or false negatives vs. false positives.  Despite its confusing nature, understanding what the confusion matrix means and being able to use it correctly is important not only for those in data-related fields.  If we abstract the confusion matrix and understand it as a way of assessing trade-offs, or costs vs. benefits, then we can see that this way of thinking is valuable for everyone and can help us make better decisions in our daily lives.

The confusion matrix is typically presented as a four-cell table comparing a model's predictions against actual values.  The rows hold the predictions for a class (positive on top, negative on the bottom), and the columns hold the actuals (positive on the left, negative on the right).  When our model correctly predicts a positive, that is a true positive (top-left corner).  Similarly, when our model correctly predicts a negative, that is a true negative (bottom-right corner).  However, when our model predicts a positive but the actual is negative, that is called a false positive (the model falsely predicted positive).  And when our model predicts a negative but the actual is positive, that is called a false negative (the model falsely predicted negative).
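The four cells can be tallied directly from paired actuals and predictions.  Here is a minimal sketch in Python; the function name and the example data are my own invention for illustration:

```python
def confusion_matrix(actuals, predictions):
    """Count the four cells for binary labels, where 1 = positive, 0 = negative."""
    tp = fp = tn = fn = 0
    for actual, predicted in zip(actuals, predictions):
        if predicted == 1 and actual == 1:
            tp += 1  # predicted positive, actually positive
        elif predicted == 1 and actual == 0:
            fp += 1  # predicted positive, actually negative (falsely predicted positive)
        elif predicted == 0 and actual == 0:
            tn += 1  # predicted negative, actually negative
        else:
            fn += 1  # predicted negative, actually positive (falsely predicted negative)
    return {"tp": tp, "fp": fp, "tn": tn, "fn": fn}

# Made-up example data:
actuals     = [1, 1, 0, 0, 1, 0]
predictions = [1, 0, 0, 1, 1, 0]
print(confusion_matrix(actuals, predictions))  # {'tp': 2, 'fp': 1, 'tn': 2, 'fn': 1}
```

Libraries like scikit-learn provide this out of the box, but writing it by hand makes the top-left/bottom-right geometry of the table easier to remember.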

Ideally, our predictive models would predict positives when the actuals are positive and negatives when the actuals are negative, with no false positives or false negatives.  But that isn't reality.  Reality is this: (1) our models, even our best models, sometimes get it wrong (they produce false positives and false negatives); and (2) we have competing models that make different predictions and get different things right and wrong.

This presents us with a question: in the face of multiple models, which model do we choose, and why?  Enter the costs and benefits associated with each kind of correct and incorrect prediction.  But before we can assign those, we have to really understand what problem we are trying to solve.  What are we trying to do?  What is the model supposed to help us do?  What, in the end, are we trying to optimize?  Once we have a good understanding of this, we can assign costs and benefits to each kind of prediction and outcome, and then choose the model that offers the greatest benefit at the least cost.

For example, suppose we have medical test A and medical test B for detecting a certain disease.  Perhaps test A has more true positives and more true negatives than test B, so it seems we ought to choose test A.  However, suppose test A also has far more false negatives than test B, and a false negative leads to the death of the individual (the disease is present but missed by the test, and so goes untreated).  Then maybe test B is better.  But suppose test B is prohibitively expensive while test A is relatively cheap.  Then maybe, all things considered, we are back to choosing test A.

The point here is that we have to be clear about what we are trying to optimize, and there will be trade-offs (e.g., cost vs. lives saved) that we must weigh in order to optimize it.  People may disagree about how to assign costs and benefits, or about what ought to be optimized in the first place.  In such cases, some sort of compromise will likely be required (e.g., saving costs while also saving lives).  Still, the very act of thinking critically about what we are trying to optimize, and what the relevant costs and benefits are, is a skill worth developing, and it is exactly what effective decision making requires in all parts of life.  A confusion matrix forces us to think in this way and to develop this skill.

While the confusion matrix can perhaps tell us which model is statistically best, it cannot tell us which model is best from a cost-benefit standpoint in a given business, medical, or other context.  We have to supply the criteria for what counts as a cost and what counts as a benefit.  But thankfully, once we are clear on that, we can figure out which model optimizes the cost-benefit trade-off.  And once that model has been chosen, we can confidently use its predictions to guide our decisions.  We will also have benefited from being forced to deeply understand the nature of the problem we are trying to solve.  This exercise makes us better thinkers and better decision makers, and hopefully, makes the world a little less confusing to us and others.
