Mean Squared Error versus Predictive Accuracy

The other day I was working on binary classification, that is, predicting data that can be either 0 or 1. Most math-based classification models don't predict exactly 0 or 1; instead they predict a value between 0.0 and 1.0. Two common ways to measure the quality of a prediction model are to compute the mean squared error (where smaller values are better and 0.0 means perfect prediction) and to compute the predictive accuracy (the percentage of correct predictions). It's quite possible for one model to have a better mean squared error than a second model but a worse predictive accuracy.

For example, consider two models, A and B. The actual Y values in some data are { 1, 0, 1, 0, 1 }. Model A predicts { 0.6, 0.4, 0.6, 0.4, 0.1 }. Model B predicts { 0.9, 0.1, 0.9, 0.6, 0.4 }. Model A has a mean squared error of 0.29 while model B has a mean squared error of 0.15, indicating that model B is better. For predictive accuracy I'm using the rule that a prediction value less than 0.5 counts as a prediction of 0, and a prediction value greater than 0.5 counts as a prediction of 1. So model A's predictions are { correct, correct, correct, correct, wrong } while model B's predictions are { correct, correct, correct, wrong, wrong }. Model A correctly predicts 0.80 of the data while model B correctly predicts only 0.60, indicating that model A is better.

In other words, according to mean squared error model B is better, but according to predictive accuracy model A is better. The disagreement makes sense when you look at how each measure works: mean squared error punishes a prediction in proportion to how far off it is (model A's one wrong prediction of 0.1 against an actual value of 1 contributes a squared error of 0.81 all by itself), while predictive accuracy only cares about which side of 0.5 each prediction falls on. The moral is that mean squared error and predictive accuracy do not always agree when it comes to identifying an optimal prediction model.
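If you want to verify the numbers, here's a minimal Python sketch. The two helper functions are my own, not from any particular library; they just implement the definitions exactly as described above.

# Reproduce the numbers from the example above.

def mean_squared_error(actual, predicted):
    # Average squared difference; 0.0 would be a perfect prediction.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

def predictive_accuracy(actual, predicted, threshold=0.5):
    # Fraction of predictions on the correct side of the threshold.
    hits = sum(1 for a, p in zip(actual, predicted)
               if (a == 0 and p < threshold) or (a == 1 and p > threshold))
    return hits / len(actual)

actual  = [1, 0, 1, 0, 1]
model_a = [0.6, 0.4, 0.6, 0.4, 0.1]
model_b = [0.9, 0.1, 0.9, 0.6, 0.4]

print(f"{mean_squared_error(actual, model_a):.2f}")   # 0.29 -- worse MSE
print(f"{mean_squared_error(actual, model_b):.2f}")   # 0.15 -- better MSE
print(f"{predictive_accuracy(actual, model_a):.2f}")  # 0.80 -- better accuracy
print(f"{predictive_accuracy(actual, model_b):.2f}")  # 0.60 -- worse accuracy

Note that, like the rule in the text, this accuracy function leaves a prediction of exactly 0.5 uncounted; how to break that tie is a design choice.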
