The other day I was working with binary classification, that is, predicting data which can be either 0 or 1. Most math-based classification models will not predict exactly 0 or 1; rather, they'll predict a value between 0.0 and 1.0. Two common ways to measure the quality of a prediction model are to compute the mean squared error (where smaller values are better and 0.0 means perfect prediction) and to compute the predictive accuracy (the percentage of correct predictions). It's quite possible for one model to have a better mean squared error than a second model, but a worse predictive accuracy than the second model.

For example, consider two models, A and B. The actual Y values in some data are { 1, 0, 1, 0, 1 }. Model A predicts { 0.6, 0.4, 0.6, 0.4, 0.1 }. Model B predicts { 0.9, 0.1, 0.9, 0.6, 0.4 }. Model A has a mean squared error of 0.29 while model B has a mean squared error of 0.15, indicating that model B is better.

For predictive accuracy I'm using the rule that if the prediction value is less than 0.5 the prediction is taken as 0, and if the prediction value is greater than 0.5 the prediction is taken as 1. So model A's predictions are { correct, correct, correct, correct, wrong }. Model B's predictions are { correct, correct, correct, wrong, wrong }. Model A correctly predicts 0.80 of the data while model B correctly predicts only 0.60, indicating that model A is better.

In other words, according to mean squared error model B is better, but according to predictive accuracy model A is better. The moral is that mean squared error and predictive accuracy do not always agree when it comes to identifying an optimal prediction model.
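The two metrics can be checked with a few lines of code. This is a minimal sketch using the data from the example above; the function names `mse` and `accuracy` are just illustrative, and I've assumed a prediction of exactly 0.5 counts as class 1 (the boundary case isn't specified above).

```python
# Compare mean squared error and predictive accuracy for two models
# on the same data. Values are taken from the example in the text.
actual  = [1, 0, 1, 0, 1]
model_a = [0.6, 0.4, 0.6, 0.4, 0.1]
model_b = [0.9, 0.1, 0.9, 0.6, 0.4]

def mse(actual, preds):
    # Mean of the squared differences; 0.0 would be a perfect model.
    return sum((a - p) ** 2 for a, p in zip(actual, preds)) / len(actual)

def accuracy(actual, preds):
    # A prediction below 0.5 counts as class 0, otherwise class 1
    # (treating exactly 0.5 as class 1 is an arbitrary choice here).
    return sum((p >= 0.5) == (a == 1)
               for a, p in zip(actual, preds)) / len(actual)

print(round(mse(actual, model_a), 2))  # 0.29 -- worse MSE
print(round(mse(actual, model_b), 2))  # 0.15 -- better MSE
print(accuracy(actual, model_a))       # 0.8  -- better accuracy
print(accuracy(actual, model_b))       # 0.6  -- worse accuracy
```

Model B wins on mean squared error while model A wins on accuracy, reproducing the disagreement described above.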