Neural Network Train-Validate-Test Stopping

I wrote an article titled “Neural Network Train-Validate-Test Stopping” in the May 2015 issue of Visual Studio Magazine.

One way to think of a neural network is as a complex math function that has many numeric constants, called weights and biases. To make a useful neural network, you must find good values for the weights and biases. This process is called training the neural network.
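To make the "complex math function" idea concrete, here is a minimal sketch of a 2-4-1 feed-forward network in Python with NumPy. The architecture, names, and tanh activation are my own illustrative assumptions, not from the article; the point is only that the output is fully determined by the inputs plus the numeric weights and biases.

```python
import numpy as np

def forward(x, w_ih, b_h, w_ho, b_o):
    # A neural network is just a math function: given fixed weights
    # and biases, it maps an input vector to an output value.
    h = np.tanh(x @ w_ih + b_h)   # hidden-layer activations
    return h @ w_ho + b_o         # output value(s)

rng = np.random.default_rng(0)
x = np.array([0.5, -1.2])              # one input with 2 features
w_ih = rng.normal(size=(2, 4))         # input-to-hidden weights
b_h = rng.normal(size=4)               # hidden biases
w_ho = rng.normal(size=(4, 1))         # hidden-to-output weights
b_o = rng.normal(size=1)               # output bias
y = forward(x, w_ih, b_h, w_ho, b_o)
print(y)
```

With random weights the output is meaningless; training is the search for weight and bias values that make outputs like `y` match known correct values.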

Training is accomplished by using a set of so-called training data that has known input values and known correct output values. Training searches for values of the weights and biases so that the NN’s computed output values closely match the known correct output values in the training data.

One of the main challenges of NN training is that, because NNs are so complex, it’s possible to find values for the weights and biases so that the computed output values exactly match the target training output values, yet when the NN is presented with new, previously unseen data, it predicts very poorly. This phenomenon is called over-fitting.

There are several strategies to try to deal with over-fitting. One of them is train-validate-test stopping. The idea is to divide your data into three groups. The training data is used to find values for the weights and biases. Every now and then during the search, the current weights and biases are applied to the validation set. When the error on the validation set starts to increase, over-fitting is beginning to happen, so you stop training. When training is finished, the test set is used to estimate the final accuracy of the model.
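The train-validate-test loop described above can be sketched in a few lines of Python. For simplicity this uses gradient descent on a one-weight linear model rather than a full neural network, and the synthetic data, split sizes, learning rate, and check interval are all my own illustrative assumptions; the stopping logic is the part that mirrors the text.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: y = 3x + noise.
X = rng.uniform(-1, 1, size=(300, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.3, size=300)

# Divide the data into three groups.
X_tr, y_tr = X[:200], y[:200]     # training: find weights
X_va, y_va = X[200:250], y[200:250]  # validation: decide when to stop
X_te, y_te = X[250:], y[250:]     # test: estimate final accuracy

def mse(w, b, X, y):
    return float(np.mean((X[:, 0] * w + b - y) ** 2))

w, b = 0.0, 0.0
best_w, best_b = w, b
best_va = float("inf")
lr = 0.05
for epoch in range(1000):
    # One gradient-descent step on the training data.
    err = X_tr[:, 0] * w + b - y_tr
    w -= lr * 2.0 * np.mean(err * X_tr[:, 0])
    b -= lr * 2.0 * np.mean(err)
    # Every now and then, apply the current weights to the validation set.
    if epoch % 10 == 0:
        va = mse(w, b, X_va, y_va)
        if va > best_va:
            break                     # validation error rising: stop training
        best_va, best_w, best_b = va, w, b

# When finished, use the test set to estimate the model's accuracy.
test_err = mse(best_w, best_b, X_te, y_te)
print("test MSE:", test_err)
```

Note that the weights reported at the end are the ones saved at the best validation check, not the ones from the final step, so stopping a little late does no harm.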
