I wrote an article titled “Parameter Sweeps, or How I Took My Neural Network for a Test Drive” in the November 2015 issue of Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2015/11/01/parameter-sweeps.aspx.
It’s a bit difficult to explain what a neural network parameter sweep is, not because the idea is conceptually difficult, but rather because there are several interrelated ideas involved.
One way to think of a neural network is as a complicated math function that can make predictions. A neural network accepts numeric input values and emits numeric output values.
The neural network output values are calculated using a set of numeric constants called weights and biases.
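To make the "complicated math function" idea concrete, here is a minimal sketch of my own (not the code from the article): a tiny feed-forward network with two inputs, two hidden nodes, and one output. All the weight, bias, and input values are made up for illustration.

```python
import math

def forward(x, w_ih, b_h, w_ho, b_o):
    # hidden layer: weighted sum of inputs plus bias, tanh activation
    h = [math.tanh(sum(xi * w for xi, w in zip(x, col)) + b)
         for col, b in zip(w_ih, b_h)]
    # output layer: weighted sum of hidden values plus bias
    return [sum(hi * w for hi, w in zip(h, col)) + b
            for col, b in zip(w_ho, b_o)]

x = [1.0, 2.0]                    # two numeric input values
w_ih = [[0.1, 0.2], [0.3, 0.4]]   # input-to-hidden weights (two hidden nodes)
b_h = [0.1, -0.1]                 # hidden-node biases
w_ho = [[0.5, -0.5]]              # hidden-to-output weights (one output)
b_o = [0.2]                       # output bias
y = forward(x, w_ih, b_h, w_ho, b_o)
print(y)
```

The point is just that, given fixed weights and biases, the network is nothing more than a deterministic function from numeric inputs to numeric outputs.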
The values of a neural network’s weights are determined by using a set of training data that has known input values and known, correct output values. Different weight values are tried in order to find a set that produces calculated output values very close to the known, correct output values in the training data.
The process of finding a neural network’s weight values is called training the network. As it turns out, the weight values produced by training are often very sensitive to the values of the training parameters, such as the learning rate, the momentum, and the maximum number of training epochs.
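Here is an illustrative sketch of training in its simplest possible form (my own example, not the article's algorithm): finding a single weight w so that y = w * x matches known training data, using plain stochastic gradient descent. The data and learning rate are made-up assumptions.

```python
def train(data, lr, epochs):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            err = w * x - y    # calculated output minus known correct output
            w -= lr * err * x  # nudge the weight to reduce the squared error
    return w

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # known inputs/outputs (y = 2x)
w = train(data, lr=0.05, epochs=100)
print(round(w, 3))  # converges close to 2.0
```

A real neural network has hundreds or thousands of weights instead of one, but the idea is the same: repeatedly adjust the weights so the calculated outputs move closer to the known correct outputs.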
The process of trying different training parameter values in order to find a good set of neural network weight values is called a parameter sweep.
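A sweep can be sketched as a grid search over training parameter combinations. This example (again my own illustration, reusing the tiny y = w * x model and made-up parameter grids) trains once per combination and keeps the training parameters that give the lowest error on the training data.

```python
import itertools

def train(data, lr, epochs):
    w = 0.0
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x  # gradient step toward known outputs
    return w

def error(data, w):
    # sum of squared differences between calculated and known outputs
    return sum((w * x - y) ** 2 for x, y in data)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
best = None
# sweep every combination of candidate learning rates and epoch counts
for lr, epochs in itertools.product([0.001, 0.01, 0.05], [10, 50, 100]):
    w = train(data, lr, epochs)
    e = error(data, w)
    if best is None or e < best[0]:
        best = (e, lr, epochs, w)
print("best lr=%g epochs=%d error=%.6f" % (best[1], best[2], best[0]))
```

With a real network each training run is expensive, which is why a sweep is usually the most time-consuming part of the whole process.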