I wrote an article titled “How To Use Resilient Back Propagation To Train Neural Networks” in the March 2015 issue of Visual Studio Magazine. See http://visualstudiomagazine.com/articles/2015/03/01/resilient-back-propagation.aspx.
A neural network (NN) is a software system that makes predictions based on data. For example, a NN can predict the winner of a basketball game based on data such as each team’s winning percentage, the average winning percentage of each team’s opponents, and so on.
In essence, a NN is a complicated math function with many numeric constants called weights. Training a NN is the process of using historical data, with known input and output values, to find the values for the NN weights so that the computed output values closely match the known output values in the training data.
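To make the "complicated math function with weights" idea concrete, here is a minimal sketch in Python. The tiny two-input network, the tanh activation, and the made-up data are all illustrative assumptions, not from the article; the point is only that the output is a deterministic function of the inputs and the weights, and that training means minimizing the error between computed and known outputs.

```python
import math

def nn_output(x1, x2, weights):
    # A toy "network": weights = [w1, w2, bias]; tanh is a common
    # activation function. Real NNs have many more weights and layers.
    w1, w2, b = weights
    return math.tanh(w1 * x1 + w2 * x2 + b)

def mean_squared_error(data, weights):
    # Training searches for weights that make this error small over
    # historical data with known (input, input, target) values.
    return sum((nn_output(x1, x2, weights) - target) ** 2
               for (x1, x2, target) in data) / len(data)

# Hypothetical historical data: (team stat, opponent stat, known result).
data = [(0.65, 0.55, 1.0), (0.45, 0.60, 0.0)]
err = mean_squared_error(data, [0.1, -0.2, 0.3])
```

Different weight values give a different error; a training algorithm is simply a strategy for driving that error down.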
There are several algorithms that can be used to train a NN, each with pros and cons. The most common technique is called back-propagation. Back-propagation is easy to implement and is usually very fast compared to other training techniques. However, back-propagation requires the user to supply two parameter values, the “learning rate” and the “momentum factor”, and it is extremely sensitive to them. For example, with a learning rate of 0.05 and a momentum of 0.01 you might get a great NN, but with a learning rate of 0.04 and a momentum of 0.02 you might get a terrible NN.
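The role the learning rate and momentum factor play can be sketched in a few lines. This is a hedged illustration of the standard back-propagation weight update, assuming the error gradient for each weight has already been computed (the gradient computation itself is the harder part and is omitted here); the function and parameter names are mine, not from the article.

```python
def update_weights(weights, gradients, prev_deltas,
                   learn_rate=0.05, momentum=0.01):
    # One back-propagation update step. learn_rate scales the step
    # against the gradient; momentum adds a fraction of the previous
    # step. Both values strongly affect whether training succeeds.
    new_weights, new_deltas = [], []
    for w, g, prev in zip(weights, gradients, prev_deltas):
        delta = -learn_rate * g + momentum * prev
        new_weights.append(w + delta)
        new_deltas.append(delta)
    return new_weights, new_deltas

# One step for two hypothetical weights, starting with zero deltas.
w, d = update_weights([0.4, -0.3], [0.2, -0.1], [0.0, 0.0])
```

Because `learn_rate` multiplies every gradient directly, even a small change to it changes every step the algorithm takes, which is why the 0.05-versus-0.04 difference mentioned above can matter so much.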
Resilient back-propagation (Rprop) is a fascinating variation of regular back-propagation. It does not require the user to specify a learning rate or a momentum factor. In some of my experiments, Rprop works incredibly well.
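The core Rprop idea can be sketched as follows. This is a simplified illustration, not the article's full implementation: each weight gets its own step size that adapts based only on the *sign* of the gradient, which is why no global learning rate or momentum is needed. The increase/decrease factors (1.2 and 0.5) and the step-size limits are the values commonly used in the Rprop literature.

```python
def rprop_update(weights, grads, prev_grads, steps,
                 incr=1.2, decr=0.5, step_min=1e-6, step_max=50.0):
    # One simplified Rprop step. Only the sign of each gradient is
    # used; the magnitude of the move comes from the per-weight step.
    new_w, new_steps = [], []
    for w, g, pg, s in zip(weights, grads, prev_grads, steps):
        if g * pg > 0:            # gradient sign unchanged: speed up
            s = min(s * incr, step_max)
        elif g * pg < 0:          # sign flipped: overshot, slow down
            s = max(s * decr, step_min)
        # Move opposite the gradient by the adapted step size.
        if g > 0:
            w -= s
        elif g < 0:
            w += s
        new_w.append(w)
        new_steps.append(s)
    return new_w, new_steps

# One step for a single hypothetical weight whose gradient kept its sign.
w, s = rprop_update([0.5], [0.2], [0.1], [0.1])
```

The extra bookkeeping, a previous gradient and an individual step size for every weight, plus additional special-case rules in the full algorithm, is part of what makes Rprop noticeably harder to implement than plain back-propagation.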
I don’t see Rprop used very much. I suspect this is partly because Rprop is considerably harder to implement than regular back-propagation.