Neural Network Resilient Back-Propagation

Resilient back-propagation (RPROP) is a neural network training algorithm — you present a neural network with training data that has known, correct output values (for a given set of input values), and then RPROP finds the values of the network’s weights and biases. You can then use the trained model to make predictions for new, previously unseen input values.

The most common neural network training algorithm is back-propagation, which has many variations — regularization, norm constraints, dropout, batch / online / mini-batch training, and so on.

RPROP is an interesting variation of batch back-propagation. One of the key differences between RPROP and standard back-prop is that in RPROP each weight and bias has its own variable, implied learning rate — as opposed to standard back-prop, which has one fixed learning rate for all weights and biases. Put differently, with RPROP you don’t specify a learning rate. Instead, each weight has a delta value that increases when the gradient doesn’t change sign between passes (meaning you’re headed in the correct direction) and decreases when the gradient does change sign.

In high-level pseudo-code, for one pass through the training data, RPROP is something like:

for each weight and bias
  if prev grad and curr grad have same sign
    increase the previously used delta
    update weight using new delta
  else if prev and curr have different signs
    decrease the previously used delta
    revert weight to prev value
  end if
  prev delta = new delta
  prev gradient = curr gradient
end-for
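
To make that concrete, here is a minimal sketch of one RPROP update pass in raw Python, assuming the gradients have already been accumulated over the entire batch of training data (RPROP is a batch technique). The function name, the parallel flat lists, and the constants (increase factor 1.2, decrease factor 0.5, delta bounds) follow the values commonly published for RPROP; this is an illustration of the idea, not the exact code from my demo.

ETA_PLUS = 1.2      # factor to increase a delta (gradient kept its sign)
ETA_MINUS = 0.5     # factor to decrease a delta (gradient changed sign)
DELTA_MAX = 50.0    # largest allowed delta
DELTA_MIN = 1.0e-6  # smallest allowed delta

def sign(x):
  if x > 0.0: return 1.0
  if x < 0.0: return -1.0
  return 0.0

def rprop_update(weights, grads, prev_grads, deltas, prev_steps):
  # all parameters are parallel lists of floats, one entry per weight or bias
  for i in range(len(weights)):
    if prev_grads[i] * grads[i] > 0.0:      # same sign: take a bigger step
      deltas[i] = min(deltas[i] * ETA_PLUS, DELTA_MAX)
      step = -sign(grads[i]) * deltas[i]
      weights[i] += step
      prev_steps[i] = step
    elif prev_grads[i] * grads[i] < 0.0:    # sign flipped: shrink step, back-track
      deltas[i] = max(deltas[i] * ETA_MINUS, DELTA_MIN)
      weights[i] -= prev_steps[i]           # revert the previous weight change
      grads[i] = 0.0                        # so the next pass skips delta adjustment
    else:                                   # one of the gradients is zero
      step = -sign(grads[i]) * deltas[i]
      weights[i] += step
      prev_steps[i] = step
    prev_grads[i] = grads[i]                # current gradient becomes previous

A full demo wraps this in a loop: compute the batch gradients for every weight and bias, call rprop_update, and repeat until a max-epochs or error condition is met. The deltas are typically initialized to something like 0.1, and prev_grads and prev_steps to 0.0.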

The algorithm’s details are quite tricky. I suspect that even though RPROP is often more effective than regular back-prop, it isn’t used very often because it’s tricky to implement, and because the authors of RPROP created several improved versions of the basic RPROP algorithm, which caused confusion.

Anyway, just for kicks, I coded up an RPROP demo using raw Python. The RPROP version gave slightly better results than the standard back-prop version.

The bottom line is that, in most situations, the relatively minor improvement that RPROP gives isn’t worth the implementation effort. However, like many things in machine learning, I believe it’d be a mistake to dismiss RPROP entirely — algorithms have a way of returning.


One Response to Neural Network Resilient Back-Propagation

  1. PGT-ART says:

    Sounds interesting. Perhaps this can sort out the nodes that are more often right,
    as compared to nodes that flip around a lot and thus could be removed (to minimize overfitting).
