What Are Progressive Neural Networks for Transfer Learning?

Deep neural networks have made incredible progress in tabular data classification and regression, natural language processing, and image recognition. But one of the weaknesses of DNNs is that they are very problem-specific. A DNN trained to play chess doesn’t do well when it’s trying to play checkers.

The idea of reusing information from a DNN trained on one task for a second, related task is called transfer learning. There are many approaches to transfer learning. One technique is to use a progressive neural network (PNN).

The image below comes from the source paper that first described PNNs: “Progressive Neural Networks” (2016) by A. Rusu, N. Rabinowitz, et al.



Suppose your goal is to create a DNN that plays Othello (aka Reversi). And suppose you have two existing deep neural networks that have been trained to play chess and checkers.

The chess (DNN1) and checkers (DNN2) networks each have two hidden layers (h1, h2). In the diagram, they are in the left two columns. The third untrained goal network (DNN3) for Othello is on the right.

You construct the Othello DNN3 network so that it indirectly uses the trained chess DNN1 and checkers DNN2 networks: the hidden-layer outputs of DNN1 and DNN2 feed into DNN3 through lateral connections, indicated by the diagonal lines. Now you train the Othello DNN3 network while the DNN1 and DNN2 weights stay frozen. This retains the information in the chess and checkers networks, and partially transfers that information into the new Othello DNN3 network.
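
Here is a minimal PyTorch sketch of this architecture. Everything in it is an illustrative assumption rather than code from the paper: the class names, the layer sizes, and the 64-value board encoding are made up, and the plain linear lateral connections are a simplification of the paper's adapters (described below).

import torch
import torch.nn as nn

class Column(nn.Module):
  # A previously trained column with two hidden layers (h1, h2).
  def __init__(self, n_in, n_h1, n_h2, n_out):
    super().__init__()
    self.fc1 = nn.Linear(n_in, n_h1)
    self.fc2 = nn.Linear(n_h1, n_h2)
    self.fc3 = nn.Linear(n_h2, n_out)

  def forward(self, x):
    h1 = torch.relu(self.fc1(x))
    h2 = torch.relu(self.fc2(h1))
    return h1, h2, self.fc3(h2)

class ProgressiveColumn(nn.Module):
  # The new goal column. Besides its own weights, it has one lateral
  # connection per source column per layer -- the diagonal lines in
  # the diagram. src_h1_dims / src_h2_dims list the hidden-layer
  # sizes of the frozen source columns.
  def __init__(self, n_in, n_h1, n_h2, n_out, src_h1_dims, src_h2_dims):
    super().__init__()
    self.fc1 = nn.Linear(n_in, n_h1)
    self.fc2 = nn.Linear(n_h1, n_h2)
    self.fc3 = nn.Linear(n_h2, n_out)
    self.lat2 = nn.ModuleList([nn.Linear(d, n_h2) for d in src_h1_dims])
    self.lat3 = nn.ModuleList([nn.Linear(d, n_out) for d in src_h2_dims])

  def forward(self, x, src_h1s, src_h2s):
    h1 = torch.relu(self.fc1(x))
    z2 = self.fc2(h1)
    for lat, h in zip(self.lat2, src_h1s):  # h1 of DNN1/DNN2 feeds h2 of DNN3
      z2 = z2 + lat(h)
    h2 = torch.relu(z2)
    z3 = self.fc3(h2)
    for lat, h in zip(self.lat3, src_h2s):  # h2 of DNN1/DNN2 feeds the output
      z3 = z3 + lat(h)
    return z3

# Frozen, previously trained chess and checkers columns (weights loaded elsewhere).
chess = Column(64, 100, 100, 1)
checkers = Column(64, 100, 100, 1)
for p in list(chess.parameters()) + list(checkers.parameters()):
  p.requires_grad = False

othello = ProgressiveColumn(64, 100, 100, 1, [100, 100], [100, 100])

x = torch.randn(32, 64)  # a batch of hypothetical board encodings
with torch.no_grad():
  c_h1, c_h2, _ = chess(x)
  k_h1, k_h2, _ = checkers(x)
out = othello(x, [c_h1, k_h1], [c_h2, k_h2])  # only othello's weights get gradients

During training, only the Othello column's parameters go to the optimizer, so the chess and checkers knowledge is preserved while the lateral connections learn how much of it to reuse.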

As usual with deep neural network architectures, the devil is in the details. The original PNN research paper uses “adapters” to scale and project the information flowing along the lateral connections into the goal DNN.
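
As I read the paper, the adapter for fully connected columns multiplies the incoming lateral activations by a learned scalar and then passes them through a small hidden layer (a linear projection plus a nonlinearity) that maps them to the width the goal layer expects. A rough sketch of that idea, where the names and the 0.01 initial scale are my assumptions:

import torch
import torch.nn as nn

class Adapter(nn.Module):
  # Sketch of a PNN adapter: scale the incoming lateral activations
  # by a learned scalar, then project them through a small nonlinear
  # layer to the width the goal column expects.
  def __init__(self, n_in, n_out):
    super().__init__()
    self.alpha = nn.Parameter(torch.tensor(0.01))  # learned scale
    self.proj = nn.Linear(n_in, n_out)

  def forward(self, h_prev):
    return torch.relu(self.proj(self.alpha * h_prev))

In the sketch above, each plain nn.Linear lateral connection would be replaced by one of these adapters. The learned scalar lets the goal column tune how strongly, if at all, each source column influences it.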

This is all interesting stuff. But based on my experience, I’m highly skeptical about all existing neural network transfer learning techniques, including the PNN architecture. I suspect that neural networks are inherently problem-specific and that techniques designed to transfer information from one problem to another just don’t work well.

But I could be wrong, and progressive neural networks are interesting in their own right.



British artist Roger Dean (b. 1944) has a style that is often labeled “progressive”. Here are three interesting video game box cover art works by Dean. Left: “Terrorpods” (1987). Center: “Shadow of the Beast II” (1990). Right: “Obliterator” (1988).

