I wrote an article titled “Neural Network Regression” in the March 2016 issue of Microsoft’s MSDN Magazine. See https://msdn.microsoft.com/en-us/magazine/mt683800.aspx.
A regression problem is one where you predict a numeric value from a set of numeric and non-numeric variables. For example, you might want to predict the annual income of a person based on their years of education, age, sex, and so on.
There are several forms of regression. The simplest form is linear regression; arguably the most powerful form is neural network regression.
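To make the contrast concrete, here is a minimal sketch of the simplest form, linear regression, fit by ordinary least squares. The data values below are made-up illustrations (predicting income from years of education and age), not real figures, and the non-numeric predictors mentioned above, such as sex, would first need to be numerically encoded.

```python
import numpy as np

# Made-up training data: [years of education, age] -> annual income ($1000s).
X = np.array([[12.0, 25.0],
              [16.0, 35.0],
              [14.0, 45.0],
              [18.0, 40.0]])
y = np.array([40.0, 70.0, 55.0, 80.0])

# Prepend a column of 1s for the intercept, then solve least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef  # model predictions for the training inputs
print("coefficients (intercept, edu, age):", coef)
print("predictions:", pred)
```

Linear regression finds the best-fitting plane through the data, which works only when the relationship between predictors and target is roughly linear.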
In the MSDN article I use an artificial example where the goal is to predict the value of sin(x). It's a surprisingly difficult problem that stumps most forms of regression other than neural network regression.
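The idea can be sketched as follows. This is not the article's demo code, just a minimal illustration: a single-hidden-layer network with tanh activation, trained by full-batch gradient descent to approximate sin(x) over one period. The layer size, learning rate, and epoch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 80).reshape(-1, 1)  # inputs x
Y = np.sin(X)                                          # targets sin(x)

n_hidden = 12
W1 = rng.normal(0.0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.02

for epoch in range(30000):
    # forward pass: tanh hidden layer, linear (identity) output for regression
    H = np.tanh(X @ W1 + b1)
    out = H @ W2 + b2
    # backpropagate gradients of mean squared error
    g_out = 2.0 * (out - Y) / len(X)
    gW2 = H.T @ g_out; gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - H ** 2)   # tanh derivative
    gW1 = X.T @ g_h;  gb1 = g_h.sum(axis=0)
    # gradient descent update
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
print(f"final MSE = {mse:.5f}")
```

A plain linear model fit to the same data would be hopeless, since sin(x) averages out to roughly zero over a full period; the hidden layer is what lets the network bend the fit.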
Creating a neural network isn't too difficult, but training a neural network is difficult and is part art, part science. The payoff is that neural network regression can solve certain types of problems that most other forms of regression just can't solve. The underlying reason for this is explained by something called the Cybenko Theorem, also known as the universal approximation theorem.