When I was first studying machine learning, I sometimes wondered about the relationship between logistic regression and neural networks. When I did an Internet search on the topic recently, I saw all kinds of rather confusing information.
Logistic regression is an ML technique used to predict a binary outcome, where there are exactly two possibilities. For example, you might want to predict the sex of a person (0 = male, 1 = female) based on three predictor variables such as age, height, and annual income.
In my mind, a good way to compare logistic regression to a neural network is to understand that you can simulate logistic regression with a neural network that has one hidden layer containing a single hidden node with the identity activation function, and a single output node with the logistic sigmoid activation function.
I created a diagram with a concrete example. For both the logistic regression model and the equivalent neural network, the final computed output is p = 0.5474, which corresponds to a prediction of class = 1 because the p (probability) is greater than 0.50.
The diagram tells the story. Notice that the NN hidden node has a bias value that corresponds to the bias in LR. The NN output node has a bias of 0. The single hidden-to-output weight has a constant value of 1.
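The setup in the diagram can be sketched in a few lines of code. The weights, bias, and input values below are hypothetical placeholders (not the actual values from the diagram), but the two computations agree exactly by construction:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical values -- three predictors (e.g., normalized age,
# height, income) with made-up weights and bias.
x = [0.5, 1.2, -0.3]
weights = [0.4, -0.2, 0.6]
bias = 0.15

# Plain logistic regression: p = sigmoid(w . x + b)
z = sum(w * xi for w, xi in zip(weights, x)) + bias
lr_p = sigmoid(z)

# Equivalent neural network:
#   hidden node: identity activation, same weights and bias as the LR model
#   output node: sigmoid activation, bias = 0, hidden-to-output weight = 1
hidden = sum(w * xi for w, xi in zip(weights, x)) + bias  # identity activation
nn_p = sigmoid(1.0 * hidden + 0.0)

print(lr_p, nn_p)  # identical values
```

Because the identity activation passes the weighted sum through unchanged, and the output node just applies sigmoid to it (weight 1, bias 0), the network computes exactly sigmoid(w . x + b) -- which is logistic regression.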
Anyway, there’s no real moral to this story. But understanding the relationship between logistic regression and an equivalent neural network is interesting (well, to me anyway).