I wrote an article titled “Logistic Regression using R” in the December 2016 issue of Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2016/12/01/logistic-regression-using-r.aspx.
I consider logistic regression to be more-or-less the “Hello World” basic example of machine learning. Logistic regression is used in situations where you want to predict an outcome that can be just one of two possible values. This is called binary classification. For example, you might want to predict the political leaning (conservative or liberal) of a person based on their age, annual income, and years of education.
The R language doesn’t have a dedicated built-in function named something like log.regression() as you might expect. Instead, R has a general purpose function named glm(), which stands for “generalized linear model”. Calling glm() with the argument family = binomial performs logistic regression.
In my article, I walk through an example of using glm() to perform logistic regression. I focus on the how-to rather than the fascinating but rather complicated theory of logistic regression.
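To give the flavor of the technique, here is a minimal sketch of logistic regression in R with glm(). The data frame and column names below are invented for illustration, echoing the political-leaning example above; they are not the data used in the article.

```r
# Synthetic demo data: predict political leaning (1 = conservative,
# 0 = liberal) from age, annual income, and years of education.
# All values here are fabricated for illustration only.
set.seed(1)
n <- 100
df <- data.frame(
  age = rnorm(n, mean = 50, sd = 12),
  income = rnorm(n, mean = 60, sd = 15),     # annual income in $1000s
  education = rnorm(n, mean = 14, sd = 3)    # years of education
)
# Fabricate a binary outcome from a made-up underlying relationship
z <- -8 + 0.10 * df$age + 0.05 * df$income
df$conservative <- rbinom(n, 1, 1 / (1 + exp(-z)))

# family = binomial is what makes glm() do logistic regression
model <- glm(conservative ~ age + income + education,
             data = df, family = binomial)
summary(model)

# Predicted probabilities, thresholded at 0.5 for a class prediction
p <- predict(model, type = "response")
pred_class <- ifelse(p > 0.5, 1, 0)
```

With type = "response", predict() returns probabilities between 0 and 1; thresholding at 0.5 is the usual way to turn them into a binary class prediction.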
The major disadvantage of logistic regression compared to other classification techniques is that logistic regression only works well on relatively simple problems where the data to be classified is what is called “linearly separable”. Even so, logistic regression can be a useful tool in many scenarios.