One rainy Sunday afternoon, I spent some time taking a close look at kernel logistic regression. By close look I mean reviewing the research papers on the topic and then coding up a demo. I don’t feel I truly understand a machine learning topic unless I can implement a demo.
Kernel logistic regression (KLR) is an ML classification technique that’s a bit difficult to explain, both what it is and how it works. Briefly, KLR creates a prediction model for situations where the thing to classify/predict can be one of two possible classes. For example, you could use KLR to predict the sex (male = 0, female = 1) of a person based on their height, weight, and annual income.
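For context, the prediction side of ordinary logistic regression can be sketched in just a few lines: a linear combination of the raw features, squashed through the logistic sigmoid. The weights and bias here would come from training; all names and values are illustrative, not from any particular library:

```python
import math

def logreg_predict(x, weights, bias):
    # Ordinary logistic regression: a linear combination of the raw
    # input features, squashed by the sigmoid into (0, 1).
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))  # probability of class 1
```

A return value greater than 0.5 would be interpreted as class 1 (female in the example above), otherwise class 0.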
KLR is a variation of regular logistic regression. Regular (as in “ordinary”) logistic regression only works in situations where the data to classify/predict is simple in the sense that it’s what is called linearly separable. KLR can handle more complicated data that’s not linearly separable.
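To give a rough idea of how KLR differs, here is a minimal sketch of KLR prediction, assuming a radial basis function (RBF) kernel. Instead of one weight per feature, there is one coefficient (alpha) per training item, and prediction sums kernel similarities between the input and every training item. The function names, the gamma value, and the use of an RBF kernel are my illustrative assumptions, not details from a specific paper:

```python
import math

def rbf_kernel(x1, x2, gamma=1.0):
    # RBF kernel: similarity decays with squared distance;
    # gamma is an illustrative tuning parameter.
    dist_sq = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-gamma * dist_sq)

def klr_predict(x, train_X, alphas, bias, gamma=1.0):
    # KLR prediction: a weighted sum of kernel similarities to each
    # training item, passed through the logistic sigmoid.
    z = bias
    for xi, ai in zip(train_X, alphas):
        z += ai * rbf_kernel(x, xi, gamma)
    return 1.0 / (1.0 + math.exp(-z))  # probability of class 1
```

Notice that the training data itself is part of the model, which is one reason training (finding the alpha values and bias) is trickier than for ordinary logistic regression.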
What was really interesting to me is that I found quite a few research papers on KLR but virtually no information about how to implement a KLR model. When I dove into KLR, I discovered there were many tricky implementation details, which leads me to suspect that very few people have actually implemented a KLR model.
Anyway, my KLR demo was a very interesting challenge. Someday when I get a chance, I’ll write up and publish a complete explanation of how to actually create a KLR prediction model.