An ordinary perceptron is a very simple machine learning (ML) model that can make binary predictions (one of two possible outcomes) for situations where your data is very simple — "linearly separable," meaning the two classes can be split by a straight line.
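To make that concrete, here is a minimal sketch of an ordinary perceptron in Python. The tiny dataset and learning rate are made up for illustration:

```python
def train_perceptron(data, labels, lr=0.1, max_epochs=50):
    # Start with zero weights and bias.
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, y in zip(data, labels):
            s = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if s >= 0 else -1
            if pred != y:
                # Misclassified: nudge the weights toward the correct side.
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                errors += 1
        if errors == 0:  # perfect pass through the data, so stop
            break
    return w, b

# Hypothetical linearly separable data: two classes, labeled -1 and +1.
data = [(0.0, 0.0), (0.2, 0.3), (1.0, 1.0), (0.9, 0.8)]
labels = [-1, -1, 1, 1]
w, b = train_perceptron(data, labels)
```

Because the data is linearly separable, the loop is guaranteed to converge to weights that classify every training item correctly.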
A kernel perceptron is a variation of an ordinary perceptron that can handle more complicated, non-linearly separable data. I did a quick search of the Internet and found some good information about the theory of kernel perceptrons, but I couldn’t find any actual code implementations. So, just for fun one weekend, I coded up a kernel perceptron demo.
First, I made a dummy set of 20 data items that were not linearly separable. Then I used the algorithm in the very nice Wikipedia article on kernel perceptrons to create a prediction model.
The theory of kernel perceptrons is very deep but the basic algorithm is surprisingly simple. Even so, it took me quite a bit longer than I expected to code the demo program. In ML, conceptually simple tasks often have tricky implementations.
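To show just how compact the basic algorithm is, here is a minimal kernel perceptron sketch in Python. This is not my demo code — the XOR-style data, the RBF kernel, and the gamma value are assumptions for illustration — but the training loop follows the misclassification-counting scheme described in the Wikipedia article: each training item gets a counter (alpha) that is incremented whenever the item is misclassified, and predictions are kernel-weighted sums over those counters.

```python
import math

def rbf_kernel(x1, x2, gamma=1.0):
    # RBF (Gaussian) kernel: exp(-gamma * ||x1 - x2||^2).
    # The kernel lets the perceptron learn a curved decision boundary.
    d2 = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-gamma * d2)

def predict(x, data, labels, alphas, gamma=1.0):
    # Prediction is the sign of a kernel-weighted sum over training items.
    s = sum(alphas[j] * labels[j] * rbf_kernel(data[j], x, gamma)
            for j in range(len(data)))
    return 1 if s >= 0 else -1

def train(data, labels, max_epochs=100, gamma=1.0):
    # alphas[i] counts how many times training item i was misclassified.
    alphas = [0] * len(data)
    for _ in range(max_epochs):
        errors = 0
        for i in range(len(data)):
            if predict(data[i], data, labels, alphas, gamma) != labels[i]:
                alphas[i] += 1
                errors += 1
        if errors == 0:  # all items classified correctly, so stop
            break
    return alphas

# Hypothetical XOR-style data: not linearly separable, labeled -1 and +1.
data = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
labels = [-1, -1, 1, 1]
alphas = train(data, labels, gamma=2.0)
```

An ordinary perceptron can never get this XOR-style data right, but the kernel version converges in a few epochs.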