I wrote an article titled “Kernel Perceptrons using C#” in the April 2017 issue of MSDN Magazine. See https://msdn.microsoft.com/en-us/magazine/mt797653.
A kernel perceptron is a machine learning technique that can be used to make a binary prediction, that is, one where the thing-to-be-predicted can take on just one of two possible values. For example, you might want to predict whether a person is Male (-1) or Female (+1) based on Age, Income, and Education.
Ordinary perceptrons are really just a curiosity because they can only make predictions in situations where you have what's called linearly separable data (you can draw a straight line that separates the two classes). But by applying the "kernel trick" you can create perceptrons that can handle more complex data that is not linearly separable.
The kernel trick is based on a so-called kernel function. There are many such functions, but the most common is the radial basis function (RBF) kernel. RBF is a measure of similarity between two numeric vectors: RBF(v1, v2) = 1.0 indicates the two vectors are identical, and values closer to 0.0 indicate the vectors are increasingly different.
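The article's code is C#, but the RBF kernel itself is only a few lines in any language. Here is a minimal Python sketch, using the common form exp(-squared-distance / (2 * sigma^2)), where sigma is a free parameter you choose:

```python
import math

def rbf(v1, v2, sigma=1.0):
    # Squared Euclidean distance between the two vectors.
    dist_sq = sum((a - b) ** 2 for a, b in zip(v1, v2))
    # Identical vectors give exp(0) = 1.0; the value decays
    # toward 0.0 as the vectors move farther apart.
    return math.exp(-dist_sq / (2.0 * sigma * sigma))

print(rbf([1.0, 2.0], [1.0, 2.0]))  # identical vectors -> 1.0
print(rbf([0.0, 0.0], [3.0, 4.0]))  # distant vectors -> small value
```

Notice that sigma controls how quickly similarity falls off with distance; a smaller sigma makes the kernel more "local."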
Briefly, to make a binary prediction, a kernel perceptron computes the RBF similarity between the item to be predicted and every training item (data with known input values and known, correct classification values), forms a weighted sum of those similarity values, and uses the sign of that sum as the prediction.
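That weighted-sum idea can be sketched in just a few lines of Python (the article itself uses C#). Each training item gets a counter weight (often called alpha) that is incremented whenever the item is misclassified during training; the function and variable names here are my own illustration, not the article's:

```python
import math

def rbf(v1, v2, sigma=0.5):
    # RBF similarity: 1.0 for identical vectors, near 0.0 for distant ones.
    dist_sq = sum((a - b) ** 2 for a, b in zip(v1, v2))
    return math.exp(-dist_sq / (2.0 * sigma * sigma))

def decision(x, train_X, train_y, alphas, sigma=0.5):
    # Weighted sum of similarities to every training item;
    # each term is weight * label (+1/-1) * similarity.
    return sum(a * y * rbf(xi, x, sigma)
               for a, y, xi in zip(alphas, train_y, train_X))

def train(train_X, train_y, epochs=10, sigma=0.5):
    alphas = [0] * len(train_X)
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(train_X, train_y)):
            # If item i is currently misclassified, bump its weight.
            if yi * decision(xi, train_X, train_y, alphas, sigma) <= 0:
                alphas[i] += 1
    return alphas

# XOR-style data -- not linearly separable, so an ordinary
# perceptron fails, but the kernel perceptron handles it.
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
y = [-1, 1, 1, -1]
alphas = train(X, y)
preds = [1 if decision(xi, X, y, alphas) >= 0 else -1 for xi in X]
print(preds)  # matches y: [-1, 1, 1, -1]
```

The demo classifies the classic XOR pattern correctly, which is exactly the kind of non-linearly-separable data an ordinary perceptron cannot handle.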
Sadly, kernel perceptrons are rarely used in practice because there are more powerful techniques, notably binary neural network classifiers and kernel logistic regression. But kernel perceptrons are a good introduction to the math and ideas of kernel methods in general.