I Give a Talk About Anomaly Detection Using a Neural Autoencoder with PyTorch

Anomaly detection is a very difficult problem. I've been experimenting with a technique about which I couldn't find any research or practical information. Briefly, to find anomalous data, create a neural autoencoder, then analyze each data item for reconstruction error: the items with the highest error are (maybe) the most anomalous.

I normally wouldn't give a talk on a topic where I don't fully understand all the details. But I'm working with a team at my large tech company, and if my autoencoder reconstruction idea is valid, the technique will be extremely valuable to them.

As always, when I presented the details, the attendees asked great questions that forced me to think very deeply. (The people at my company are, for the most part, very smart.) This details-are-important quality is characteristic of the machine learning research I do.

Here's one of at least a dozen examples (which will only make sense if you understand neural autoencoders). The dataset was the MNIST image dataset, so each data item had 784 input values, where each value is a pixel value between 0 and 255, normalized to between 0.0 and 1.0. My demo autoencoder had a 784-100-50-100-784 architecture. The hidden layers used tanh activation, and I applied tanh activation to the output layer too.
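To make the architecture concrete, here is a minimal PyTorch sketch of a 784-100-50-100-784 autoencoder with tanh activation on the hidden layers and on the output layer. The class name, the layer names, and the "import torch as T" alias are my choices for illustration, not details from the actual demo program.

import torch as T

class Autoencoder(T.nn.Module):
  def __init__(self):
    super().__init__()
    self.enc1 = T.nn.Linear(784, 100)
    self.enc2 = T.nn.Linear(100, 50)
    self.dec1 = T.nn.Linear(50, 100)
    self.dec2 = T.nn.Linear(100, 784)

  def forward(self, x):
    z = T.tanh(self.enc1(x))  # 784 -> 100
    z = T.tanh(self.enc2(z))  # 100 -> 50 latent representation
    z = T.tanh(self.dec1(z))  # 50 -> 100
    z = T.tanh(self.dec2(z))  # 100 -> 784, tanh on the output layer too
    return z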

But the question is, why not sigmoid activation, or ReLU activation, or even no/identity activation on the output layer? The logic is that because the input values are between 0.0 and 1.0, and an autoencoder predicts its inputs, you surely want the output values to be confined to 0.0 to 1.0, which is exactly what sigmoid activation gives you. So why did I use tanh output activation?

Well, the answer is long, so I won't attempt it here. My real point is that this was just one of many details of the autoencoder reconstruction error technique for anomaly detection. And on top of all the conceptual ideas, I used the PyTorch neural network library, so there were many language and engineering issues to consider too.
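And to make the overall technique concrete, here is a minimal sketch of the scoring step, assuming the Autoencoder class from the sketch above and random placeholder data standing in for the normalized MNIST pixels. After training, each item's reconstruction error is computed, and the items with the largest error are flagged as (maybe) anomalous.

import torch as T

def recon_errors(model, data):
  # per-item squared reconstruction error
  model.eval()
  with T.no_grad():
    recon = model(data)
    return T.sum((data - recon) ** 2, dim=1)

data = T.rand(1000, 784)   # placeholder for 1,000 normalized MNIST items
model = Autoencoder()      # untrained here; in practice, train it first
errs = recon_errors(model, data)

# the items with the highest error are (maybe) the most anomalous
(top_errs, top_idxs) = T.topk(errs, k=10)
print(top_idxs)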

Anyway, I thought I did a good job on my talk, and I got as much value from delivering it as the attendees who listened to it.



Artist Mort Kunstler (b. 1931) created many memorable paintings that were used for the covers of men's adventure magazines in the 1960s. I'm not really sure whether the works are supposed to be satire or not. Kunstler's paintings have an extremely high level of detail.
