Anomaly Detection Using Simplistic VAE Reconstruction Error

A standard technique for anomaly detection (well, since about 2017) is to feed source data (often log files) to a deep neural autoencoder (AE) and create a model. Then you feed each data item to the trained model and compare the computed output with the input to calculate reconstruction error. Data items with large reconstruction error are anomalous in some way.
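For readers who haven't seen the AE approach, here is a minimal sketch of the scoring idea in PyTorch. The architecture (layer sizes, tanh and sigmoid activations) and the recon_error() helper are my illustrative assumptions, not any particular published design:

import torch as T

class AE(T.nn.Module):
    # a hypothetical autoencoder sketch; all sizes are illustrative
    def __init__(self, n_in, n_hid, n_lat):
        super().__init__()
        self.enc = T.nn.Sequential(
            T.nn.Linear(n_in, n_hid), T.nn.Tanh(),
            T.nn.Linear(n_hid, n_lat), T.nn.Tanh())
        self.dec = T.nn.Sequential(
            T.nn.Linear(n_lat, n_hid), T.nn.Tanh(),
            T.nn.Linear(n_hid, n_in), T.nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))  # reconstruct input from latent code

def recon_error(model, x):
    # items with large reconstruction error are flagged as anomalous
    with T.no_grad():
        y = model(x)
    return T.mean((x - y) ** 2).item()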

There has been a lot of recent research into the idea of anomaly detection using a variational autoencoder (VAE). This idea is relatively new and mostly unexplored. A VAE is conceptually more complicated than an AE. Internally, a VAE computes two forms of error: typically cross entropy error and Kullback-Leibler divergence, which is a complex topic. Most of the new research in anomaly detection using a VAE looks at using these internal forms of error. I started exploring these ideas and quickly realized that it's a huge topic, so I needed to start simply and proceed in a logical way.

As an initial exploration, I decided to use a VAE for anomaly detection in the simplest possible way, which is to ignore the complex inner workings of a VAE and its ability to generate synthetic data. Instead, the simplest idea is to just feed the VAE real data items (instead of noise as is used when generating synthetic data), compute an output, and compare the computed output with the input. Put another way, the idea is to use a VAE exactly as if it were an AE.
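Here is a minimal sketch of that idea, assuming a small PyTorch VAE sized for the nine-dimensional normalized Employee vectors described below; the 9-6-2-6-9 architecture and activations are illustrative assumptions:

import torch as T

class VAE(T.nn.Module):
    # a hypothetical 9-6-2-6-9 VAE sketch for illustration
    def __init__(self):
        super().__init__()
        self.enc = T.nn.Linear(9, 6)
        self.mu = T.nn.Linear(6, 2)       # latent means
        self.logvar = T.nn.Linear(6, 2)   # latent log-variances
        self.dec1 = T.nn.Linear(2, 6)
        self.dec2 = T.nn.Linear(6, 9)

    def forward(self, x):
        h = T.tanh(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        std = T.exp(0.5 * logvar)
        z = mu + std * T.randn_like(std)  # reparameterization trick
        h = T.tanh(self.dec1(z))
        return T.sigmoid(self.dec2(h)), mu, logvar

# used exactly as if it were an AE: feed a real data item and
# compare the reconstruction with the input
model = VAE()
x = T.rand(9)                    # stand-in for a normalized data item
y, mu, logvar = model(x)
err = T.mean((x - y) ** 2).item()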

So, I put together a demo program. I used a dummy dataset of 240 items where each item is an Employee. The raw data looks like:

M 19 concord 32700.00 mgmt
F 22 boulder 27700.00 supp
M 39 anaheim 47100.00 tech
. . .

The normalized data looks like:

0  0.19  0 0 1  0.3270  1 0 0
1  0.22  0 1 0  0.2770  0 1 0
0  0.39  1 0 0  0.4710  0 0 1
. . .
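From the two displays, the encoding scheme appears to be: sex encoded as M = 0, F = 1; age divided by 100; city one-hot encoded in alphabetical order (anaheim, boulder, concord); income divided by 100,000; job type one-hot encoded (mgmt, supp, tech). A sketch of that normalization, under those inferred assumptions:

CITIES = ["anaheim", "boulder", "concord"]  # one-hot positions (inferred)
JOBS = ["mgmt", "supp", "tech"]

def normalize(sex, age, city, income, job):
    vec = [0.0 if sex == "M" else 1.0]             # sex: M = 0, F = 1
    vec.append(age / 100.0)                        # age / 100
    vec += [1.0 if c == city else 0.0 for c in CITIES]
    vec.append(income / 100_000.0)                 # income / 100,000
    vec += [1.0 if j == job else 0.0 for j in JOBS]
    return vec

print(normalize("M", 19, "concord", 32700.00, "mgmt"))
# [0.0, 0.19, 0.0, 0.0, 1.0, 0.327, 1.0, 0.0, 0.0]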

The simplistic anomaly detection technique using VAE reconstruction error worked as expected. In the demo, I created a VAE model using the 240 dummy Employee items. Then I set up one of the Employee items:

M  39  concord  $51,200  supp

normalized to:

0, 0.39, 0, 0, 1, 0.512, 0, 1, 0

and fed it to the VAE. The computed output (not shown in the screenshot) was:

0.48, 0.43, 0.35, 0.30, 0.31, 0.52, 0.29, 0.44, 0.27

The reconstruction error is:

err = [(0 - 0.48)^2 + (0.39 - 0.43)^2 + . . . + (0 - 0.27)^2] / 9
    = 0.1521
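This is easy to verify with a few lines of Python. Note that recomputing from the two-decimal output values shown gives roughly 0.155 rather than 0.1521; the demo presumably computed the error from the unrounded model outputs.

import numpy as np

x = np.array([0, 0.39, 0, 0, 1, 0.512, 0, 1, 0])   # normalized input
y = np.array([0.48, 0.43, 0.35, 0.30, 0.31,
              0.52, 0.29, 0.44, 0.27])              # VAE output (rounded)

err = np.mean((x - y) ** 2)   # mean squared reconstruction error
print(err)                    # ~0.1546 from the rounded values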

My next steps will be to explore anomaly detection using the complex internal error representations of a VAE. It's a big topic, but an interesting one. It would be possible to, quite literally, devote months or even years to exploring anomaly detection using deep neural architectures such as AEs, VAEs, and Transformers. Good fun.



The Coyote had many complex plans to catch the Roadrunner. None of them worked, but at least they all failed in entertaining ways. Part of the fun was viewing the Coyote’s setup and then anticipating what was going to go wrong. Rocket. Catapult. Spring.
