I hadn’t used CNTK for a few weeks, so I figured I’d implement an autoencoder just to keep my CNTK skills fresh. CNTK is a deep neural network code library from Microsoft.
I used the UCI Digits Dataset, which has 1,797 data items. Each item has 64 numeric values, which represent the grayscale pixel values of a crude 8×8 handwritten digit (‘0’ through ‘9’). The goal of an autoencoder is to condense the 64 values of each data item down to just two values, so that each item can be displayed as an x-y point on a two-dimensional graph.
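To make the data format concrete, here's a small NumPy sketch that treats one 64-value item as the 8×8 image it represents and scales the pixel values (0 through 16 in the UCI digits) to [0, 1] for training. The item here is a random stand-in, not real dataset data:

```python
import numpy as np

# one fake data item: 64 grayscale values in [0, 16], as in the UCI digits
rng = np.random.default_rng(0)
item = rng.integers(0, 17, size=64)

# view the flat 64-value vector as the 8x8 image it encodes
image = item.reshape(8, 8)

# autoencoders generally train better on normalized inputs; scale to [0, 1]
scaled = item / 16.0
print(image.shape, float(scaled.min()), float(scaled.max()))
```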
As I was writing the CNTK autoencoder program, my immediate impression was that CNTK has a much different feel to it than the two other libraries I use, TensorFlow/Keras and PyTorch. The general steps for all libraries are the same:
0. get started
1. read data into memory
2. define the NN model
3. train the model
4. save the model
5. use the model
But the implementation details are quite different for each library. The key code for my CNTK autoencoder is:
import numpy as np
import cntk as C

# 2. define autoencoder
print("Creating a 64-32-2-32-64 autoencoder ")
my_init = C.initializer.glorot_uniform(seed=1)
X = C.ops.input_variable(64, np.float32)  # inputs
layer1 = C.layers.Dense(32, init=my_init, activation=C.ops.sigmoid)(X)
layer2 = C.layers.Dense(2, init=my_init, activation=C.ops.sigmoid)(layer1)
layer3 = C.layers.Dense(32, init=my_init, activation=C.ops.sigmoid)(layer2)
layer4 = C.layers.Dense(64, init=my_init, activation=C.ops.sigmoid)(layer3)
enc_dec = C.ops.alias(layer4)
encoder = C.ops.alias(layer2)
Y = C.ops.input_variable(64, np.float32)  # targets
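The snippet stops before training and before projecting items to two dimensions. As a library-free illustration of the same 64-32-2-32-64 idea, here is a NumPy sketch of one forward pass through four dense-sigmoid layers. The weights are random stand-ins (CNTK's glorot_uniform initializer would set them, and training would adjust them); the point is only to show that the middle layer yields a 2-value code while the last layer yields a 64-value reconstruction:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)

# layer sizes from the post: 64-32-2-32-64
sizes = [64, 32, 2, 32, 64]
# random stand-in weights and biases, for illustration only
Ws = [rng.normal(0, 0.1, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(x):
    """Run x through all four dense-sigmoid layers; return the
    2-value code from the bottleneck and the 64-value reconstruction."""
    act = x
    code = None
    for i, (W, b) in enumerate(zip(Ws, bs)):
        act = sigmoid(act @ W + b)
        if i == 1:          # output of the 2-node bottleneck layer
            code = act
    return code, act

item = rng.random(64)        # one fake 8x8 digit, pixels scaled to [0, 1]
code, recon = forward(item)
print(code.shape, recon.shape)   # (2,) (64,)
```

The 2-value `code` is what gets plotted as an x-y point; the 64-value `recon` is what the squared-error training loss compares against the original input.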
Well, after about an hour of work I got an autoencoder up and running. The moral of the story is that I wish there were a single, dominant neural network library so I could focus my attention on just one, but for now I've got to keep practicing with all of the popular libraries.