Deep Neural Network Implementation

I’ve been looking closely at deep neural networks (DNNs). A regular feed-forward neural network (FFN) can be thought of as a complicated math function that accepts some numeric input values (such as a person’s age, or sex encoded as male = -1, female = +1) and spits out numeric values that represent the probabilities of each class (for example, the probabilities of the person being a political Democrat, a Republican, or Other).
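One common way to turn a network’s raw output values into class probabilities is the softmax function (the post doesn’t say which output activation is used, so this is an assumption based on the fact that probabilities must sum to 1.0). A minimal sketch:

```python
import math

def softmax(raw):
    # Convert arbitrary raw output scores into probabilities that sum to 1.0.
    m = max(raw)                       # subtract the max for numeric stability
    exps = [math.exp(v - m) for v in raw]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative raw scores (not from the actual prototype):
probs = softmax([1.5, 0.9, 0.4])
print(probs)  # three probabilities that sum to 1.0
```

The larger the raw score, the larger the resulting probability, and the three values always sum to 1.0 regardless of the input scores.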

An FFN has internal processing nodes in what’s called the hidden layer. The most basic form of a DNN has multiple hidden layers, which makes the neural net more powerful and able to handle harder problems. Coding an FFN is rather challenging. Coding a DNN is very challenging.

Whenever I start coding a complicated algorithm, I like to start with a concrete example and write code highly specific to that example. This preliminary prototype system gives me insights into what I need to do to write a general purpose version.

So, my first step in creating a DNN was to code a simple version. I wrote a tiny program with a DNN that has 2 input nodes, three hidden layers with 4, 2, and 2 processing nodes respectively, and 3 output nodes. Then I set up the internal weights and bias parameters of the DNN (which is a complicated topic).

When I fed input values of (1.0, 2.0) to the dummy DNN, the calculated output values were (0.3269, 0.3333, 0.3398), which I verified by computing the outputs by hand. At this point I was ready to create a second prototype DNN that could accept a variable number of input nodes, a variable number of hidden layers and hidden-layer nodes, and so on.
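The 2-(4,2,2)-3 forward pass can be sketched in a few dozen lines. This is only an illustration under stated assumptions: the hand-set weights from the actual prototype aren’t reproduced here (random small values stand in for them), and tanh hidden activations with a softmax output layer are assumed, so the output values below won’t match (0.3269, 0.3333, 0.3398).

```python
import math
import random

def make_layer(n_in, n_out, rng):
    # Random small weights and biases; the real prototype used
    # hand-set values, which are not reproduced here.
    w = [[rng.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_out)]
    b = [rng.uniform(-0.5, 0.5) for _ in range(n_out)]
    return w, b

def layer_forward(x, w, b, activation):
    # Weighted sum plus bias for each node, then the activation function.
    out = []
    for wi, bi in zip(w, b):
        s = sum(wij * xj for wij, xj in zip(wi, x)) + bi
        out.append(activation(s))
    return out

def softmax(raw):
    # Convert raw output-layer sums into probabilities that sum to 1.0.
    m = max(raw)
    exps = [math.exp(v - m) for v in raw]
    total = sum(exps)
    return [e / total for e in exps]

def dnn_forward(x, layers):
    # tanh on hidden layers (an assumption), softmax on the output layer.
    for w, b in layers[:-1]:
        x = layer_forward(x, w, b, math.tanh)
    w, b = layers[-1]
    raw = layer_forward(x, w, b, lambda s: s)  # identity before softmax
    return softmax(raw)

rng = random.Random(0)
sizes = [2, 4, 2, 2, 3]  # 2 inputs, hidden layers of 4, 2, 2, and 3 outputs
layers = [make_layer(sizes[i], sizes[i + 1], rng)
          for i in range(len(sizes) - 1)]
probs = dnn_forward([1.0, 2.0], layers)
print(probs)  # three output values that sum to 1.0
```

Generalizing this to a variable number of layers is then mostly a matter of changing the `sizes` list, which is exactly the kind of insight a concrete first prototype provides.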

My friends who are not software developers will often ask me what I do at work. It’s hard to explain what a software developer’s job is like, but a large part of it involves making software prototypes because in most situations it’s not possible to create a complicated software system correctly the first time.


This entry was posted in Machine Learning.

2 Responses to Deep Neural Network Implementation

  1. K.Y. Lin says:

    Come on, you can do it!! Waiting for your trainable-DNN source code.
