Deep Neural Network IO using C#

I wrote an article titled “Deep Neural Network IO using C#” in the August 2017 issue of MSDN Magazine. See https://msdn.microsoft.com/en-us/magazine/mt493293.

The term deep neural network (DNN) can have several different meanings. A basic DNN is just a regular feed-forward neural network (FNN) but with two or more hidden layers. The additional hidden layers give the DNN more power (in terms of predictive capability) at the expense of increased complexity. Other forms of DNNs such as convolutional neural networks and recurrent neural networks have more complicated architectures.
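To make the node-and-weight bookkeeping concrete, here is a minimal sketch (my own illustration, not code from the article) that counts the trainable values in a fully connected DNN. For example, a hypothetical 4-(5,5)-3 network, four inputs, two hidden layers of five nodes each, and three outputs, has (4*5 + 5*5 + 5*3) = 60 weights and (5 + 5 + 3) = 13 biases, 73 trainable values in all.

```csharp
using System;

// Hypothetical illustration: counting the trainable values in a fully
// connected DNN. sizes[0] is the number of inputs; the remaining entries
// are the hidden layer sizes and the output layer size.
class ParamCountDemo
{
  static int NumParams(int[] sizes)
  {
    int n = 0;
    for (int L = 0; L < sizes.Length - 1; ++L)
      n += sizes[L] * sizes[L + 1] + sizes[L + 1]; // weights + biases feeding layer L+1
    return n;
  }

  static void Main()
  {
    Console.WriteLine(NumParams(new[] { 4, 5, 5, 3 })); // 73
  }
}
```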

Coding a basic FNN is moderately challenging. Coding a basic DNN is very challenging. In my article I describe one way to set up a basic DNN using the C# language, and show how to implement the input-output process. As much as any code I've ever worked on, a basic DNN can be implemented with many different data structure designs and architectures.
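As one possible design (a hedged sketch, not the article's exact code), the input-output mechanism can be implemented with a ragged three-dimensional weights array, tanh activation on the hidden layers, and softmax on the output layer. The class name DeepNet and the small-random initialization scheme here are my own assumptions for illustration.

```csharp
using System;

// A minimal sketch of feed-forward input-output for a DNN with an arbitrary
// number of hidden layers: tanh hidden activation, softmax output activation.
public class DeepNet
{
  private readonly int[] sizes;      // sizes[0] = inputs, last entry = outputs
  private readonly double[][] nodes; // node values, one array per layer
  private readonly double[][][] wts; // wts[L][j][k]: layer L node k -> layer L+1 node j
  private readonly double[][] biases;

  public DeepNet(int[] sizes, int seed = 0)
  {
    this.sizes = sizes;
    var rnd = new Random(seed);
    nodes = new double[sizes.Length][];
    for (int L = 0; L < sizes.Length; ++L)
      nodes[L] = new double[sizes[L]];
    wts = new double[sizes.Length - 1][][];
    biases = new double[sizes.Length - 1][];
    for (int L = 0; L < sizes.Length - 1; ++L)
    {
      wts[L] = new double[sizes[L + 1]][];
      biases[L] = new double[sizes[L + 1]];
      for (int j = 0; j < sizes[L + 1]; ++j)
      {
        wts[L][j] = new double[sizes[L]];
        for (int k = 0; k < sizes[L]; ++k)
          wts[L][j][k] = 0.02 * rnd.NextDouble() - 0.01; // small random init
        biases[L][j] = 0.02 * rnd.NextDouble() - 0.01;
      }
    }
  }

  public double[] ComputeOutputs(double[] xValues)
  {
    Array.Copy(xValues, nodes[0], xValues.Length);
    for (int L = 0; L < sizes.Length - 1; ++L)
    {
      bool isOutput = (L == sizes.Length - 2);
      for (int j = 0; j < sizes[L + 1]; ++j)
      {
        double sum = biases[L][j];
        for (int k = 0; k < sizes[L]; ++k)
          sum += wts[L][j][k] * nodes[L][k];
        nodes[L + 1][j] = isOutput ? sum : Math.Tanh(sum); // softmax applied below
      }
      if (isOutput) Softmax(nodes[L + 1]);
    }
    return (double[])nodes[sizes.Length - 1].Clone();
  }

  private static void Softmax(double[] v) // in place, numerically stable
  {
    double max = v[0];
    for (int i = 1; i < v.Length; ++i) if (v[i] > max) max = v[i];
    double sum = 0.0;
    for (int i = 0; i < v.Length; ++i) { v[i] = Math.Exp(v[i] - max); sum += v[i]; }
    for (int i = 0; i < v.Length; ++i) v[i] /= sum;
  }

  static void Main()
  {
    var dnn = new DeepNet(new[] { 4, 5, 5, 3 });
    double[] probs = dnn.ComputeOutputs(new[] { 1.0, 2.0, 3.0, 4.0 });
    Console.WriteLine(string.Join(" ", probs)); // three values summing to 1.0
  }
}
```

The wts[layer][to][from] ragged-array layout keeps the feed-forward loop simple; it is just one of the many possible data structure designs alluded to above.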

There are a couple of reasons why you might want to implement a basic DNN from scratch. First, by doing so you gain a solid understanding of what goes on behind the scenes when you use a sophisticated code library such as Microsoft CNTK or Google TensorFlow. Second, in a research environment, coding a DNN from scratch allows you to explore things such as custom training algorithms and custom architectures.

I plan to write a follow-up article where I show how to train a basic DNN using the back-propagation algorithm. Quite complicated and challenging stuff, but very interesting.


One Response to Deep Neural Network IO using C#

  1. PGT-ART says:

Awesome, I really like that you will also dive into 'general' DNN back-propagation over multiple hidden layers, and take CNNs as a separate subject. CNNs are mostly used for bitmap-classifying networks, and there's more than just images for deep neural network tasks.
    I'm really looking forward to understanding back-propagation over multiple hidden layers in the next article.

Perhaps there is even more than back-propagation; in the past you wrote about some other ways to find ideal weights (flocking and genetic algorithms). You might perhaps use ReLU now; I think it wasn't well known in that time period. But I've not yet seen how the weight-finding adjustment process works for deep networks.
