Introduction to Keras with TensorFlow

I wrote an article titled “Introduction to Keras with TensorFlow” in the May 2018 issue of Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2018/05/01/inroduction-to-keras.aspx.

It’s possible to create neural networks from raw code. But there are many code libraries you can use to speed up the process. These libraries include Microsoft CNTK, Google TensorFlow, Theano, PyTorch, scikit-learn and Caffe. Most neural network libraries are written in C++ for performance but have a Python API for convenience.
In my article I demonstrated how to get started with the popular Keras library. Keras is a bit unusual because it’s a high-level wrapper over TensorFlow. The idea is that TensorFlow works at a relatively low level and coding directly with TensorFlow is very challenging. With Keras, you write simpler Python code, and that Keras code calls into the TensorFlow library, which does all the work.
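
One quick way to see that relationship, assuming the standalone Keras 2.x package running on a TensorFlow backend (this snippet is my illustration, not code from the article): Keras exposes the backend module it delegates tensor operations to, so you can ask it directly which library is doing the underlying work.

# Quick check that Keras is routing its work through TensorFlow.
# Keras sends all low-level tensor operations to a configurable backend.
import keras
from keras import backend as K

print(keras.__version__)   # the installed Keras version
print(K.backend())         # prints 'tensorflow' when TF is the backend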

I did the standard Iris dataset example where the goal is to predict species (“setosa” or “versicolor” or “virginica”) from four predictors: petal length and width, and sepal length and width (a sepal is a leaf-like structure).
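
The article walks through a complete program; the condensed sketch below is just my illustration of the same idea, using scikit-learn to load the Iris data and one-hot encode the species labels (the layer sizes, epochs, and other hyperparameters here are placeholders, not the article's exact values).

import numpy as np
from sklearn.datasets import load_iris
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical

iris = load_iris()
X = iris.data                      # 150 x 4 predictor values
y = to_categorical(iris.target)    # species as one-hot vectors, 150 x 3

# 4 inputs -> 5 hidden nodes -> 3 output probabilities
model = Sequential()
model.add(Dense(5, input_dim=4, activation='tanh'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

model.fit(X, y, epochs=100, batch_size=8, verbose=0)

# predict the species of a previously unseen flower
unknown = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)
probs = model.predict(unknown)
print(iris.target_names[np.argmax(probs)])   # e.g. 'setosa'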

I like Keras a lot, but it does have disadvantages. For me, Keras is easy to use but is relatively hard to customize — a classic code library tradeoff.

I slightly prefer the Microsoft CNTK library to Keras, mostly for technical reasons. But the use of Keras seems to be increasing much faster than the use of CNTK. I have no solid numbers; I base that opinion on subjective signals such as the number of posts on Stack Overflow, so I could well be wrong.

If you’re a software developer, you might want to consider taking Keras out for a test drive.



The word “keras” means “horn” in Greek. Trust me, you will get some strange results if you do an Internet image search for “horn”. This horn hairstyle is striking but doesn’t look practical for daily use.
