I’ve been brushing up on my Python programming skills. One thing I like to do with any language is implement a simple feed-forward neural network. The code to create a neural network exercises all the basic control structures and language features (if-then, for-loops, while-loops, string concatenation, matrices, arrays, and so on).
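As a rough sketch of the kind of forward-pass code involved (the function and parameter names here are my own choices for illustration, not from any particular library — plain Python lists and loops, in the spirit of exercising basic language features):

```python
import math

def forward(x, w_ih, w_ho, b_h, b_o):
    # hidden layer: weighted sum of inputs plus bias, tanh activation
    h = []
    for j in range(len(b_h)):
        s = b_h[j]
        for i in range(len(x)):
            s += x[i] * w_ih[i][j]
        h.append(math.tanh(s))
    # output layer: weighted sum of hidden values plus bias
    # (identity activation here, for a regression-style output)
    out = []
    for k in range(len(b_o)):
        s = b_o[k]
        for j in range(len(h)):
            s += h[j] * w_ho[j][k]
        out.append(s)
    return out
```

A real implementation would add an output activation (softmax for classification) and, of course, the training code.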
So, I tackled a neural network with a back-propagation plus momentum training algorithm. In addition to being a great way for me to get back in tune with Python, I now have an experimentation platform to investigate new algorithms related to neural networks. For example, both the Microsoft CNTK and Google TensorFlow code libraries have a relatively new (since about 2015) optimization algorithm called Adam (“Adaptive Moment Estimation”) that is very fast compared to basic stochastic gradient descent optimization.
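The momentum part of the training algorithm can be sketched as a single update step like the one below (a minimal illustration with made-up names, assuming weights and gradients are stored as flat lists):

```python
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    # velocity is an exponentially decaying accumulation of past updates;
    # it smooths the trajectory and speeds up travel along consistent directions
    new_v = [momentum * v - lr * g for v, g in zip(velocity, grad)]
    new_w = [wi + vi for wi, vi in zip(w, new_v)]
    return new_w, new_v
```

With momentum set to 0.0 this reduces to plain stochastic gradient descent.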
I found an excellent blog post by a student named Sebastian Ruder that gives the best explanation of the Adam algorithm I’ve seen, at http://sebastianruder.com/optimizing-gradient-descent/index.html#adam. But I still won’t be fully satisfied that I understand Adam until I implement it inside my Python neural network code.
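Based on my reading of the algorithm, an Adam update step might look something like this (a sketch with my own function and variable names, using the commonly cited default hyperparameters, not code from CNTK or TensorFlow):

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    # m: first moment, an exponential moving average of the gradients
    m = [beta1 * mi + (1 - beta1) * gi for mi, gi in zip(m, grad)]
    # v: second moment, an exponential moving average of squared gradients
    v = [beta2 * vi + (1 - beta2) * gi * gi for vi, gi in zip(v, grad)]
    # bias correction compensates for m and v starting at zero
    # (t is the 1-based step counter)
    m_hat = [mi / (1 - beta1 ** t) for mi in m]
    v_hat = [vi / (1 - beta2 ** t) for vi in v]
    # per-weight adaptive update
    w = [wi - lr * mh / (math.sqrt(vh) + eps)
         for wi, mh, vh in zip(w, m_hat, v_hat)]
    return w, m, v
```

The per-weight scaling by the square root of the second moment is what gives Adam its adaptive learning rates, which is where much of its speed advantage over basic SGD comes from.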
I’m fairly experienced with neural networks, but new algorithms continue to appear every few months. It’s a very exciting time to be involved with machine learning.