I wrote an article titled “Neural Network Activation Functions in C#” that describes how to implement the four most common activation functions used in artificial neural networks. The article appears in the June 2013 issue of Visual Studio Magazine. See http://visualstudiomagazine.com/articles/2013/06/01/neural-network-activation-functions.aspx.
Although there are several good standalone tools that can perform neural network analysis (such as Weka), integrating such tools into a software system can be difficult, customizing them to meet a specific scenario may be impossible, and there may be hidden copyright issues. Furthermore, existing neural network tools and API sets tend to be designed to be very general in nature, which makes them overly complicated for performing one specific task. Therefore, in many situations, I write neural network code from scratch.
When I was first working with neural network code, the effort required was rather high. But now, my development is much, much quicker if I code from scratch than if I had to first learn, and then use, someone else's neural network code base. This is the classic software development do-it-from-scratch vs. use-existing-code trade-off.
Anyway, neural network activation functions aren’t difficult to understand or to implement. The tricky part is knowing when to use a particular activation function, which really requires that you understand the relationships between data (numeric, binary, categorical), training method (back-propagation, particle swarm, etc.), error (mean squared error, cross-entropy), and other details such as whether or not you’re using weight decay.
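To make the idea concrete, here is a minimal sketch of the kinds of activation functions such an article typically covers: log-sigmoid, hyperbolic tangent, Heaviside step, and softmax. The exact set of functions, names, and signatures here are my assumptions for illustration, not code taken from the article.

```csharp
using System;
using System.Linq;

// Sketch of four common neural network activation functions.
// (Assumed names and signatures, for illustration only.)
static class Activation
{
    // Logistic sigmoid: squashes any real input into (0, 1).
    public static double LogSigmoid(double x)
    {
        return 1.0 / (1.0 + Math.Exp(-x));
    }

    // Hyperbolic tangent: squashes input into (-1, 1).
    public static double HyperTan(double x)
    {
        return Math.Tanh(x);
    }

    // Heaviside step: hard 0/1 threshold, as in classic perceptrons.
    public static double Step(double x)
    {
        return x < 0.0 ? 0.0 : 1.0;
    }

    // Softmax: converts a vector of raw output values into
    // probabilities that sum to 1. Subtracting the max value
    // before exponentiating guards against arithmetic overflow.
    public static double[] Softmax(double[] x)
    {
        double max = x.Max();
        double[] exps = x.Select(v => Math.Exp(v - max)).ToArray();
        double sum = exps.Sum();
        return exps.Select(v => v / sum).ToArray();
    }
}
```

A typical rule of thumb that follows from the relationships above: softmax pairs naturally with categorical output data and cross-entropy error, while tanh is a common choice for hidden-layer nodes.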