Among software developers, there is a tremendous thirst for knowledge about neural networks, especially deep neural networks. To fully understand deep neural networks, you really have to understand simple NNs that have a single hidden layer.
Even simple feed-forward neural networks (FNNs) can be surprisingly complicated once you include training, encoding, activation functions, and so on. So the first step toward understanding FNNs is to completely understand the neural network feed-forward mechanism.
I gave a talk on that topic at Microsoft. In my opinion, you can only fully understand FNNs if you can implement them in code, from scratch. So I provided the attendees with a raw Python (with NumPy) demo program. After I finished explaining exactly how the NN input-output mechanism works, I gave the attendees some ideas about how they could modify the demo code. I reminded the audience that this is how developers learn — you get a demo program to work, then you modify the program to see what happens.
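The demo program from the talk isn't reproduced here, but a minimal sketch of that kind of from-scratch input-output computation might look like the following. The network shape (3-4-2), the tanh hidden activation, and the softmax output are assumptions for illustration, not necessarily what the talk used.

```python
import numpy as np

def feed_forward(x, w_ih, b_h, w_ho, b_o):
    """Input-output mechanism for a single-hidden-layer NN:
    tanh hidden activation, softmax output (assumed choices)."""
    h = np.tanh(x @ w_ih + b_h)        # hidden node values
    o_pre = h @ w_ho + b_o             # output pre-activation sums
    e = np.exp(o_pre - np.max(o_pre))  # shift for numerical stability
    return e / e.sum()                 # softmax: probabilities summing to 1

# A tiny 3-4-2 network with arbitrary fixed weights.
rng = np.random.default_rng(0)
w_ih = rng.normal(scale=0.5, size=(3, 4))  # input-to-hidden weights
b_h = np.zeros(4)                          # hidden biases
w_ho = rng.normal(scale=0.5, size=(4, 2))  # hidden-to-output weights
b_o = np.zeros(2)                          # output biases

x = np.array([1.0, 2.0, 3.0])
probs = feed_forward(x, w_ih, b_h, w_ho, b_o)
print(probs)  # two output values that sum to 1.0
```

Modifying a sketch like this, say by swapping tanh for ReLU, or changing the number of hidden nodes, is exactly the kind of experiment suggested to the attendees.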
There was a lot of interest in the talk: several hundred people watched it online as it was streamed, and many more have watched the recording since.
I speak at quite a few conferences. Maybe I can do my NN input-output talk again to a larger audience.