I Give a Talk on Neural Network Error and Accuracy

Becoming an expert with neural networks is difficult and easy at the same time. The process is difficult because there are lots of things you have to learn, maybe 40 to 50 key concepts. The process is easy because each of these topics is relatively simple. But the process is difficult again because there are many relationships between the topics that must be learned too.

Error and accuracy are a good example. In my talk I first explained the mathematical idea of cross entropy error (also called “log-loss”; having multiple terms for the same concept is a common obstacle in machine learning). Then I explained how cross entropy error works in the specific case of neural networks. Next I explained how the choice of cross entropy error instead of mean squared error affects the back-propagation algorithm. And finally I presented code examples of how everything works.

Whew! That was a lot of information. And even so, I left out many details. To summarize my talk:

1. Cross entropy error compares predicted probabilities with actual probabilities.
2. The equation is “minus the sum of actual times the log of predicted”: CE = -Σ actual[i] * log(predicted[i]).
3. For multiclass neural classification, all but one term vanishes because the actual probabilities are one-hot encoded, so all but one of them is 0.
4. For binary classification, the equation simplifies to -[y * log(p) + (1 - y) * log(1 - p)], which looks different but is actually the same computation.
5. If you use cross entropy error, the calculation of the weight-delta for back-propagation simplifies because the extra derivative term drops out of the output gradient (see the sketches after this list).
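
To make points 1 through 4 concrete, here is a minimal sketch in Python with NumPy. The function name and the example numbers are mine, not from the talk:

```python
import numpy as np

def cross_entropy(predicted, actual):
    # CE = -sum over i of actual[i] * log(predicted[i])
    return -np.sum(actual * np.log(predicted))

# Multiclass: 3 classes, actual probabilities are one-hot.
predicted = np.array([0.20, 0.70, 0.10])
actual = np.array([0.0, 1.0, 0.0])
print(cross_entropy(predicted, actual))  # -log(0.70), approx. 0.3567
# Only the middle term survives because the other actuals are 0.

# Binary: the familiar two-term form is the same computation,
# written out with a two-element one-hot vector.
p, y = 0.70, 1.0  # predicted probability of class 1, actual class
print(-(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))  # approx. 0.3567
print(cross_entropy(np.array([1.0 - p, p]), np.array([1.0 - y, y])))  # same value
```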
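And a sketch of point 5. With cross entropy error (and softmax output activation) the output-node gradient signal reduces to just (predicted - actual); the mean squared error comparison line below assumes logistic-sigmoid output nodes:

```python
import numpy as np

predicted = np.array([0.20, 0.70, 0.10])  # softmax outputs
actual = np.array([0.0, 1.0, 0.0])        # one-hot target

# Cross entropy + softmax: the signal is just the difference.
ce_signal = predicted - actual
print(ce_signal)  # [ 0.2 -0.3  0.1]

# Mean squared error keeps an extra derivative factor
# (shown here for logistic-sigmoid output nodes).
mse_signal = (predicted - actual) * predicted * (1.0 - predicted)
print(mse_signal)
```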

There’s a well-known claim that it takes about 10,000 hours of studying something to become an expert. Well, that really depends on your definitions. For sure, becoming an expert in neural networks takes at least several thousand hours of study, but most people can become effective practical users of neural models with only a few hundred hours of study.



The Hall of Education at the 1964 World’s Fair in New York
