A Brilliant Research Paper

It’s rare that I’ll describe any research paper as brilliant. Most research papers are tiny increments to existing knowledge. But I recently came across a paper that truly amazes me.

The title is “Deep Learning of Grammatically-Interpretable Representations Through Question-Answering” by Hamid Palangi, Paul Smolensky, Xiaodong He, and Li Deng, all at Microsoft Research. See https://arxiv.org/abs/1705.08432.

I don’t claim to fully understand this paper, but I know enough to understand the problem the research attempts to solve, and the enormous potential impact it could have.

There have been stunning advances in artificial intelligence in the past few years. Think Siri, Cortana, and self-driving cars. Many of these advances use deep neural networks (DNNs), which you can imagine as fantastically complicated math functions. One problem is that it’s effectively impossible to explain why a DNN gives a particular answer: a DNN’s behavior is determined by thousands, or possibly even millions, of numeric constants such as 4.9256, and no single constant means anything by itself.
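To make that concrete, here is a minimal sketch in Python (my own illustration, not code from the paper; every weight value is made up) of a tiny network whose entire behavior comes from a handful of such constants:

```python
import math

# A tiny two-input network whose behavior is determined entirely by
# hard-coded numeric constants. A real DNN works the same way, just
# with millions of weights instead of nine.

w_hidden = [[4.9256, -1.3307],    # weights from the two inputs to hidden node 1
            [0.8841,  2.0173]]    # weights from the two inputs to hidden node 2
b_hidden = [0.1056, -0.7432]      # hidden-node biases
w_output = [1.7765, -3.2114]      # weights from the hidden nodes to the output
b_output = 0.4418                 # output bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Each hidden node computes a weighted sum of the inputs plus a bias,
    # squashed by the sigmoid; the output node does the same to the hidden values.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(w_output, hidden)) + b_output)

# The answer is just arithmetic over the constants above. Asking "why this
# output?" has no better answer than "because the weights are 4.9256, ..."
print(forward([1.0, 0.5]))
```

Even with only nine constants, the output has no human-readable explanation; scale that up by a factor of a million and you have the interpretability problem the paper addresses.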

For example, if an image recognition system correctly identifies a photo of a school bus as “bus”, you don’t know why the system did so. And if a few pixels in the photo are changed, so slightly that the photo looks unchanged to the human eye, and the system now identifies the photo as “ostrich”, you have no way to find out what went wrong.
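Here is a toy sketch (again my own illustration, with made-up weights and a made-up “photo”, not the actual image system) of why an invisibly small change can have such a large effect, using a simple linear scorer:

```python
import random

# For a linear scorer with weights w, nudging every pixel by epsilon in
# the direction of sign(w_i) shifts the score by epsilon * sum(|w_i|),
# which grows with the number of pixels even though each individual
# pixel barely moves.

random.seed(0)
n_pixels = 10_000                                          # made-up image size
w = [random.uniform(-1.0, 1.0) for _ in range(n_pixels)]   # made-up weights
x = [random.uniform(0.0, 1.0) for _ in range(n_pixels)]    # made-up "photo"

def score(img):
    # Positive score means class A ("bus"), negative means class B ("ostrich").
    return sum(wi * xi for wi, xi in zip(w, img))

epsilon = 0.01   # each pixel moves by at most 1% of its range: invisible to the eye
x_adv = [min(1.0, max(0.0, xi - epsilon * (1.0 if wi > 0 else -1.0)))
         for wi, xi in zip(w, x)]

print(score(x))                  # original score
print(score(x) - score(x_adv))  # shift of roughly epsilon * sum(|w_i|), here ~50
```

The point of the toy is only this: a score built from thousands of opaque constants can be pushed across a decision boundary by changes no human would notice, and the constants themselves offer no explanation.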

The research paper I’m excited about takes a first step towards creating systems that truly seem to understand concepts. As the authors write, “This may be the first time it has been possible to give meaningful interpretations to internal excitation patterns in an abstract neural network, interpretations that can be part of explanations of the network’s decisions. It is just the beginning of something new, but, critically, it is the beginning of something new.”
