Sentiment Analysis using a PyTorch Neural Network with an EmbeddingBag Layer

In computer science, and in life, it helps to be smart, but it's also important to have determination. I'm not the smartest guy in the universe, but once a problem gets stuck in my head, it stays there until it gets solved.

I've been looking at sentiment analysis using a PyTorch neural network with an EmbeddingBag layer. I started with an example in the PyTorch documentation, but that example uses the AG News dataset, which has 1,000,000 short news snippets, making it extremely unwieldy when you're trying to dissect the example. Additionally, the demo uses the built-in torchtext.datasets.AG_NEWS() class, which magically serves up data in a special format; in real life you must handle the data wrangling yourself.

So, over the past couple of months I’ve been slowly but surely dissecting the documentation example so that I could create my own system. I hit a milestone recently when I got a complete end-to-end example working. I created 20 tiny movie reviews, each of which is labeled as 0 (negative sentiment) or 1 (positive). The goal is to train a neural model to correctly classify a tiny review as positive or negative.
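The actual 20 reviews in my demo aren't shown here, but a hypothetical sample conveys the format: short tokenized reviews, each paired with a 0 (negative) or 1 (positive) label.

```python
# Hypothetical tiny labeled movie reviews in the same spirit as the
# 20-item dataset described above (0 = negative, 1 = positive).
# These example reviews are illustrative, not the demo's actual data.
reviews = [
    ("a great movie , highly recommended", 1),
    ("boring and predictable , a waste of time", 0),
    ("fantastic acting and a clever plot", 1),
    ("terrible script , i fell asleep", 0),
]

for text, label in reviews:
    tag = "positive" if label == 1 else "negative"
    print(f"{label} ({tag}): {text}")
```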

In most natural language processing (NLP) problem scenarios, each word in a sequence/sentence is converted to an integer index using a Vocabulary object, and then the index representing the word is converted to a numeric vector of about 100 values, called a word embedding. Each word embedding is fed sequentially into an extremely complex neural system, typically an LSTM for moderate-length input or a Transformer for long input.
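The two steps above can be sketched in a few lines. This is a minimal illustration with my own toy vocabulary, not the demo's code; real systems use embeddings of roughly 100 dimensions, while I use 6 here to keep it small.

```python
import torch

# A tiny hand-built vocabulary; unknown words map to <unk> (index 0).
vocab = {"<unk>": 0, "the": 1, "movie": 2, "was": 3, "great": 4}

def encode(sentence):
    # Step 1: convert each word to its integer index.
    return [vocab.get(w, vocab["<unk>"]) for w in sentence.split()]

indices = encode("the movie was great")  # [1, 2, 3, 4]

# Step 2: an embedding layer turns each index into a dense vector.
emb = torch.nn.Embedding(num_embeddings=len(vocab), embedding_dim=6)
vectors = emb(torch.tensor(indices))     # shape [4, 6]: one vector per word
print(vectors.shape)
```

The resulting sequence of per-word vectors is what would be fed, one step at a time, into an LSTM or Transformer.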

An EmbeddingBag layer converts an entire sequence/sentence to a single numeric vector. This is dramatically simpler than the per-word embedding approach, but still extremely tricky (just like all NLP problems).
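Here is a minimal sketch of an EmbeddingBag, again with a toy vocabulary of my own rather than the demo's. By default the layer averages the embeddings of all the words in a sentence into one vector, and it expects sentences packed into a single flat index tensor plus an offsets tensor marking where each sentence starts.

```python
import torch

# Toy vocabulary for illustration only (not the demo's vocabulary).
vocab = {"<unk>": 0, "a": 1, "great": 2, "movie": 3, "terrible": 4, "film": 5}

bag = torch.nn.EmbeddingBag(num_embeddings=len(vocab), embedding_dim=6)

# Two sentences packed into one flat tensor of word indices.
text = torch.tensor([1, 2, 3,   # "a great movie"
                     4, 5])     # "terrible film"
offsets = torch.tensor([0, 3])  # each sentence's start position in 'text'

out = bag(text, offsets)        # shape [2, 6]: one vector per sentence
print(out.shape)
```

Those per-sentence vectors then feed a small classifier head, rather than a recurrent network processing one word at a time.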

I intend to tidy up my demo program and write up an explanation and then publish it in Microsoft Visual Studio Magazine. Even though the demo program is only about 200 lines long, it is very dense in terms of ideas so my explanation will likely take two or three articles.

People who aren’t programmers or developers or data scientists don’t understand our world (if you’re reading this blog post, you are probably part of “our world”). We don’t relentlessly work on difficult problems because of some external force — we do so because our brains are wired that way.

The history of computer science is largely one of men who had relentless determination to create. Left: Wilhelm Schickard (1592–1635) designed, but did not build, a "calculating clock" that would have performed addition, subtraction, multiplication and division. Center: The Z1 mechanical computer was built by Konrad Zuse (1910–1995) in 1937. It weighed about 2,000 pounds and had 20,000 parts. The Z1 contained almost all the parts of a modern computer but wasn't reliable. Right: In 1978, some MIT students built a Tinkertoy mechanical computer from 10,000 parts and fishing line. It was hard-wired to play tic-tac-toe.

This entry was posted in PyTorch.

2 Responses to Sentiment Analysis using a PyTorch Neural Network with an EmbeddingBag Layer

  1. Thorsten Kleppe says:

I love "our world" because it gives us a mission for over 100 lifetimes. When people talk about the fathers of machine learning, I always think of you, James, and I wonder if you and I stand where founding fathers like Leibniz, Turing, and Rosenblatt stand. I really hope that those who come after us can still learn from you. I don't know a better teacher.

The announcement of the article is fantastic.

    The post today is top again, the demo super understandable, the footnote a dream, as if it all goes without saying, which it doesn’t. Thanks James.

    Today you are the smartest guy I can think of. Who should be smarter?

  2. Thank you for the nice comments Thorsten. I agree with you that the scientists who come after us in the next generation will have amazing opportunities, much like the current generation of scientists benefited from the work of people like Alan Turing, Donald Knuth, John von Neumann, and Claude Shannon.
I'm not that smart, but I do have above-average persistence that allows me to work on problems until they get solved. JM
