Fine-Tuning a Hugging Face DistilBERT Model for IMDB Sentiment Analysis

Over the past few weeks I’ve been walking through some of the examples in the Hugging Face (HF) code library. HF provides a set of APIs over Transformer Architecture (TA) models for natural language processing (NLP). Using HF is not simple but it’s much easier than implementing a TA system from scratch, which can take weeks of effort.

My recent HF adventure dissected an example that shows how to fine-tune an existing DistilBERT TA model for the well-known IMDB movie review sentiment analysis problem. The goal is to create a model that accepts the text of a movie review and returns a value less than 0.5 for a negative review or a value greater than 0.5 for a positive review.
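Strictly speaking, the HF sequence classification model emits two raw logit values (one per class) rather than a single score; the less-than/greater-than 0.5 interpretation comes from applying softmax and reading off the positive-class probability. A minimal sketch of that conversion (the logit values below are made up for illustration):

```python
import math

def positive_probability(logits):
  # convert a [neg_logit, pos_logit] pair to the probability
  # of the positive class using a numerically stable softmax
  m = max(logits)
  exps = [math.exp(x - m) for x in logits]
  return exps[1] / sum(exps)

print(positive_probability([-1.2, 2.3]))  # greater than 0.5: positive review
```

Because softmax is monotonic, whichever logit is larger always wins; the probability form just makes the 0.5 threshold explicit.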

My primary takeaway was that TA systems are very, very complex. A thorough examination of the documentation example would take many days, even for someone like me who is experienced with PyTorch.

The demo program has seven major steps:

1. load raw IMDB text into memory
2. create an HF DistilBERT tokenizer
3. tokenize the raw IMDB text
4. convert raw IMDB text to PyTorch Datasets
5. load pretrained DistilBERT model
6. train / fine-tune model using IMDB data
7. save fine-tuned model
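To make steps 3 and 4 concrete: the tokenizer output is essentially a dictionary of parallel lists, token ids plus an attention mask that zeroes out padding, and that is exactly the structure the PyTorch Dataset wrapper in the demo consumes. A tiny hand-built stand-in (the id values are invented for illustration, not real DistilBERT vocabulary lookups):

```python
# hand-built stand-in for tokenizer output; id values are invented
encodings = {
  "input_ids":      [[101, 2023, 3185, 2001, 2307, 102, 0, 0]],
  "attention_mask": [[  1,    1,    1,    1,    1,   1, 0, 0]],
}
labels = [1]  # 1 = positive review, 0 = negative

# each padded position (trailing 0 ids) is masked out
for ids, mask in zip(encodings["input_ids"],
                     encodings["attention_mask"]):
  assert len(ids) == len(mask)
  assert all(m == 0 for i, m in zip(ids, mask) if i == 0)
```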

Each step is complicated. Experimenting with the demo was difficult because the IMDB dataset is large, so everything, especially loading and training, takes a long time. Therefore, I took the original IMDB data, which has 50,000 reviews (25,000 train, 25,000 test, with half of each positive and half negative), and chopped it down to just 200 train (100 positive, 100 negative) and 200 test (100 positive, 100 negative).
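I did the trimming on the raw review files themselves, but the same balanced subsampling can be expressed in a few lines of code. Here `take_balanced` is a hypothetical helper, not part of the demo:

```python
def take_balanced(texts, labels, n_per_class):
  # keep at most n_per_class items of each label (0 = neg, 1 = pos)
  keep_texts, keep_labels = [], []
  counts = {0: 0, 1: 0}
  for text, label in zip(texts, labels):
    if counts[label] < n_per_class:
      keep_texts.append(text)
      keep_labels.append(label)
      counts[label] += 1
  return keep_texts, keep_labels

small_texts, small_labels = take_balanced(
  ["good", "bad", "great", "awful"], [1, 0, 1, 0], 1)
print(small_texts)   # ['good', 'bad']
```

Scanning in order and counting per class keeps the subset balanced no matter how the full data is ordered.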

I now have a pretty good grasp of the main ideas of fine-tuning an HF TA model. Transformer architecture models aren’t science fiction but understanding them does take effort.

Here are three of my favorite science fiction movies of the 1950s that feature flying saucer style spaceships. My reviews of all three movies would have “positive” sentiment. Left: “Forbidden Planet” (1956). Center: “Invaders from Mars” (1953). Right: “Earth vs. the Flying Saucers” (1956).

Demo code:

# fine-tune an HF pretrained model for IMDB sentiment analysis
# zipped raw data at:

from pathlib import Path
from transformers import DistilBertTokenizerFast
import torch
from torch.utils.data import DataLoader
from transformers import DistilBertForSequenceClassification, AdamW
from transformers import logging  # to suppress warnings

device = torch.device('cpu')

class IMDbDataset(torch.utils.data.Dataset):
  def __init__(self, encodings, labels):
    self.encodings = encodings
    self.labels = labels

  def __getitem__(self, idx):
    item = {key: torch.tensor(val[idx]) for key, val \
      in self.encodings.items()}
    item['labels'] = torch.tensor(self.labels[idx])
    return item

  def __len__(self):
    return len(self.labels)

def read_imdb_split(split_dir):
  split_dir = Path(split_dir)
  texts = []
  labels = []
  for label_dir in ["pos", "neg"]:
    for text_file in (split_dir/label_dir).iterdir():
      texts.append(text_file.read_text(encoding='utf-8'))
      labels.append(0 if label_dir == "neg" else 1)
  return texts, labels

def main():
  print("\nBegin IMDB sentiment using HugFace library ")
  logging.set_verbosity_error()  # suppress wordy warnings

  print("\nLoading data from file into memory ")
  train_texts, train_labels = \
    read_imdb_split(".\\DataSmall\\aclImdb\\train")  # path assumed
  test_texts, test_labels = \
    read_imdb_split(".\\DataSmall\\aclImdb\\test")   # path assumed
  print("Done ")

  print("\nTokenizing train, validate, test text ")
  tokenizer = \
    DistilBertTokenizerFast.from_pretrained( \
    'distilbert-base-uncased')  # checkpoint name assumed
  train_encodings = \
    tokenizer(train_texts, truncation=True, padding=True)
  test_encodings = \
    tokenizer(test_texts, truncation=True, padding=True)
  print("Done ")

  print("\nLoading tokenized text into Pytorch Datasets ")
  train_dataset = IMDbDataset(train_encodings, train_labels)
  test_dataset = IMDbDataset(test_encodings, test_labels)
  print("Done ")

  print("\nLoading pre-trained DistilBERT model ")
  model = \
    DistilBertForSequenceClassification.from_pretrained( \
    'distilbert-base-uncased')  # checkpoint name assumed
  model.to(device)
  model.train()  # set mode
  print("Done ")

  print("\nLoading Dataset bat_size = 10 ")
  train_loader = DataLoader(train_dataset, \
    batch_size=10, shuffle=True)
  print("Done ")

  print("\nFine-tuning the model ")
  optim = AdamW(model.parameters(), lr=5e-5)
  for epoch in range(3):
    epoch_loss = 0.0
    for (b_ix, batch) in enumerate(train_loader):
      optim.zero_grad()  # reset accumulated gradients
      input_ids = batch['input_ids'].to(device)
      attention_mask = batch['attention_mask'].to(device)
      labels = batch['labels'].to(device)
      outputs = model(input_ids, \
        attention_mask=attention_mask, labels=labels)
      loss = outputs[0]
      epoch_loss += loss.item()  # accumulate batch loss
      loss.backward()  # compute gradients
      optim.step()     # update weights
      if b_ix % 5 == 0:  # 200 train items, 20 batches of 10
        print(" batch = %5d curr batch loss = %0.4f " % \
          (b_ix, loss.item()))
    print("end epoch = %4d  epoch loss = %0.4f " % \
      (epoch, epoch_loss))

  print("Training done ")

  print("\nSaving tuned model state ")
  model.eval()  # set mode
  torch.save(model.state_dict(), \
    ".\\Models\\imdb_state.pt")  # just state; file name assumed
  print("Done ")
  print("\nEnd demo ")

if __name__ == "__main__":
  main()