Why I Prefer Keras and CNTK to PyTorch

There are many neural network code libraries. The two I like best are Microsoft’s CNTK and Keras (running on top of Google’s TensorFlow). I am not a fan of PyTorch.

There’s nothing technically wrong with PyTorch, and many of my colleagues use it as their neural network library of choice. But PyTorch just doesn’t feel right to me. In the years before NN libraries existed, I coded neural networks from scratch many times, so I have a very good understanding of what goes on behind the scenes. When I use PyTorch, though, the API doesn’t match my mental model of how NNs work. There’s a weird dissonance there that I can’t quite shake.

When I use CNTK, I have a good idea of how the CNTK code maps to fundamental NN code operations. The same is true when I use Keras or even raw TensorFlow.
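
For example, here’s a minimal Keras sketch of the same kind of 4-(5)-3 iris classifier that the PyTorch program below attempts. This is just an illustrative sketch (the layer sizes and file path are assumptions borrowed from the PyTorch version, not a tested demo), but notice how each statement maps onto a fundamental NN operation: define the layers, pick a loss and an optimizer, train.

# iris_keras.py -- hypothetical sketch, not a tested demo
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(5, input_dim=4, activation='relu'))  # 4 inputs -> 5 hidden nodes
model.add(Dense(3, activation='softmax'))            # 3 output classes
model.compile(loss='categorical_crossentropy', optimizer='sgd',
  metrics=['accuracy'])

data = np.loadtxt(".\\Data\\iris_train_data.txt", delimiter=",",
  dtype=np.float32)
model.fit(data[:, 0:4], data[:, 4:7], epochs=500, verbose=0)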

Now, none of this would matter except for one additional fact: the documentation for PyTorch is absolutely horrendous. Trying to find information about PyTorch is often an exercise in futility. If the PyTorch documentation were good, I’d be able to construct my mental mapping from API calls to the underlying NN operations.

It will be interesting to see what happens over the next two years. Will just one or two NN libraries emerge as de facto standards? Or will there continue to be several libraries, all having significant usage in the ML developer community? I’m usually not shy about making guesses, but this is one question where I have no idea what will happen.

# iris_pytorch.py
# Iris classification with a 4-(5)-3 feed-forward network

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
torch.manual_seed(1)

input_dim = 4; hidden_dim = 5; output_dim = 3
lr = 0.01
max_epochs = 500

class Net(nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.fc1 = nn.Linear(input_dim, hidden_dim)
    self.fc2 = nn.Linear(hidden_dim, output_dim)

  def forward(self, x):
    x = F.relu(self.fc1(x))
    return self.fc2(x)  # raw logits; CrossEntropyLoss applies log_softmax itself

model = Net()

train_file = ".\\Data\\iris_train_data.txt"
test_file = ".\\Data\\iris_test_data.txt"
train_x = np.loadtxt(train_file, usecols=[0,1,2,3],
  delimiter=",", dtype=np.float32)
train_y = np.loadtxt(train_file, usecols=[4,5,6],
  delimiter=",", dtype=np.float32)
train_y = np.argmax(train_y, axis=1)  # one-hot -> class indices
test_x = np.loadtxt(test_file, usecols=[0,1,2,3],
  delimiter=",", dtype=np.float32)
test_y = np.loadtxt(test_file, usecols=[4,5,6],
  delimiter=",", dtype=np.float32)
test_y = np.argmax(test_y, axis=1)  # one-hot -> class indices

my_loss = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=lr)

# train (full-batch: the whole training set is used on every iteration)
X = torch.tensor(train_x, dtype=torch.float32)
Y = torch.tensor(train_y, dtype=torch.int64)  # class indices for CrossEntropyLoss
for epoch in range(max_epochs):
  opt.zero_grad()
  outpt = model(X)
  loss = my_loss(outpt, Y)  # expects raw logits + class-index targets
  loss.backward()
  opt.step()

  if epoch % 100 == 0:
    print('epoch [%d/%d] loss: %.4f' % (epoch+1,
      max_epochs, loss.item()))

# evaluate
model.eval()
with torch.no_grad():
  X = torch.tensor(test_x, dtype=torch.float32)
  Y = torch.tensor(test_y, dtype=torch.int64)
  outpt = model(X)
  _, predicted = torch.max(outpt, dim=1)  # index of the largest logit
  num_correct = torch.sum(Y == predicted).item()
print('Accuracy on test data = %0.2f %%' %
  (100.0 * num_correct / len(test_y)))
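
Two PyTorch details are worth calling out here: nn.CrossEntropyLoss expects its targets to be class indices (0, 1, 2), not the one-hot vectors stored in the data file, and it applies log_softmax internally, so the forward function should return raw logits rather than applying log_softmax a second time. Neither behavior is obvious from the documentation.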


“Preference Game”, David Gray.
