Getting Ready for the PyTorch 2.0 Neural Network Library

The PyTorch website announced that PyTorch 2.0 is scheduled to be released sometime in March 2023. This is a big deal because major versions (1.0, 2.0, 3.0, and so on) appear only once every few years.

I figured I’d investigate version 2.0. Bottom line: my experiment was only partially successful. Specifically, I was able to get a nightly build of PyTorch 2.0 installed on a Windows 10/11 machine, but the key feature of 2.0 — the compile() function — wasn’t supported on Windows yet. I don’t fully understand the compile() function, but I believe its purpose is to improve speed/performance.

Update: I also tried PyTorch 2.0 on a macOS machine using the February 3 nightly build, but I wasn’t successful.

First, I upgraded Python from my current version 3.7.6 to version 3.9.13 by installing Anaconda3-2022.10. I think Python 3.9 is required for PyTorch 2.0 — the nightly wheel I used is a cp39 build — but I’m not sure. Versioning is always a big problem in the Python/PyTorch ecosystem.
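A quick way to sanity-check the interpreter before grabbing a wheel (my sketch, not part of the original experiment — the cp39 detail comes from the wheel filename below):

```python
import sys

# the nightly wheel I downloaded is a cp39 build, i.e. it targets Python 3.9
major, minor = sys.version_info[:2]
print(f"running Python {major}.{minor}")
if (major, minor) < (3, 9):
  print("warning: a cp39 wheel will not install on this interpreter")
```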

Next, I got one of my standard PyTorch multi-class classification demos running on the current PyTorch version 1.13.1.

Then I went to the nightly build repository at https://download.pytorch.org/whl/nightly/torch/ and downloaded the most recent torch-2.0.0.dev20230122+cpu-cp39-cp39-win_amd64.whl file. I used pip to uninstall PyTorch version 1.13.1 and then installed the 2.0 development version.
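The pip commands were along these lines (a sketch — the wheel filename is the one downloaded above, run from the directory containing it):

```shell
# remove the current stable release first
pip uninstall torch

# install the downloaded nightly wheel (CPU build, Python 3.9, 64-bit Windows)
pip install torch-2.0.0.dev20230122+cpu-cp39-cp39-win_amd64.whl
```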

My demo network was:

import torch as T

class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(6, 10)  # 6-(10-10)-3
    self.hid2 = T.nn.Linear(10, 10)
    self.oupt = T.nn.Linear(10, 3)

    T.nn.init.xavier_uniform_(self.hid1.weight)
    T.nn.init.zeros_(self.hid1.bias)
    T.nn.init.xavier_uniform_(self.hid2.weight)
    T.nn.init.zeros_(self.hid2.bias)
    T.nn.init.xavier_uniform_(self.oupt.weight)
    T.nn.init.zeros_(self.oupt.bias)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    z = T.log_softmax(self.oupt(z), dim=1)  # NLLLoss() 
    return z
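As a quick smoke test (my sketch, not part of the original demo), the network can be exercised with a dummy batch. Because log_softmax() produces log-probabilities, applying exp() to each output row should give values that sum to 1:

```python
import torch as T

class Net(T.nn.Module):  # same 6-(10-10)-3 architecture as above
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(6, 10)
    self.hid2 = T.nn.Linear(10, 10)
    self.oupt = T.nn.Linear(10, 3)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    return T.log_softmax(self.oupt(z), dim=1)

net = Net()
X = T.randn(4, 6)             # dummy batch: 4 items, 6 predictors each
out = net(X)                  # log-probabilities, shape [4, 3]
print(out.shape)              # torch.Size([4, 3])
print(T.exp(out).sum(dim=1))  # each row sums to (approximately) 1.0
```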

The statements to use the new compile() function were:

  print("Creating 6-(10-10)-3 neural network ")
  net_basic = Net().to(device)
  net = T.compile(net_basic)
  net.train()  # set into training mode

Alas, when I ran the program I got a warning message: “UserWarning: Windows is not currently supported, torch.compile() will do nothing.”
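Until Windows support arrives, one defensive pattern (my sketch — it assumes nothing beyond what the warning implies) is to guard the compile() call so the same script runs under both 1.x and the 2.0 nightly:

```python
import sys
import torch as T

# stand-in network (hypothetical; the demo above uses the Net class instead)
net = T.nn.Sequential(T.nn.Linear(6, 10), T.nn.Tanh(), T.nn.Linear(10, 3))

out = net(T.randn(2, 6))  # eager-mode forward pass works on any version/OS
print(out.shape)          # torch.Size([2, 3])

# compile() exists only in 2.0+, and this nightly says it does nothing on
# Windows -- so check before calling rather than assuming support
if hasattr(T, "compile") and not sys.platform.startswith("win"):
  net = T.compile(net)    # wrapping is cheap; actual compilation is deferred
```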

Oh well, the process could have been much more painful and I’m one step closer to installing and using PyTorch 2.0.



The first generation of jet fighters were those developed in the 1940s, at the end of World War II. The second generation of fighters were those developed in the 1950s. There were many successful second generation jet fighters, such as the Lockheed F-104 Starfighter and the Vought F-8 Crusader. But there were many unsuccessful second generation fighter designs too.

Left: The Curtiss-Wright XF-87 Blackhawk. Although the XF-87 was a good plane, a competing design, the Northrop F-89 Scorpion, was judged superior and went into production.

Center: The McDonnell XF-88 Voodoo was a good design but it was canceled due to budget cuts related to the Korean War. A few years later, the design was upgraded and put into production as the successful F-101 Voodoo.

Right: The Lockheed XF-90 was an absolutely beautiful plane, but it was too heavy and underpowered, and so it never went into production. The poor XF-90 never even got a nickname. Some of the technology developed for the XF-90 was later used for the successful F-104.


This entry was posted in PyTorch. Bookmark the permalink.
