I use Windows machines for most of my work, but I also use macOS and Linux machines. To stay in practice with all three platforms, one morning I figured I’d run the latest version of one of my standard PyTorch demo programs on my MacBook laptop.
Windows          Mac
--------------------------------
Notepad          TextEdit
Ctrl-c           Command-c
PrtScn key       Shift-Command-3
File Explorer    Finder
Chrome           Safari
cmd              Terminal (bash)
dir              ls
md               mkdir
cls              clear
I logged in to my MacBook, opened a Terminal (bash) shell, and checked that my existing Python 3.7.6 and PyTorch 1.12.1-CPU installations were working correctly. They were.
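A quick way to do that check is from a Python prompt. A minimal sketch (the version numbers printed will match whatever is installed on your machine):

```python
import sys
import torch as T

# verify the installed versions; the article used
# Python 3.7.6 and PyTorch 1.12.1-CPU
print(sys.version)     # Python interpreter version
print(T.__version__)   # PyTorch version string
```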
My demo was a regression problem. The goal of a regression problem is to predict a single numeric value. For my demo, the goal is to predict a person’s income from sex (male = -1, female = +1), age, state (Michigan, Nebraska, Oklahoma), and political leaning (conservative, moderate, liberal). See the data and program for the Windows version at https://jamesmccaffrey.wordpress.com/2022/10/10/regression-people-income-using-pytorch-1-12-on-windows-10-11/.
I copied the training and test data from the page linked above and saved it as people_train.txt and people_test.txt. The data looks like:
# sex  age   state    income   politics
# -----------------------------
  1, 0.24, 1,0,0, 0.2950, 0,0,1
 -1, 0.39, 0,0,1, 0.5120, 0,1,0
  1, 0.63, 0,1,0, 0.7580, 1,0,0
 -1, 0.36, 1,0,0, 0.4450, 0,1,0
. . .
The network definition looks like:
class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(8, 10)  # 8-(10-10)-1
    self.hid2 = T.nn.Linear(10, 10)
    self.oupt = T.nn.Linear(10, 1)

    T.nn.init.xavier_uniform_(self.hid1.weight)
    T.nn.init.zeros_(self.hid1.bias)
    T.nn.init.xavier_uniform_(self.hid2.weight)
    T.nn.init.zeros_(self.hid2.bias)
    T.nn.init.xavier_uniform_(self.oupt.weight)
    T.nn.init.zeros_(self.oupt.bias)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    z = self.oupt(z)  # regression: no activation
    return z
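As a quick smoke test, you can instantiate the network and push a dummy input through it to confirm the 8-(10-10)-1 shapes line up. A self-contained sketch (it restates the network without the init calls, which don't affect shapes):

```python
import torch as T

# minimal restatement of the 8-(10-10)-1 network, just for a shape check
class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(8, 10)
    self.hid2 = T.nn.Linear(10, 10)
    self.oupt = T.nn.Linear(10, 1)

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = T.tanh(self.hid2(z))
    return self.oupt(z)  # regression: no output activation

net = Net().eval()
x = T.zeros(1, 8)        # one dummy item: 8 predictor values
with T.no_grad():
  y = net(x)
print(y.shape)           # torch.Size([1, 1])
```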
I implemented program-defined train() and accuracy() functions. The train() function is:
def train(model, ds, bs, lr, me, le):
  # dataset, bat_size, lrn_rate, max_epochs, log interval
  train_ldr = T.utils.data.DataLoader(ds, batch_size=bs,
    shuffle=True)
  loss_func = T.nn.MSELoss()
  optimizer = T.optim.Adam(model.parameters(), lr=lr)

  for epoch in range(0, me):
    epoch_loss = 0.0  # for one full epoch
    for (b_idx, batch) in enumerate(train_ldr):
      X = batch[0]  # predictors
      y = batch[1]  # target income
      optimizer.zero_grad()
      oupt = model(X)
      loss_val = loss_func(oupt, y)  # a tensor
      epoch_loss += loss_val.item()  # accumulate
      loss_val.backward()  # compute gradients
      optimizer.step()  # update weights
    if epoch % le == 0:
      print("epoch = %4d | loss = %0.4f" % (epoch, epoch_loss))
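The accuracy() function isn't shown above. A plausible sketch, following the common convention for regression of counting a prediction as correct when it falls within a given percentage of the true target value (the function name matches the article; the pct_close parameter and its 0.10 default are my assumptions):

```python
import torch as T

def accuracy(model, ds, pct_close=0.10):
  # hypothetical sketch: a prediction counts as correct when it is
  # within pct_close (e.g. 10%) of the true income value
  n_correct = 0; n_wrong = 0
  model.eval()
  with T.no_grad():
    for i in range(len(ds)):
      X, y = ds[i]          # predictors, target
      oupt = model(X)
      if T.abs(oupt - y) < T.abs(pct_close * y):
        n_correct += 1
      else:
        n_wrong += 1
  return n_correct / (n_correct + n_wrong)
```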
After saving the data, I slightly edited the Windows version of the Torch program by changing the Windows “\\” file path separators to the “/” separators used by macOS and Linux. I also edited the PeopleDataset class to use comma characters as the delimiter instead of the tab characters that I used for the Windows version.
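The edited PeopleDataset class isn't listed in this post. A sketch of what it plausibly looks like after the delimiter change, assuming the column layout of the data shown above (sex, age, three state columns, income, three politics columns); the np.loadtxt call and column indices are my reconstruction, not the article's exact code:

```python
import numpy as np
import torch as T

class PeopleDataset(T.utils.data.Dataset):
  def __init__(self, src_file):
    # comma-delimited file; lines starting with "#" are comments
    all_xy = np.loadtxt(src_file, usecols=range(0, 9),
      delimiter=",", comments="#", dtype=np.float32)
    tmp_x = all_xy[:, [0, 1, 2, 3, 4, 6, 7, 8]]  # 8 predictors
    tmp_y = all_xy[:, 5].reshape(-1, 1)          # income target
    self.x_data = T.tensor(tmp_x, dtype=T.float32)
    self.y_data = T.tensor(tmp_y, dtype=T.float32)

  def __len__(self):
    return len(self.x_data)

  def __getitem__(self, idx):
    return self.x_data[idx], self.y_data[idx]
```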
After correcting a couple of the usual minor typos, the regression program ran on my MacBook.
According to the not-so-always-accurate Internet, three jobs that have surprisingly high incomes are: 1.) crime scene cleanup, 2.) exotic dancer, 3.) movie cat wrangler. For me these are: 1.) no thanks, 2.) wrong gender, 3.) not qualified.