Category Archives: PyTorch

Regression Using PyTorch 1.12.1-CPU on MacOS

I use Windows OS machines for most of my work, but I also use MacOS and Linux machines. I try to keep in practice with all three platforms, so one morning I figured I’d run the latest … Continue reading

Posted in PyTorch | Leave a comment

Binary Classification Using PyTorch 1.12.1-CPU on MacOS

I use Windows OS most of the time, MacOS less often, and Linux occasionally. I try to keep in practice with all three platforms, so one morning I figured I’d run the latest version of one of my … Continue reading

Posted in PyTorch | Leave a comment

Revisiting Neural Warm-Start Shrink-Perturb

Warm-start training occurs when a trained neural network model already exists and new data arrives every so often. One example is a system that predicts house prices, where new sales data arrives every week. In pure warm-start training, the existing … Continue reading
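
The shrink-perturb idea is to scale the existing weights toward zero and then add a small amount of random noise before training resumes on the new data. A minimal sketch of the technique, where the shrink factor and noise level are just illustrative values:

  import torch as T

  def shrink_perturb(net, shrink=0.5, sigma=0.01):
    # shrink: scale each weight and bias toward zero
    # perturb: add a small amount of Gaussian noise
    with T.no_grad():
      for p in net.parameters():
        p.mul_(shrink)
        p.add_(sigma * T.randn_like(p))
    return net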

Posted in Machine Learning, PyTorch | Leave a comment

Computing log_softmax() for PyTorch Directly

In a PyTorch multi-class classification problem, the basic architecture is to apply log_softmax() activation on the output nodes, in conjunction with NLLLoss() during training. It’s possible to compute softmax() and then apply log(), but it’s slightly more efficient to compute … Continue reading
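
As a rough sketch of the direct computation, log_softmax() subtracts log(sum(exp())) from the raw output values rather than computing softmax() and then applying log(); the tensor values here are just for illustration:

  import torch as T

  z = T.tensor([2.0, 1.0, 0.5])            # raw output node values (logits)

  indirect = T.log(T.softmax(z, dim=0))    # softmax() then log()
  direct = z - T.log(T.sum(T.exp(z)))      # log_softmax() computed directly
  builtin = T.log_softmax(z, dim=0)        # built-in version

  print(indirect); print(direct); print(builtin)   # all three match

In practice the built-in version also subtracts the largest logit inside the log-sum-exp term to avoid numeric overflow.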

Posted in PyTorch | Leave a comment

Custom Loss Functions for PyTorch

The PyTorch neural network code library has built-in loss functions that can handle most scenarios. Examples include NLLLoss() and CrossEntropyLoss() for multi-class classification, BCELoss() for binary classification, and MSELoss() and L1Loss() for regression. Because PyTorch works at a low level … Continue reading
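
Because loss values are just tensors, a custom loss can be written as an ordinary program-defined Python function. A minimal sketch of the idea, using a hypothetical weighted MSE that penalizes under-predictions more than over-predictions:

  import torch as T

  def weighted_mse_loss(predicted, target, wt=2.0):
    # wt applies only when the prediction is too low (illustrative rule)
    err = target - predicted
    sq_err = T.where(err > 0, wt * err * err, err * err)
    return T.mean(sq_err)

  # used in place of a built-in such as T.nn.MSELoss():
  # loss_val = weighted_mse_loss(oupt, batch_y)
  # loss_val.backward()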

Posted in PyTorch | Leave a comment

Multi-Class Classification Using PyTorch 1.12.1-CPU on MacOS

I do most of my work on Windows OS machines. One morning I noticed that my MacBook laptop in my office was collecting dust, so I figured I’d upgrade the existing PyTorch 1.10.0 to version 1.12.1 to make sure there … Continue reading

Posted in PyTorch | Leave a comment

“Regression Using PyTorch, Part 1: New Best Practices” in Visual Studio Magazine

I wrote an article titled “Regression Using PyTorch, Part 1: New Best Practices” in the November 2022 edition of Microsoft Visual Studio Magazine. See https://visualstudiomagazine.com/articles/2022/11/01/pytorch-regression.aspx. A regression problem is one where the goal is to predict a single numeric value. … Continue reading
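
As a quick illustration of the problem type (not a summary of the article’s specific recommendations), a regression network has a single output node with no activation applied to it; the layer sizes here are arbitrary:

  import torch as T

  class Net(T.nn.Module):
    def __init__(self):
      super(Net, self).__init__()
      self.hid1 = T.nn.Linear(8, 10)   # 8 input features (arbitrary)
      self.oupt = T.nn.Linear(10, 1)   # one node = one predicted numeric value

    def forward(self, x):
      z = T.tanh(self.hid1(x))
      z = self.oupt(z)                 # no activation on the output node
      return z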

Posted in PyTorch | Leave a comment

Analyzing PyTorch Using the Bottleneck Utility

I hadn’t used the torch.utils.bottleneck tool for quite some time, so just for hoots I figured I’d see if anything had changed since my last use of the tool. The bottleneck tool analyzes a program run using the Python profiler … Continue reading
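
The tool is run from the command line against an existing script (the script name here is just a placeholder) and reports results from both the standard Python cProfile profiler and the PyTorch autograd profiler:

  python -m torch.utils.bottleneck my_train_script.py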

Posted in PyTorch | Leave a comment

PyTorch Model Training Using a Program-Defined Function vs. Inline Code

In most cases, I train a PyTorch model using inline code inside a main() function. For example:

  import torch as T
  . . .
  def main():
    # 0. get started
    print("Begin People predict politics type ")
    T.manual_seed(1)
    np.random.seed(1)
    # 1. …

Continue reading
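
The alternative is to place the training loop in a program-defined function and call it from main(). A minimal sketch, with hypothetical names and hyperparameter values, assuming the Dataset object returns (input, target) tuples:

  import torch as T

  def train(net, ds, bs, lr, max_epochs):
    # ds = PyTorch Dataset, bs = batch size, lr = learning rate
    train_ldr = T.utils.data.DataLoader(ds, batch_size=bs, shuffle=True)
    loss_func = T.nn.NLLLoss()
    optimizer = T.optim.SGD(net.parameters(), lr=lr)
    net.train()
    for epoch in range(max_epochs):
      for (X, y) in train_ldr:
        optimizer.zero_grad()
        oupt = net(X)
        loss_val = loss_func(oupt, y)
        loss_val.backward()
        optimizer.step()
    return net

  # in main():  net = train(net, train_ds, bs=10, lr=0.01, max_epochs=100)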

Posted in PyTorch | 1 Comment

An Example of PyTorch Hyperparameter Random Search

Bottom line: Hyperparameter random search can be effective, but the difficult part is determining what to parameterize and the range of possible parameter values. When creating a neural network prediction model, there are many architecture hyperparameters (number of hidden layers, number … Continue reading
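
A minimal sketch of the search loop, with a hypothetical search space and a placeholder train-and-evaluate function standing in for the real model training:

  import numpy as np

  rnd = np.random.RandomState(1)

  def train_and_eval(n_hid, lr, bs):
    # placeholder: build a PyTorch network with n_hid hidden nodes,
    # train it with learning rate lr and batch size bs, return accuracy
    return rnd.random_sample()   # dummy value for illustration

  best_acc = 0.0; best_params = None
  for trial in range(20):
    n_hid = int(rnd.choice([8, 10, 12, 16]))   # number of hidden nodes
    lr = 10 ** rnd.uniform(-3, -1)             # learning rate (log-uniform)
    bs = int(rnd.choice([4, 8, 10, 16]))       # batch size
    acc = train_and_eval(n_hid, lr, bs)
    if acc > best_acc:
      best_acc = acc; best_params = (n_hid, lr, bs)

  print("best accuracy = %0.4f " % best_acc)
  print("best (n_hid, lr, bs) = " + str(best_params))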

Posted in PyTorch | Leave a comment