The longest journey begins with a single step. I’ve set out on a journey to master PyTorch, one of the major neural network libraries. My first step, after installing a CPU-only version of PyTorch on Windows, was to look at the most basic PyTorch data structure, a Tensor.
After plodding through the PyTorch documentation, I believe there are three equivalent ways to create an uninitialized Tensor object (i.e., one with junk values). A Tensor is essentially a NumPy-style array (one-dimensional or multidimensional) that can also be processed by a GPU.
I know from previous experience that creating an uninitialized Tensor isn’t going to be a common coding pattern, but I also know that it’s a mistake to skip over the fundamentals.
Briefly, if the torch module is aliased as T then T.empty() and T.FloatTensor() and T.Tensor() can create an uninitialized Tensor.
The FloatTensor() constructor creates a Tensor with junk values of type float32, the default numeric type. The Tensor() constructor is essentially an alias for FloatTensor(). Using empty() you can specify the data type to use via its dtype parameter.
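To see the three approaches side by side, here’s a short sketch (the shapes and the int64 dtype are just examples I picked):

```python
import torch as T

# three equivalent ways to get an uninitialized 2x3 Tensor (junk values)
a = T.empty(2, 3)        # default dtype is float32
b = T.FloatTensor(2, 3)  # explicitly float32
c = T.Tensor(2, 3)       # alias for FloatTensor

# empty() accepts a dtype argument to override the default
d = T.empty(2, 3, dtype=T.int64)

print(a.dtype)  # torch.float32
print(d.dtype)  # torch.int64
```

The values inside a, b, c, d are whatever happened to be in memory, so don’t expect them to be zeros.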
At this point I’m not entirely sure if there are, as I suspect, special types applicable to GPUs (recall I’m using a CPU-only machine).
The moral of my story, to myself, is that PyTorch operates at a very low level and effectively has its own type system that must be learned. An analogy would be learning a new spoken language that uses a different alphabet.