Assigning Fixed Weight and Bias Values to a PyTorch Neural Network

Sometimes it’s useful to assign fixed weight and bias values to a neural network. To do so requires a knowledge of how those values are stored.

I wrote a short demo program to illustrate the technique. The demo creates a 3-4-2 neural network. The single hidden layer is named hid1 and has a total of 3 x 4 = 12 weights and 4 biases. PyTorch stores the weight values in a 4×3 shaped matrix named hid1.weight. The bias values are stored in a vector named hid1.bias.

Similarly, the output layer is named oupt and has a total of 4 x 2 = 8 weights and 2 biases. They're stored in a 2×4 shaped matrix named oupt.weight and a vector named oupt.bias.
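You can verify these shapes directly. This quick check uses standalone Linear layers (the variable names here are for illustration, not part of the demo Net class):

```python
import torch as T

hid1 = T.nn.Linear(3, 4)  # hidden layer of a 3-4-2 net
oupt = T.nn.Linear(4, 2)  # output layer

print(hid1.weight.shape)  # torch.Size([4, 3])
print(hid1.bias.shape)    # torch.Size([4])
print(oupt.weight.shape)  # torch.Size([2, 4])
print(oupt.bias.shape)    # torch.Size([2])
```

Notice the weight matrix shape is (out_features, in_features), which is why a 3-to-4 layer has a 4×3 weight matrix rather than 3×4.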

The demo code iterates through the weights and biases and stores the values 0.01, 0.02, . . . 0.26 into the network.
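An equivalent vectorized fill is possible. This is a sketch of an alternative, not the demo's loop-based approach, and it uses copy_() under no_grad(), which is the safe general pattern:

```python
import torch as T

hid1 = T.nn.Linear(3, 4)
oupt = T.nn.Linear(4, 2)

# 26 values: 0.01, 0.02, . . . 0.26
vals = T.arange(1, 27, dtype=T.float32) * 0.01

with T.no_grad():
  hid1.weight.copy_(vals[0:12].reshape(4, 3))   # first 12 values
  hid1.bias.copy_(vals[12:16])                  # next 4
  oupt.weight.copy_(vals[16:24].reshape(2, 4))  # next 8
  oupt.bias.copy_(vals[24:26])                  # last 2
```

The fill order here (hid1 weights, hid1 biases, oupt weights, oupt biases) matches the demo's nested loops.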

The diagram above shows the conceptual view of the neural network, and a representation of the weight and bias data structures.

Important note: My demo sets the values of the weights and biases in the Net class __init__() method. If you want to modify the weights and biases of a neural network after the network has been instantiated, you need to do so inside a torch.no_grad() block so that the gradients don’t get messed up.
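A minimal sketch of that after-instantiation pattern, using a standalone Linear layer rather than the demo Net class (the specific values 0.5 and 0.1 are just for illustration):

```python
import torch as T

lin = T.nn.Linear(3, 4)  # stand-in for a layer of an instantiated network

with T.no_grad():        # keep autograd from tracking the in-place assignment
  lin.weight[0][0] = 0.5
  lin.bias[0] = 0.1
```

Without the no_grad() block, directly assigning into a parameter that requires gradients raises a runtime error.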

Software system conceptual diagrams are a facade over the reality and complexity of the underlying code. Here are three remarkable building facades that give a flat wall the appearance of 3D complexity.

Demo code.


# PyTorch 1.10.0-CPU Anaconda3-2020.02  Python 3.7.6
# Windows 10 

import torch as T
device = T.device("cpu")  # apply to Tensor or Module

class Net(T.nn.Module):
  def __init__(self):
    super(Net, self).__init__()
    self.hid1 = T.nn.Linear(3, 4)  # 3-4-2
    self.oupt = T.nn.Linear(4, 2)

    v = 0.01

    for i in range(4):      # hid1 weights, 4x3
      for j in range(3):
        self.hid1.weight.data[i][j] = v
        v += 0.01
    for i in range(4):      # hid1 biases
      self.hid1.bias.data[i] = v
      v += 0.01

    for i in range(2):      # oupt weights, 2x4
      for j in range(4):
        self.oupt.weight.data[i][j] = v
        v += 0.01
    for i in range(2):      # oupt biases
      self.oupt.bias.data[i] = v
      v += 0.01

  def forward(self, x):
    z = T.tanh(self.hid1(x))
    z = self.oupt(z)  # no softmax for CrossEntropyLoss() 
    return z

def main():
  print("\nBegin ")

  print("\nCreating a 3-4-2 network with fixed wts and biases ")
  net = Net().to(device)

  print("\nhid1 wts and biases: ")
  print(net.hid1.weight.data)
  print(net.hid1.bias.data)

  print("\noupt wts and biases: ")
  print(net.oupt.weight.data)
  print(net.oupt.bias.data)

  print("\nEnd ")

if __name__ == "__main__":
  main()

This entry was posted in PyTorch. Bookmark the permalink.
