How to add parameters in module class in pytorch custom model?

Question:

I tried to find the answer, but I couldn't.

I am making a custom deep learning model using PyTorch. For example:

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()

        self.nn_layers = nn.ModuleList()
        self.layer = nn.Linear(2, 3).double()
        torch.nn.init.xavier_normal_(self.layer.weight)

        self.bias = torch.nn.Parameter(torch.randn(3))

        self.nn_layers.append(self.layer)

    def forward(self, x):
        activation = torch.tanh
        output = activation(self.layer(x)) + self.bias

        return output

If I print

model = Net()
print(list(model.parameters()))

it does not contain model.bias, so
optimizer = torch.optim.Adam(model.parameters()) does not update model.bias.
How can I get around this?
Thanks!

Asked By: CSH


Answers:

You need to register your parameters:

self.register_parameter(name='bias', param=torch.nn.Parameter(torch.randn(3)))
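
For example, here is a minimal sketch of the Net class from the question with the extra bias registered explicitly (layer sizes and initialization are taken from the question):

import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.layer = nn.Linear(2, 3).double()
        torch.nn.init.xavier_normal_(self.layer.weight)
        # registering makes the extra bias visible to model.parameters()
        self.register_parameter(name='bias', param=torch.nn.Parameter(torch.randn(3)))

    def forward(self, x):
        return torch.tanh(self.layer(x)) + self.bias

model = Net()
print([name for name, _ in model.named_parameters()])  # includes 'bias' alongside 'layer.weight' and 'layer.bias'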

Update:
In more recent versions of PyTorch, you no longer need to call register_parameter explicitly; it is enough to assign an nn.Parameter to an attribute of your nn.Module to "notify" PyTorch that this variable should be treated as a trainable parameter:

self.bias = torch.nn.Parameter(torch.randn(3))
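
For instance, here is a quick check (a minimal sketch; the tiny module, data, and the choice of Adam below are just for illustration) that a directly assigned nn.Parameter shows up in model.parameters() and is updated by the optimizer:

import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        # plain attribute assignment is enough to register the parameter
        self.bias = nn.Parameter(torch.randn(3))

    def forward(self, x):
        return x + self.bias

model = Tiny()
print(list(model.named_parameters()))      # contains ('bias', Parameter ...)

optimizer = torch.optim.Adam(model.parameters())
loss = model(torch.zeros(3)).sum()
loss.backward()
optimizer.step()                           # bias gets updated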

Please note that if you want more complex data structures of parameters (e.g., lists or dicts), you should use dedicated containers such as torch.nn.ParameterList or torch.nn.ParameterDict; a plain Python list of nn.Parameter objects will not be registered.
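
As an illustration, here is a minimal sketch of a module that keeps several bias vectors in a torch.nn.ParameterList (the module name and sizes are made up for this example):

import torch
import torch.nn as nn

class MultiBias(nn.Module):
    def __init__(self, num_biases=3, dim=3):
        super().__init__()
        # nn.ParameterList registers every entry; a plain Python list would not
        self.biases = nn.ParameterList(
            [nn.Parameter(torch.randn(dim)) for _ in range(num_biases)]
        )

    def forward(self, x):
        for b in self.biases:
            x = x + b
        return x

model = MultiBias()
print(len(list(model.parameters())))  # 3 registered parameters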

Answered By: Shai