How to freeze parameters when using transfer learning in python-pytorch

Question:

I want to train only the first layer with transfer learning and fix (freeze) the parameters of the other layers.

However, I got an error telling me that **requires_grad = True** is required.
How can I solve this problem?
Below is what I tried and the error I encountered.

from efficientnet_pytorch import EfficientNet
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler

model_b0 = EfficientNet.from_pretrained('efficientnet-b0')
num_ftrs = model_b0._fc.in_features
model_b0._fc = nn.Linear(num_ftrs, 10)

for param in model_b0.parameters():
    param.requires_grad = False

last_layer = list(model_b0.children())[-1]

print(f'last layer: {last_layer}')
for param in last_layer.parameters():
    param.requires_grad = True



criterion = nn.CrossEntropyLoss()
optimizer_ft = optim.SGD(model_b0.parameters(), lr=0.001, momentum=0.9)
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)

model_b0 = train_model(model_b0, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=3)

If I set requires_grad = True for all parameters, the code above runs.

The error is:

      4 optimizer_ft = optim.SGD(model_b7.parameters(), lr=0.001, momentum=0.9)
      5 exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
----> 7 model_b0 = train_model(model_b7, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=15)

Cell In [69], line 43, in train_model(model, criterion, optimizer, scheduler, num_epochs)
     41 loss = criterion(outputs, labels)
---> 43 loss.backward()
     44 optimizer.step()

site-packages/torch/_tensor.py:396, in Tensor.backward(self, gradient, retain_graph, create_graph, inputs)
    394         create_graph=create_graph,
    395         inputs=inputs)
--> 396 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)

site-packages/torch/autograd/__init__.py:173, in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    172 # calls in the traceback and some print out the last line
--> 173 Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
    174     tensors, grad_tensors_, retain_graph, create_graph, inputs,

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Thank you for reading!

Asked By: log moment


Answers:

There are several possible causes of this problem:

  1. The input:
    RuntimeError: element 0 of variables does not require grad and does not have a grad_fn

The tensor you passed in does not have requires_grad=True.

Make sure your new Variable is created with requires_grad=True:

var_xs_h = Variable(xs_h.data, requires_grad=True)
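Note that Variable is deprecated in recent PyTorch versions; the same effect can be achieved directly on a tensor. A minimal sketch, where xs_h stands in for the input tensor from the snippet above:

import torch

# xs_h is a placeholder for whatever tensor is fed into the model
xs_h = torch.randn(4, 3)

# detach it from any previous graph and mark it as requiring gradients
var_xs_h = xs_h.detach().requires_grad_(True)
print(var_xs_h.requires_grad)
# > True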

  2. The requires_grad: Freeze last layers of the model

As mentioned by the Pytorch forum moderator ptrblck:

If you are setting requires_grad = False for all parameters, the error
message is expected, as Autograd won’t be able to calculate any
gradients, since no parameter requires them.

I think your case is the latter one; you can read the second post.
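For the model in the question, a minimal sketch of that approach (assuming the efficientnet_pytorch implementation, where the replaced classifier is exposed as model._fc; note that list(model.children())[-1] is not guaranteed to be that Linear layer, so addressing it by attribute is safer):

import torch.nn as nn
import torch.optim as optim
from efficientnet_pytorch import EfficientNet

model_b0 = EfficientNet.from_pretrained('efficientnet-b0')
model_b0._fc = nn.Linear(model_b0._fc.in_features, 10)

# freeze everything ...
for param in model_b0.parameters():
    param.requires_grad = False

# ... then unfreeze only the new classifier head
for param in model_b0._fc.parameters():
    param.requires_grad = True

# pass only the trainable parameters to the optimizer
optimizer_ft = optim.SGD(
    filter(lambda p: p.requires_grad, model_b0.parameters()),
    lr=0.001, momentum=0.9)

With at least one parameter left trainable, loss.backward() has something to differentiate and the RuntimeError no longer appears.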

Another debugging suggestion from ptrblck:

import torch
import torch.nn as nn

# standard use case
x = torch.randn(1, 1)
print(x.requires_grad)
# > False

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08c5610>
out.backward()
print(lin.weight.grad)
# > tensor([[-0.9785]])
print(x.grad)
# > None

# input requires grad
x = torch.randn(1, 1, requires_grad=True)
print(x.requires_grad)
# > True

lin = nn.Linear(1, 1)
out = lin(x)
print(out.grad_fn)
# > <AddmmBackward0 object at 0x7fcea08d4640>
out.backward()
print(lin.weight.grad)
# > tensor([[1.6739]])
print(x.grad)
# > tensor([[0.0300]])
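A quick way to check which parameters will actually receive gradients after freezing (a small sketch using named_parameters, applied to the model from the question):

# list the parameters that are still trainable after freezing
trainable = [name for name, p in model_b0.named_parameters() if p.requires_grad]
print(trainable)
# if this list is empty, loss.backward() will raise the RuntimeError shown above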
Answered By: Angus Tay