PyTorch – RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed
Question:
I keep running into this error:
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
I searched the PyTorch forum but still can't figure out what I have done wrong in my custom loss function. My model is an nn.GRU, and here is my custom loss function:
def _loss(outputs, session, items):  # `items` is a dict that maps each item id to its embedding
    def f(output, target):
        pos = torch.from_numpy(np.array([items[target["click"]]])).float()
        neg = torch.from_numpy(np.array([items[idx] for idx in target["suggest_list"] if idx != target["click"]])).float()
        if USE_CUDA:
            pos, neg = pos.cuda(), neg.cuda()
        pos, neg = Variable(pos), Variable(neg)

        pos = F.cosine_similarity(output, pos)
        if neg.size()[0] == 0:  # no negative samples for this step
            return torch.mean(F.logsigmoid(pos))
        neg = F.cosine_similarity(output.expand_as(neg), neg)
        return torch.mean(F.logsigmoid(pos - neg))

    loss = list(map(f, outputs, session))  # list() so torch.cat works on Python 3
    return torch.mean(torch.cat(loss))
Training code:
# zero the parameter gradients
model.zero_grad()
# forward + backward + optimize
outputs, hidden = model(inputs, hidden)
loss = _loss(outputs, session, items)
acc_loss += loss.data[0]
loss.backward()
# Add parameters' gradients to their values, multiplied by learning rate
for p in model.parameters():
    p.data.add_(learning_rate, p.grad.data)
Answers:
The problem was in my training loop: it didn't detach or repackage the hidden state in between batches. Because of that, loss.backward() was trying to backpropagate all the way through to the start of time, which works for the first batch but not for the second, because the graph for the first batch had already been discarded.
There are two possible solutions.

Detach/repackage the hidden state in between batches. There are (at least) two equivalent ways to do this, and this is the solution I chose: hidden.detach_(), or hidden = hidden.detach().
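A minimal sketch of what that looks like in the training loop, assuming the model, _loss, items, learning_rate, and initial hidden from the question, plus a hypothetical training_batches iterable:

for inputs, session in training_batches:  # hypothetical batch iterable
    model.zero_grad()
    outputs, hidden = model(inputs, hidden)
    # Cut the graph here: the next backward() stops at this batch's start
    # instead of reaching back through the (already freed) earlier batches.
    hidden = hidden.detach()  # or, in place: hidden.detach_()
    loss = _loss(outputs, session, items)
    loss.backward()
    for p in model.parameters():
        p.data.add_(learning_rate, p.grad.data)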

Replace loss.backward() with loss.backward(retain_graph=True), but know that each successive batch will take more time than the previous one, because it will have to backpropagate all the way through to the start of the first batch.
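As a sketch, this is a one-line change to the loop above, but the graph of every earlier batch is kept alive, so batch k backpropagates through batches 1 through k:

loss.backward(retain_graph=True)  # keep the graph; time and memory grow with every batch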
I had this error too. I was sometimes feeding the same tensor as an input midway through my model. Calling .detach() on that tensor got rid of the error. That tensor wasn't something I was training on and I didn't want gradients on it; detaching it takes it off the graph, so it isn't visited by PyTorch's backward().
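A small self-contained sketch of that situation (the names rnn, head, base, and reused are hypothetical, and the shapes are arbitrary): reusing a non-leaf tensor across iterations reproduces the error, and detaching it fixes it.

import torch
import torch.nn as nn

rnn = nn.GRU(input_size=8, hidden_size=8, batch_first=True)
head = nn.Linear(8, 1)

base = torch.randn(1, 5, 8, requires_grad=True)
reused = base * 2  # non-leaf tensor: it carries a graph back to `base`

for step in range(2):
    x = torch.randn(1, 5, 8)
    # Without .detach(), the second backward() would try to walk through
    # `reused`'s graph again, which the first backward() already freed,
    # raising the RuntimeError from the question.
    out, _ = rnn(x + reused.detach())
    loss = head(out).mean()
    loss.backward()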