Setting torch.nn.Linear() diagonal elements to zero

Question:

I am trying to build a model with a torch.nn.Linear layer whose input and output sizes are the same, so its weight is a square matrix. In this model, I want the diagonal elements of that matrix fixed at zero; that is, during training, the diagonal elements should never change from zero. The only approach I could think of is adding a step that resets the diagonal elements to zero at every training epoch, but I am not sure whether that is valid or efficient. Is there a definite way to build this kind of layer that ensures the diagonal elements don't change?

Sorry if my question is weird.

Asked By: esh3390


Answers:

You can always implement your own layer for this. Note that all custom layers should be implemented as classes derived from nn.Module. For example:

import torch
import torch.nn as nn

class LinearWithZeroDiagonal(nn.Module):
    def __init__(self, num_features, bias):
        super().__init__()
        # A square linear layer: num_features -> num_features
        self.base_linear_layer = nn.Linear(num_features, num_features, bias)

    def forward(self, x):
        # First, make sure the diagonal is zero. no_grad() keeps the
        # in-place fill out of the autograd graph.
        with torch.no_grad():
            self.base_linear_layer.weight.fill_diagonal_(0.)
        return self.base_linear_layer(x)
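
A quick sanity check (a minimal sketch, using a standard SGD step; the shapes and hyperparameters here are arbitrary) confirms the diagonal is zero whenever the weight is actually used:

layer = LinearWithZeroDiagonal(num_features=4, bias=True)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.1)

x = torch.randn(8, 4)
loss = layer(x).sum()
loss.backward()
optimizer.step()

# The optimizer step may write nonzero values into the diagonal,
# but the next forward pass resets them before the weight is applied.
_ = layer(torch.randn(1, 4))
print(torch.diag(layer.base_linear_layer.weight))  # tensor([0., 0., 0., 0.])

Note that gradients are still computed for the diagonal entries; they are simply overwritten with zero at the start of every forward pass, so the effective behavior is a fixed-zero diagonal.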
Answered By: Shai