Turning a variable into a torch Tensor: afterwards the tensor is empty / has no elements

Question:

The following is my code. My "sequences" are my training data in the form ([139 rows x 4 columns], 0), where the 139×4 DataFrame holds my signals and the 0 is my encoded label.

def __getitem__(self, idx):
    sequence, label = self.sequences[idx]

    # converting sequence and label to tensors
    sequence = torch.Tensor(sequence.to_numpy())

    print("label before tensor", label)
    label = torch.Tensor(label).long()
    print("numel() labels   :", label.numel())
    print("label shape    :", label.shape)
    return (sequence, label)

The code output is:

      >>label before tensor 0  (This is my encoded label)
      >>numel() labels   : 0
      >>label shape    : torch.Size([0])

Why is my label tensor empty?

Asked By: patrick823


Answers:

Because torch.Tensor expects either an array-like (in which case that array's values become the tensor's contents) or one or more ints, which are interpreted as the desired size of the tensor. Hence torch.Tensor(0) instantiates an empty tensor of size 0.
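A quick sketch of the two interpretations (a minimal example; the printed reprs are from a recent PyTorch version and may vary slightly):

import torch

# an int argument is read as a *size*, not as data
print(torch.Tensor(0).numel())   # 0 -> empty tensor
print(torch.Tensor(2, 3).shape)  # torch.Size([2, 3]), uninitialized values

# a list argument is read as *data*
print(torch.Tensor([0]))         # tensor([0.])

# torch.tensor always treats its argument as data
print(torch.tensor(0))           # tensor(0), a 0-dim int64 tensor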

Either use torch.Tensor([0]) or torch.tensor(0). Why these two behave differently I don't know, but I'd recommend torch.tensor (not capitalized), since it's better documented (the capitalized Tensor seems to be a holdover from the original C backend).
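Applied to the __getitem__ from the question, a minimal sketch of the fix (assuming self.sequences holds (DataFrame, int) pairs as described):

import torch

def __getitem__(self, idx):
    sequence, label = self.sequences[idx]

    # DataFrame -> float tensor via its numpy representation
    sequence = torch.Tensor(sequence.to_numpy())

    # torch.tensor treats its argument as data, so an int label 0
    # becomes tensor(0) instead of an empty tensor of size 0
    label = torch.tensor(label).long()
    return (sequence, label)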

Edit: I found this useful thread about their difference.

Answered By: trialNerror