pytorch

How to create a high-dimensional tensor with a fixed shape and dtype?

How to create a high-dimensional tensor with a fixed shape and dtype? Question: I want to return a tensor with a fixed shape, like torch.Size([1, 345]). However, when I run import torch; pt1 = torch.tensor(data=(1, 345), dtype=torch.int64), it only returns torch.Size([2]). I followed a tensor tutorial and tried pt1 = torch.tensor(1, 345, dtype=torch.int64) pt1 = …
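A minimal sketch of the usual fix, assuming the goal is a tensor whose shape is (1, 345): pass the shape to a factory function such as torch.zeros, torch.empty, or torch.randint, rather than to torch.tensor, which treats its argument as data.

```python
import torch

# torch.tensor((1, 345)) treats (1, 345) as *data*, so the result has shape [2].
# To get a tensor *of shape* [1, 345], give the shape to a factory function:
pt1 = torch.zeros((1, 345), dtype=torch.int64)            # filled with zeros
pt2 = torch.empty((1, 345), dtype=torch.int64)            # uninitialized memory
pt3 = torch.randint(0, 10, (1, 345), dtype=torch.int64)   # random ints in [0, 10)

print(pt1.shape)  # torch.Size([1, 345])
```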

Total answers: 3

Slice a multidimensional PyTorch tensor based on values in other tensors

Slice a multidimensional PyTorch tensor based on values in other tensors Question: I have 4 PyTorch tensors: data of shape (l, m, n); a of shape (k,) and datatype long; b of shape (k,) and datatype long; c of shape (k,) and datatype long. I want to slice the tensor data such that it picks …
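A short sketch of one common way to do this with advanced (fancy) indexing, assuming the goal is to pick the element data[a[i], b[i], c[i]] for every i; the concrete shapes below are made up for illustration.

```python
import torch

l, m, n, k = 4, 5, 6, 3
data = torch.arange(l * m * n, dtype=torch.float32).reshape(l, m, n)
a = torch.randint(0, l, (k,), dtype=torch.long)
b = torch.randint(0, m, (k,), dtype=torch.long)
c = torch.randint(0, n, (k,), dtype=torch.long)

# Advanced indexing: element i of the result is data[a[i], b[i], c[i]].
picked = data[a, b, c]      # shape (k,)

# Index tensors can also be mixed with slices if only some dimensions
# should be picked, e.g. data[a, b, :] has shape (k, n).
rows = data[a, b, :]
```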

Total answers: 2

TorchServe custom handler – how to pass a list of tensors for batch inferencing

TorchServe custom handler – how to pass a list of tensors for batch inferencing Question: I am trying to create a custom handler in TorchServe and also want to use TorchServe's batching capability for parallelism and optimal use of resources. I am not able to find out how to write a custom handler for this kind of inference. …
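Below is only a rough sketch of what such a handler could look like, built on TorchServe's BaseHandler; the payload handling (reading "data"/"body" and assuming every request carries a tensor of the same shape) is an illustrative assumption, not the question's actual format.

```python
import torch
from ts.torch_handler.base_handler import BaseHandler

class BatchTensorHandler(BaseHandler):
    """Sketch only: TorchServe hands preprocess() a list with one entry per
    request in the batch (up to the configured batch size)."""

    def preprocess(self, data):
        tensors = []
        for row in data:
            # assume each request body holds a list of floats (illustrative)
            payload = row.get("data") or row.get("body")
            tensors.append(torch.as_tensor(payload, dtype=torch.float32))
        # stack the per-request tensors into one batch (assumes equal shapes)
        return torch.stack(tensors).to(self.device)

    def inference(self, batch):
        with torch.no_grad():
            return self.model(batch)

    def postprocess(self, output):
        # TorchServe expects one response item per request in the batch
        return output.tolist()
```

Batching itself is typically enabled when registering the model, via the batch_size and max_batch_delay settings.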

Total answers: 1

PyTorch: I don't know how to define multiple models

PyTorch: I don't know how to define multiple models Question: I want to use two different models in PyTorch. Therefore, I executed the following code, but I cannot successfully run the second model. How can I do this? class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.linear1 = nn.Linear(2, 64) self.linear2 = nn.Linear(64, 3) def forward(self, x): …
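Since the excerpt is cut off, here is only a generic sketch of the usual pattern: two independent models are simply two instances of the class (or of two different classes), each with its own parameters and, if needed, its own optimizer.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(2, 64)
        self.linear2 = nn.Linear(64, 3)

    def forward(self, x):
        x = torch.relu(self.linear1(x))
        return self.linear2(x)

# Two independent models are just two instances, each with its own weights.
model_a = Model()
model_b = Model()

# Either give each model its own optimizer ...
opt_a = torch.optim.Adam(model_a.parameters(), lr=1e-3)
opt_b = torch.optim.Adam(model_b.parameters(), lr=1e-3)
# ... or train both with one optimizer over the combined parameters:
opt_both = torch.optim.Adam(
    list(model_a.parameters()) + list(model_b.parameters()), lr=1e-3
)
```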

Total answers: 1

How to make sure timeseriesAI/tsai uses the GPU

How to make sure timeseriesAI/tsai uses the GPU Question: I am using tsai 0.3.5 for time series classification, but it is taking an unusually long time to train an epoch. Can somebody please let me know how to make sure that tsai uses the GPU and not the CPU? Please find my code below. import os os.chdir(os.path.dirname(os.path.abspath(__file__))) from pickle import load …
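A quick, hedged checklist: tsai sits on top of fastai and PyTorch, so the first thing to confirm is that PyTorch itself can see a CUDA device; the learn.* checks in the comments assume a fastai-style Learner has already been built.

```python
import torch

# Step 1: confirm PyTorch sees the GPU at all.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

# Step 2: fastai (and therefore tsai) normally places the model and batches on
# the GPU automatically when one is available. After building the learner,
# these checks show where things actually live:
# print(learn.dls.device)                        # expected: cuda
# print(next(learn.model.parameters()).device)   # expected: cuda:0
```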

Total answers: 1

Can I initialize the optimizer before changing the last layer of my model?

Can I initialize the optimizer before changing the last layer of my model? Question: Say I want to change the last layer of my model, but my optimizer is defined at the top of my script; what is the better practice? batch_size = 8 learning_rate = 2e-4 num_epochs = 100 cnn = models.resnet18(weights='DEFAULT') loss_func = nn.CrossEntropyLoss() …
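A small sketch of the safer ordering, assuming the goal is to fine-tune resnet18 with a new classification head (the num_classes value is made up): build the optimizer after swapping the head, so the new parameters are actually registered with it.

```python
import torch
import torch.nn as nn
from torchvision import models

cnn = models.resnet18(weights="DEFAULT")

# Replace the head *before* creating the optimizer; otherwise the new fc
# parameters are never added to the optimizer's param groups.
num_classes = 10  # hypothetical
cnn.fc = nn.Linear(cnn.fc.in_features, num_classes)

loss_func = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(cnn.parameters(), lr=2e-4)

# If the optimizer really must exist first, the new layer's parameters can be
# registered afterwards (the old fc parameters then sit unused in the groups):
# optimizer.add_param_group({"params": cnn.fc.parameters()})
```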

Total answers: 1

Loading a Hugging Face model is taking too much memory

Loading a Hugging Face model is taking too much memory Question: I am trying to load a large Hugging Face model with code like below: model_from_disc = AutoModelForCausalLM.from_pretrained(path_to_model) tokenizer_from_disc = AutoTokenizer.from_pretrained(path_to_model) generator = pipeline("text-generation", model=model_from_disc, tokenizer=tokenizer_from_disc) The program quickly crashes after the first line because it runs out of memory. Is there a way …
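A hedged sketch of the memory-saving options transformers offers on this load path; whether they are enough depends on the model, and device_map="auto" additionally requires the accelerate package. The path string is a placeholder, as in the question.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

path_to_model = "path/to/model"  # placeholder, as in the question

model_from_disc = AutoModelForCausalLM.from_pretrained(
    path_to_model,
    torch_dtype=torch.float16,   # half-precision weights: ~half the memory of fp32
    low_cpu_mem_usage=True,      # avoid materializing a second full copy while loading
    device_map="auto",           # shard/offload layers across GPU(s) and CPU (needs accelerate)
)
tokenizer_from_disc = AutoTokenizer.from_pretrained(path_to_model)
generator = pipeline(
    "text-generation", model=model_from_disc, tokenizer=tokenizer_from_disc
)
```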

Total answers: 1

PyTorch custom transformation with additional argument in __call__

PyTorch custom transformation with additional argument in __call__ Question: I have a custom dataset that I want to train a neural network on. A sample of the dataset might be [1,2,3,4] and the corresponding time axis is then for example [0, 0.2, 0.4, 0.6]. This time axis is different for every sample in the dataset …
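One common pattern, sketched under the assumption that the extra argument is the per-sample time axis: give the transform a __call__ that accepts it and invoke the transform manually inside the Dataset's __getitem__ (torchvision's Compose would not forward extra arguments). The scaling inside the transform is purely illustrative.

```python
import torch
from torch.utils.data import Dataset

class TimeAwareTransform:
    """Hypothetical transform whose __call__ takes the sample's time axis."""
    def __call__(self, sample, t):
        dt = t[1] - t[0]          # illustrative: use the time-step spacing
        return sample * dt

class TimeSeriesDataset(Dataset):
    def __init__(self, samples, time_axes, transform=None):
        self.samples = samples        # one signal per sample, e.g. [1, 2, 3, 4]
        self.time_axes = time_axes    # matching time axis, e.g. [0, 0.2, 0.4, 0.6]
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        x = torch.as_tensor(self.samples[idx], dtype=torch.float32)
        t = torch.as_tensor(self.time_axes[idx], dtype=torch.float32)
        if self.transform is not None:
            # call the transform directly so the extra argument can be passed
            x = self.transform(x, t)
        return x, t
```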

Total answers: 1

Why is the result of matrix multiplication in torch so different when I roll the matrix?

Why is the result of matrix multiplication in torch so different when I roll the matrix? Question: I know floating-point multiplication has limited accuracy, but the gap is larger than that, and it also depends on the roll step. x = torch.rand((1, 5)) y = torch.rand((5, 1)) print("%.10f" % torch.matmul(x, y)) >>> 1.2710412741 print("%.10f" % torch.matmul(torch.roll(x, 1, …
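Since the snippet is truncated, here is only a sketch separating the two effects that could be in play: rolling one operand changes which elements are multiplied together (a genuinely different dot product), while rolling both operands by the same shift leaves the mathematical result unchanged and exposes only tiny accumulation-order rounding.

```python
import torch

torch.manual_seed(0)
x = torch.rand((1, 5))
y = torch.rand((5, 1))

# Rolling only x pairs each x[i] with a *different* y element, so this is a
# genuinely different dot product, not just a rounding artifact.
print("%.10f" % torch.matmul(x, y))
print("%.10f" % torch.matmul(torch.roll(x, 1, dims=1), y))

# Rolling x and y by the same shift keeps the same set of products; any
# remaining difference comes only from float32 accumulation order (~1e-7).
print("%.10f" % torch.matmul(torch.roll(x, 1, dims=1), torch.roll(y, 1, dims=0)))

# Doing the math in float64 shrinks the accumulation-order effect further.
print("%.10f" % torch.matmul(x.double(), y.double()))
```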

Total answers: 2