Pytorch: a similar process to reverse pooling and replicate padding?
Question: I have a tensor A of shape (batch_size, width, height). Assume it has these values: A = torch.tensor([[[0, 1], [1, 0]]]). I am also given a positive integer K; let K = 2 in this case. I want to do …
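The question text is truncated, but going by the title, a plausible reading is that each element of A should be expanded into a K x K block of copies, i.e. the opposite of pooling with replicate-style filling. A minimal sketch of that interpretation, using torch.repeat_interleave (the intended operation is an assumption here, since the original request is cut off):

```python
import torch

# Assumed goal (inferred from the title, not stated in the truncated
# question): expand each element of A into a K x K block of copies,
# i.e. an "un-pooling" that replicates values.
A = torch.tensor([[[0, 1], [1, 0]]])  # shape (batch_size, width, height) = (1, 2, 2)
K = 2

# Repeat every row K times along the width axis, then every element
# K times along the height axis.
B = A.repeat_interleave(K, dim=1).repeat_interleave(K, dim=2)
print(B)
# tensor([[[0, 0, 1, 1],
#          [0, 0, 1, 1],
#          [1, 1, 0, 0],
#          [1, 1, 0, 0]]])
```

An equivalent route, under the same assumption, is nearest-neighbor upsampling with torch.nn.functional.interpolate(A.unsqueeze(1).float(), scale_factor=K, mode="nearest"), which replicates values the same way but goes through float and a channel dimension.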