Column-dependent bounds in torch.clamp

Question:

I would like to do something similar to np.clip, but on 2D PyTorch tensors. More specifically, I would like to clip each column to a column-dependent range of values. For example, in NumPy you could do:

x = np.array([-1,10,3])
low = np.array([0,0,1])
high = np.array([2,5,4])
clipped_x = np.clip(x, low, high)

np.array_equal(clipped_x, np.array([0, 5, 3])) # True

I found torch.clamp, but unfortunately it does not support multidimensional bounds (only a single scalar bound for the entire tensor). Is there a “neat” way to extend that function to my case?

Thanks!

Asked By: Ben


Answers:

Not as neat as np.clip, but you can use torch.max and torch.min:

In [1]: x
Out[1]:
tensor([[0.9752, 0.5587, 0.0972],
        [0.9534, 0.2731, 0.6953]])

Setting the lower and upper bound per column:

l = torch.tensor([[0.2, 0.3, 0.]])
u = torch.tensor([[0.8, 1., 0.65]])

Note that the lower bound l and upper bound u are 1-by-3 tensors (2D with singleton dimension). We need these dimensions for l and u to be broadcastable to the shape of x.
Now we can clip using min and max:

clipped_x = torch.max(torch.min(x, u), l)

Resulting in:

tensor([[0.8000, 0.5587, 0.0972],
        [0.8000, 0.3000, 0.6500]])
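Putting the steps above together in a self-contained snippet (the input values are taken from the example; the random `x` is replaced by a fixed tensor so the result is reproducible):

```python
import torch

# Fixed input matching the example above
x = torch.tensor([[0.9752, 0.5587, 0.0972],
                  [0.9534, 0.2731, 0.6953]])

# Per-column lower and upper bounds, shaped 1-by-3 so they
# broadcast along the row dimension of x
l = torch.tensor([[0.2, 0.3, 0.0]])
u = torch.tensor([[0.8, 1.0, 0.65]])

# torch.min caps each element at its column's upper bound,
# then torch.max raises it to its column's lower bound
clipped_x = torch.max(torch.min(x, u), l)
print(clipped_x)
# tensor([[0.8000, 0.5587, 0.0972],
#         [0.8000, 0.3000, 0.6500]])
```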
Answered By: Shai

For anyone who is running into the same problem I had a few minutes ago:

For about two years now, it has also been possible to use column-dependent bounds directly in torch.clamp (see PR):

In:  x = torch.randn(2, 3)
     print(x)

Out: tensor([[-0.2069, 1.4082, 0.2615],
             [0.6478, 0.0883, -0.7795]])

Setting a lower and upper bound:

lower = torch.tensor([[-1., 0., 0.]])
upper = torch.tensor([[0., 1., 1.]])

Now you can simply use torch.clamp as follows:

In: clamped_x = torch.clamp(x, min=lower, max=upper)
    print(clamped_x)

Out: tensor([[-0.2069, 1.0000, 0.2615],
             [0.0000, 0.0883, 0.0000]])

I hope that helps 🙂

Answered By: Simon