Looks like the Keras Normalization layer doesn't denormalize properly

Question:

I want to use the Keras Normalization layer to "denormalize" my output.
The documentation for this layer says the argument invert=True does exactly that, but it doesn't behave as I expected at all…

I tried to isolate the problem and show that the layer doesn't compute the inverse of the normalization:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

norm = layers.Normalization()               # forward: (x - mean) / std
denorm = layers.Normalization(invert=True)  # should compute the inverse: x * std + mean
y = np.array([[10.0], 
              [20.0], 
              [30.0]])
norm.adapt(y)
denorm.adapt(y)

Here I checked the mean and the variance, and they look the same for both layers, so all good so far.
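For reference, the adapted statistics can be printed directly; the layer exposes them as mean and variance attributes once adapt() has run:

print(norm.mean, norm.variance)      # mean = 20.0, variance ≈ 66.667
print(denorm.mean, denorm.variance)  # same values for both layers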

Now the actual check:

print(norm(20))   # expected: 0
print(denorm(0))  # expected: 20

I get 0 and 163.29932 as output instead of 0 and 20…
It looks like the denormalization adds the mean first and then multiplies by the standard deviation, instead of multiplying by the standard deviation and then adding the mean.
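To double-check that reading, the arithmetic can be redone by hand with NumPy, reusing y from above (np.std gives the population standard deviation, which matches the variance the layer computes):

mean = y.mean()          # 20.0
std = y.std()            # about 8.16497

print(0 * std + mean)    # 20.0, the correct inverse transform
print((0 + mean) * std)  # about 163.29932, reproducing the buggy output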

The Keras version is probably relevant here:

print(keras.__version__)

Output: '2.10.0'

Asked By: Vincent


Answers:

I looked in the code and found that it was indeed a bug: the inverse path adds the mean and then multiplies by the standard deviation, rather than the other way around.

I opened a pull request to fix the problem, and it has been accepted, so the fix should land in an upcoming release.
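Until a release includes the fix, a minimal workaround sketch is to invert the transform manually from the adapted statistics (this assumes the mean and variance attributes shown above):

import tensorflow as tf

def denormalize(z, norm_layer):
    # Manual inverse of Normalization: x = z * std + mean
    std = tf.sqrt(norm_layer.variance)
    return z * std + norm_layer.mean

print(denormalize(0.0, norm))  # [20.], as expected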

Answered By: Vincent