Normalisation layer for tf.data.Dataset

Question:

I am trying to improve the Tensorflow tutorial on Time series forecasting. The code is quite long, but my question concerns only a small part of it. In the tutorial the data is normalized in the usual way: it is demeaned and standardized using the mean and standard deviation of the training set.

train_mean = train_df.mean()
train_std = train_df.std()

train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
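Note that once the data has been standardized this way, everything the model predicts is also expressed in standardized units; recovering the original scale means inverting the transform. As a minimal illustration (label_column is a hypothetical placeholder for the name of the target column):

# label_column is hypothetical: substitute the actual target column name.
# Undo the (x - mean) / std transform to get back to the original units.
pred_original = pred_standardized * train_std[label_column] + train_mean[label_column]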

Then a tf.data.Dataset is generated to feed the data to the algorithms:

def make_dataset(self, data):
  data = np.array(data, dtype=np.float32)
  ds = tf.keras.utils.timeseries_dataset_from_array(
      data=data,
      targets=None,
      sequence_length=self.total_window_size,
      sequence_stride=1,
      shuffle=True)
  ds = ds.map(self.split_window)
  return ds

This function is a method of a class that is too long to be reported here. What matters is that it returns tuples of inputs and labels:

for example_inputs, example_labels in my_class_instance.train.take(1):
  print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
  print(f'Labels shape (batch, time, features): {example_labels.shape}')

Returns:

Inputs shape (batch, time, features): (32, 6, 19) 
Labels shape (batch, time, features): (32, 1, 1)
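For context, the split_window method referenced above slices each window along the time axis into inputs and labels; in the tutorial it looks roughly like this (an abridged sketch, reproduced from memory):

def split_window(self, features):
  # features has shape (batch, total_window_size, num_features);
  # the window is split along the time axis.
  inputs = features[:, self.input_slice, :]
  labels = features[:, self.labels_slice, :]
  if self.label_columns is not None:
    labels = tf.stack(
        [labels[:, :, self.column_indices[name]]
         for name in self.label_columns],
        axis=-1)
  return inputs, labels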

The problem with this approach is that both the loss function and the metrics are computed on standardized variables (including the target variable) rather than on the actual values we are trying to predict. To solve this, I would like to leave the features (and hence the target variable) unstandardized and instead introduce a feature normalization layer in the machine learning models. I thought of using something like this:

normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(np.array(train_features))  # learn per-feature mean/variance
model.add(normalizer)                       # make it the model's first layer

My question is: how can I add such a normalization layer so that it standardizes only the features and not the labels?

I have already completed one step: I removed the batching from the dataset, so that to obtain the same output as before I now need to batch explicitly:

for example_inputs, example_labels in my_class_instance.train.batch(32).take(1):
  print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
  print(f'Labels shape (batch, time, features): {example_labels.shape}')

Returns:

Inputs shape (batch, time, features): (32, 6, 19) 
Labels shape (batch, time, features): (32, 1, 1)
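For reference, one way to obtain such an unbatched dataset is to pass batch_size=None to timeseries_dataset_from_array; a sketch of the modified method is below (note that split_window would then also have to index each window without a leading batch dimension):

def make_dataset(self, data):
  data = np.array(data, dtype=np.float32)
  # batch_size=None makes the function yield individual windows
  # instead of batches, leaving the batching to the caller.
  ds = tf.keras.utils.timeseries_dataset_from_array(
      data=data,
      targets=None,
      sequence_length=self.total_window_size,
      sequence_stride=1,
      shuffle=True,
      batch_size=None)
  ds = ds.map(self.split_window)
  return ds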
Asked By: NC520


Answers:

You should be able to do something like this:

normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(my_class_instance.train.map(lambda x, y: x))
model.add(normalizer)

where x represents your features and y your labels. And just as a reminder, from the documentation:

Calling adapt() on a Normalization layer is an alternative to passing
in mean and variance arguments during layer construction.
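For completeness, here is a minimal end-to-end sketch of this approach, using random data in place of my_class_instance.train (the shapes match those in the question; the model architecture is just a placeholder):

import numpy as np
import tensorflow as tf

# Stand-in for my_class_instance.train: an unbatched dataset of
# (inputs, labels) windows with the shapes from the question.
inputs = np.random.rand(100, 6, 19).astype(np.float32)
labels = np.random.rand(100, 1, 1).astype(np.float32)
train = tf.data.Dataset.from_tensor_slices((inputs, labels))

# adapt() only ever sees the features, because map() drops the labels.
normalizer = tf.keras.layers.Normalization(axis=-1)
normalizer.adapt(train.map(lambda x, y: x).batch(32))

# The normalizer is the first layer, so only the inputs are standardized;
# the labels reach the loss and metrics in their original units.
model = tf.keras.Sequential([
    normalizer,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Reshape([1, 1]),
])
model.compile(loss='mse', optimizer='adam', metrics=['mae'])
model.fit(train.batch(32), epochs=1)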

Answered By: AloneTogether