TensorFlow results are not reproducible despite using tf.random.set_seed

Question:

According to a TensorFlow tutorial I am following, the code below is supposed to give reproducible results, so one can check whether the exercise was done correctly. The TensorFlow version is 2.11.0.

import tensorflow as tf
import numpy as np

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, n_output_nodes):
        super(MyDenseLayer, self).__init__()
        self.n_output_nodes = n_output_nodes

    def build(self, input_shape):
        d = int(input_shape[-1])
        # Define and initialize parameters: a weight matrix W and bias b
        # Note that parameter initialization is random!
        self.W = self.add_weight("weight", shape=[d, self.n_output_nodes]) # note the dimensionality
        self.b = self.add_weight("bias", shape=[1, self.n_output_nodes]) # note the dimensionality
        print("Weight matrix is {}".format(self.W))
        print("Bias vector is {}".format(self.b))

    def call(self, x):
        z = tf.add(tf.matmul(x, self.W), self.b)
        y = tf.sigmoid(z)
        return y

# Since layer parameters are initialized randomly, we will set a random seed for reproducibility
tf.random.set_seed(1)
layer = MyDenseLayer(3)
layer.build((1,2))
print(layer.call(tf.constant([[1.0,2.0]], tf.float32, shape=(1,2))))

However, the results are different on each run: the weight and bias values created in build come out differently every time. It seems to me that tf.random.set_seed has no effect at all, or at least it is not producing the reproducible results it should.

Full output of two runs:

treuss@foo:~/python/tensorflow$ python lab1_1.py 
2023-03-26 21:31:16.896212: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:18.021094: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.023462: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.023634: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.023976: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:18.024324: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.024471: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.024617: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.455771: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.455946: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.456125: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:18.456257: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4656 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
Weight matrix is <tf.Variable 'weight:0' shape=(2, 3) dtype=float32, numpy=
array([[ 0.9970403 , -0.672126  , -0.00545013],
       [ 0.5411365 , -0.8570848 ,  0.5970814 ]], dtype=float32)>
Bias vector is <tf.Variable 'bias:0' shape=(1, 3) dtype=float32, numpy=array([[-0.9100063,  0.7671951, -0.9659226]], dtype=float32)>
tf.Tensor([[0.7630197  0.16532896 0.5554683 ]], shape=(1, 3), dtype=float32)
treuss@foo:~/python/tensorflow$ python lab1_1.py 
2023-03-26 21:31:21.245548: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:22.372605: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375021: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375175: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375521: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-26 21:31:22.375840: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.375960: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.376067: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.801768: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.801935: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.802112: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:981] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2023-03-26 21:31:22.802213: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1613] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4656 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
Weight matrix is <tf.Variable 'weight:0' shape=(2, 3) dtype=float32, numpy=
array([[ 0.72208846,  0.34211397,  0.04753423],
       [ 0.48018157,  0.9557345 , -0.19968122]], dtype=float32)>
Bias vector is <tf.Variable 'bias:0' shape=(1, 3) dtype=float32, numpy=array([[ 0.31122065, -0.81101143, -0.7763765 ]], dtype=float32)>
tf.Tensor([[0.8801311  0.80885255 0.24449256]], shape=(1, 3), dtype=float32)
Asked By: treuss


Answers:

You see this behaviour because, after TF 2.7, Keras switched the tf.keras.initializers.xxx classes to use tf.random.uniform, and glorot_uniform is the default initializer in self.add_weight; as a result, setting the global seed via tf.random.set_seed alone no longer makes the weight initialization reproducible.
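If you only need this particular layer to be deterministic, one alternative (not from the original answer, just a minimal sketch) is to pass explicitly seeded initializers to add_weight instead of relying on the default:

import tensorflow as tf

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, n_output_nodes):
        super().__init__()
        self.n_output_nodes = n_output_nodes

    def build(self, input_shape):
        d = int(input_shape[-1])
        # Explicitly seeded initializers make W and b reproducible
        # regardless of the global tf.random seed.
        self.W = self.add_weight(
            "weight",
            shape=[d, self.n_output_nodes],
            initializer=tf.keras.initializers.GlorotUniform(seed=1),
        )
        self.b = self.add_weight(
            "bias",
            shape=[1, self.n_output_nodes],
            initializer=tf.keras.initializers.GlorotUniform(seed=2),
        )

    def call(self, x):
        return tf.sigmoid(tf.matmul(x, self.W) + self.b)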

Long story short, the best thing you can do is to set the seeds using tf.keras.utils.set_random_seed(), or use a version below 2.7.0 and set the seed using tf.random.set_seed().
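For example, on TF 2.11 the suggested fix would look roughly like this, reusing MyDenseLayer exactly as defined in the question:

import tensorflow as tf

# Seeds Python's random module, NumPy and TensorFlow in one call (TF >= 2.7),
# which also makes the Keras weight initialization reproducible across runs.
tf.keras.utils.set_random_seed(1)

layer = MyDenseLayer(3)  # the class from the question, unchanged
layer.build((1, 2))
print(layer.call(tf.constant([[1.0, 2.0]], tf.float32, shape=(1, 2))))

With that in place, two consecutive runs of the script should print the same weight matrix, bias vector and output tensor.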

There was a related discussion about this here.

Answered By: Frightera