A loaded Keras model with a custom layer has different weights from the model that was saved

Question:

I have implemented a Transformer encoder in Keras using the template provided by Francois Chollet here. After I train the model, I save it using model.save, but when I load it again for inference I find that the weights seem to be random again, and therefore my model loses all inference ability.

I have looked at similar issues on Stack Overflow and GitHub, and applied the following suggestions, but I am still getting the same issue:

  1. Use the @tf.keras.utils.register_keras_serializable() decorator on the class.
  2. Make sure **kwargs is accepted and forwarded in the __init__ method.
  3. Make sure the custom layer has get_config and from_config methods.
  4. Use custom_object_scope when loading the model.
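
For reference, the pattern those suggestions describe looks like this on a trivial layer (a minimal sketch; ScaleLayer is a hypothetical example, not from my model):

import tensorflow as tf
from tensorflow.keras import layers

@tf.keras.utils.register_keras_serializable()    # 1. register the class
class ScaleLayer(layers.Layer):
    def __init__(self, factor, **kwargs):        # 2. accept and forward **kwargs
        super().__init__(**kwargs)
        self.factor = factor

    def call(self, inputs):
        return inputs * self.factor

    def get_config(self):                        # 3. round-trip the constructor args
        config = super().get_config()
        config.update({"factor": self.factor})
        return config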

Below is a minimal reproducible example that replicates the issue. How do I change it so that the model weights save and load correctly?

import numpy as np
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import layers
from keras.models import load_model
from keras.utils import custom_object_scope

@tf.keras.utils.register_keras_serializable()
class TransformerEncoder(layers.Layer):
    def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
        super().__init__(**kwargs)
        self.embed_dim = embed_dim
        self.dense_dim = dense_dim
        self.num_heads = num_heads
        self.attention = layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=embed_dim)
        self.dense_proj = keras.Sequential(
            [
                layers.Dense(dense_dim, activation="relu"),
                layers.Dense(embed_dim),
            ]
        )
        self.layernorm_1 = layers.LayerNormalization()
        self.layernorm_2 = layers.LayerNormalization()

    def call(self, inputs, mask=None):
        if mask is not None:
            mask = mask[:, tf.newaxis, :]
        attention_output = self.attention(
            inputs, inputs, attention_mask=mask)
        proj_input = self.layernorm_1(inputs + attention_output)
        proj_output = self.dense_proj(proj_input)
        return self.layernorm_2(proj_input + proj_output)

    def get_config(self):
        config = super().get_config()
        config.update({
            "embed_dim": self.embed_dim,
            "num_heads": self.num_heads,
            "dense_dim": self.dense_dim,
        })
        return config

    @classmethod
    def from_config(cls, config):
        return cls(**config)


# Create simple model:
encoder = TransformerEncoder(embed_dim=2, dense_dim=2, num_heads=1)
inputs = keras.Input(shape=(2, 2), batch_size=None, name="test_inputs")
x = encoder(inputs)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="linear")(x)
model = keras.Model(inputs, outputs)

# Fit the model and save it:
np.random.seed(42)
X = np.random.rand(10, 2, 2)
y = np.ones(10)
model.compile(optimizer=keras.optimizers.Adam(), loss="mean_squared_error")
model.fit(X, y, epochs=2, batch_size=1)
model.save("./test_model")

# Load the saved model:
with custom_object_scope({
    'TransformerEncoder': TransformerEncoder
}):
    loaded_model = load_model("./test_model")

print(model.weights[0].numpy())
print(loaded_model.weights[0].numpy())
Asked By: Toby Petty


Answers:

The secret is in how weight initialization works. You can experiment with model.get_weights(), but I sample layer.get_weights() here because the effect is easier to see at the layer level.

Sample: a custom layer with random initial values produces a different set of random numbers each time the script is run.

import tensorflow as tf

class MyDenseLayer(tf.keras.layers.Layer):
    def __init__(self, num_outputs):
        super(MyDenseLayer, self).__init__()
        self.num_outputs = num_outputs

    def build(self, input_shape):
        """ initialize weights with random numbers """
        min_size_init = tf.keras.initializers.RandomUniform(minval=1, maxval=5, seed=None)
        self.kernel = self.add_weight(shape=[int(input_shape[-1]), self.num_outputs],
                                      initializer=min_size_init, trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)


start = 3
limit = 33
delta = 3

# Create data: [3, 6, ..., 30]
sample = tf.range(start, limit, delta)
sample = tf.cast(sample, dtype=tf.float32)

# Reshape to (10, 1)
sample = tf.constant(sample, shape=(10, 1))
layer = MyDenseLayer(10)
data = layer(sample)
print(layer.get_weights())

Output: every fresh run initializes the layer anew, so layer.get_weights() differs from run to run.

### 1st round ###
# [array([[-0.07862139, -0.45416605, -0.53606   ,  0.18597281,  0.2919714 ,
#          -0.27334914,  0.60890776, -0.3856985 ,  0.58052486, -0.5634572 ]], dtype=float32)]

### 2nd round ###
# [array([[ 0.5949032 ,  0.05113244, -0.51997787,  0.26252705, -0.09235346,
#          -0.35243294, -0.0187515 , -0.12527376,  0.22348166,  0.37051445]], dtype=float32)]

### 3rd round ###
# [array([[-0.6654639 , -0.46027896, -0.48666477, -0.23095328,  0.30391783,
#           0.21867174, -0.5405392 , -0.45399982, -0.22143698,  0.66893476]], dtype=float32)]

Sample: calling build() again tells the layer to reset its weights to new initial values.

layer.build([1])   # re-running build() re-initializes the kernel
print(data)
print(layer.get_weights())

Output: the weights after each re-build are different; they are not carried over from the previous values.

### 1st round ###
# [array([[ 0.73738164,  0.14095825, -0.5416008 , -0.35084447, -0.35209572,
#          -0.35504425,  0.1692887 ,  0.2611189 ,  0.43355125, -0.3325353 ]], dtype=float32)]

### 2nd round ###
# [array([[ 0.5949032 ,  0.05113244, -0.51997787,  0.26252705, -0.09235346,
#          -0.35243294, -0.0187515 , -0.12527376,  0.22348166,  0.37051445]], dtype=float32)]

### 3rd round ###
# [array([[-0.6654639 , -0.46027896, -0.48666477, -0.23095328,  0.30391783,
#           0.21867174, -0.5405392 , -0.45399982, -0.22143698,  0.66893476]], dtype=float32)]
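
For contrast, calling the layer again does not reset anything; the weights persist until build() is run again. A small sketch continuing the MyDenseLayer example above:

import numpy as np

# Calling the layer does NOT re-run the initializer; only build() does.
w_before = layer.get_weights()
_ = layer(sample)
w_after = layer.get_weights()
print(np.allclose(w_before[0], w_after[0]))   # True: weights unchanged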

Sample: fixing the initial values, so the layer starts from the same weights every time.

""" initialize weights with values ones """
        min_size_init = tf.keras.initializers.Ones()

Output: The same results are reproduced every time.

### 1st round ###
# tf.Tensor(
# [[ 3.  3.  3.  3.  3.  3.  3.  3.  3.  3.]
#  [ 6.  6.  6.  6.  6.  6.  6.  6.  6.  6.]
#  [ 9.  9.  9.  9.  9.  9.  9.  9.  9.  9.]
#  [12. 12. 12. 12. 12. 12. 12. 12. 12. 12.]
#  [15. 15. 15. 15. 15. 15. 15. 15. 15. 15.]
#  [18. 18. 18. 18. 18. 18. 18. 18. 18. 18.]
#  [21. 21. 21. 21. 21. 21. 21. 21. 21. 21.]
#  [24. 24. 24. 24. 24. 24. 24. 24. 24. 24.]
#  [27. 27. 27. 27. 27. 27. 27. 27. 27. 27.]
#  [30. 30. 30. 30. 30. 30. 30. 30. 30. 30.]], shape=(10, 10), dtype=float32)
# [array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], dtype=float32)]

### 2nd round ###
# tf.Tensor(
# [[ 3.  3.  3.  3.  3.  3.  3.  3.  3.  3.]
#  [ 6.  6.  6.  6.  6.  6.  6.  6.  6.  6.]
#  [ 9.  9.  9.  9.  9.  9.  9.  9.  9.  9.]
#  [12. 12. 12. 12. 12. 12. 12. 12. 12. 12.]
#  [15. 15. 15. 15. 15. 15. 15. 15. 15. 15.]
#  [18. 18. 18. 18. 18. 18. 18. 18. 18. 18.]
#  [21. 21. 21. 21. 21. 21. 21. 21. 21. 21.]
#  [24. 24. 24. 24. 24. 24. 24. 24. 24. 24.]
#  [27. 27. 27. 27. 27. 27. 27. 27. 27. 27.]
#  [30. 30. 30. 30. 30. 30. 30. 30. 30. 30.]], shape=(10, 10), dtype=float32)
# [array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], dtype=float32)]
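
A middle ground, as a small sketch (not part of the original samples): passing an integer seed to a random initializer also reproduces the same initial weights on every run, while keeping them random-looking.

import tensorflow as tf

# With an integer seed, the initializer yields the same values on
# every run of the script (unlike seed=None above).
seeded_init = tf.keras.initializers.RandomUniform(minval=1, maxval=5, seed=42)
print(seeded_init(shape=(1, 10)).numpy())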

Sample: an implementation in a game-playing agent (coefficient_0 through coefficient_9 are assumed to be defined elsewhere):

import numpy as np
import tensorflow as tf

# coefficient_0 ... coefficient_9 are assumed to be defined elsewhere
temp = tf.random.normal([10], 1, 0.2, tf.float32)
temp = np.asarray(temp) * np.asarray([coefficient_0, coefficient_1, coefficient_2, coefficient_3, coefficient_4,
                                      coefficient_5, coefficient_6, coefficient_7, coefficient_8, coefficient_9])
temp = tf.nn.softmax(temp)
action = int(np.argmax(temp))

Output: all the variables are covariances of environment variables. The agent selects the max() (or min()) value and maps it to a target action in the game; I added some random value so that the same filter-times-value product does not always win when creating the action feedback.

Answered By: Jirayu Kaewprateep

The weights are saved (you can load them with load_weights after loading the model). The problem is that you create new layers in __init__. You need to recreate them from their config, for example:

class TransformerEncoder(layers.Layer):
    def __init__(self, embed_dim, dense_dim, num_heads, attention_config=None, dense_proj_config=None, **kwargs):
        super().__init__(**kwargs)
        self.embed_dim = embed_dim
        self.dense_dim = dense_dim
        self.num_heads = num_heads
        self.attention = (
            layers.MultiHeadAttention(num_heads=num_heads, key_dim=embed_dim)
            if attention_config is None
            else layers.MultiHeadAttention.from_config(attention_config)
        )
        self.dense_proj = (
            keras.Sequential(
                [
                    layers.Dense(dense_dim, activation="relu"),
                    layers.Dense(embed_dim),
                ]
            )
            if dense_proj_config is None
            else keras.Sequential.from_config(dense_proj_config)
        )
        ...

    def call(self, inputs, mask=None):
        ...

    def get_config(self):
        config = super().get_config()
        config.update({
            "embed_dim": self.embed_dim,
            "num_heads": self.num_heads,
            "dense_dim": self.dense_dim,
            "attention_config": self.attention.get_config(),
            "dense_proj_config": self.dense_proj.get_config(),
        })
        return config

Output:

[[[-0.810745   -0.14727005]]

[[ 0.8542909   0.09689581]]]
[[[-0.810745   -0.14727005]]

[[ 0.8542909   0.09689581]]]
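
For completeness, here is a minimal sketch of the closely related load_weights route mentioned at the start of this answer, continuing the question's script (the checkpoint path is illustrative):

# Save only the weights, rebuild the architecture in code, then
# restore the weights, bypassing layer deserialization entirely.
model.save_weights("./test_model_weights")

encoder2 = TransformerEncoder(embed_dim=2, dense_dim=2, num_heads=1)
inputs2 = keras.Input(shape=(2, 2), name="test_inputs")
x2 = layers.Flatten()(encoder2(inputs2))
outputs2 = layers.Dense(1, activation="linear")(x2)
rebuilt = keras.Model(inputs2, outputs2)
rebuilt.load_weights("./test_model_weights")

print(model.weights[0].numpy())
print(rebuilt.weights[0].numpy())   # matches the original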
Answered By: AndrzejO