RuntimeError: Unable to create link (name already exists) Keras

Question:

When I save my model I get the following error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-40-853303da8647> in <module>()
      7 
      8 
----> 9 model.save(outdir+'model.h5')
     10 
     11 
5 frames
/usr/local/lib/python3.6/dist-packages/h5py/_hl/group.py in __setitem__(self, name, obj)
    371 
    372             if isinstance(obj, HLObject):
--> 373                 h5o.link(obj.id, self.id, name, lcpl=lcpl, lapl=self._lapl)
    374 
    375             elif isinstance(obj, SoftLink):

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/_objects.pyx in h5py._objects.with_phil.wrapper()

h5py/h5o.pyx in h5py.h5o.link()

RuntimeError: Unable to create link (name already exists)

This does not happen when I build my model with built-in layers or with other user-defined layers. The error arises only when I use this particular user-defined layer:

class MergeTwo(keras.layers.Layer):

    def __init__(self, nout, **kwargs):
        super(MergeTwo, self).__init__(**kwargs)
        self.nout = nout
        self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros',
                                     trainable=True)
        self.beta = self.add_weight(shape=(self.nout,), initializer='zeros',
                                    trainable=True)

    def call(self, inputs):
        A, B = inputs
        result = keras.layers.add([self.alpha * A, self.beta * B])
        result = keras.activations.tanh(result)
        return result

    def get_config(self):
        config = super(MergeTwo, self).get_config()
        config['nout'] = self.nout
        return config

I read the docs but nothing worked, and I cannot figure out why.
I am using Google Colab and TensorFlow version 2.2.0.

Asked By: Drugo


Answers:

I think the problem is that both of your weight variables internally end up with the same name, which should not happen. You can give them distinct names with the name parameter of add_weight:

self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros',
                             trainable=True, name="alpha")

self.beta = self.add_weight(shape=(self.nout,), initializer='zeros',
                            trainable=True, name="beta")

This should work around the problem.
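For reference, a minimal sketch of the whole layer with uniquely named weights might look like this (assuming tf.keras from TensorFlow 2.x, as in the question; everything except the name arguments is unchanged from the original layer):

import tensorflow as tf
from tensorflow import keras

class MergeTwo(keras.layers.Layer):

    def __init__(self, nout, **kwargs):
        super(MergeTwo, self).__init__(**kwargs)
        self.nout = nout
        # Distinct names keep the HDF5 links unique when the model is saved
        self.alpha = self.add_weight(shape=(self.nout,), initializer='zeros',
                                     trainable=True, name="alpha")
        self.beta = self.add_weight(shape=(self.nout,), initializer='zeros',
                                    trainable=True, name="beta")

    def call(self, inputs):
        A, B = inputs
        result = keras.layers.add([self.alpha * A, self.beta * B])
        result = keras.activations.tanh(result)
        return result

    def get_config(self):
        config = super(MergeTwo, self).get_config()
        config['nout'] = self.nout
        return config

Note that when loading the saved file, the custom class still has to be registered, for example keras.models.load_model('model.h5', custom_objects={'MergeTwo': MergeTwo}).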

Answered By: Dr. Snoopy

I found another solution, although from a different scenario.
I was using Keras Tuner to do some hyperparameter tuning, and when building several models (e.g. 4), the layers would get the same names across the models.
As I was testing the depth of the network as a parameter, I would have multiple lstm, lstm_1, and dense layers in my group. An example of a single model is shown below for context.

Layer (type)                 Output Shape              Param #
=================================================================
lstm (LSTM)                  (None, 12, 320)           536320
_________________________________________________________________
lstm_1 (LSTM)                (None, 12, 64)            98560
_________________________________________________________________
dense (Dense)                (None, 12, 1)             65

I found that changing the name of the layer to something unique, using the name parameter, meant that I wouldn't get this error.
Example for illustration purposes:

unique_id = random.randint(1, 99999999)  # collisions are quite unlikely
model.add(LSTM(64, name="my_layer_name_{}".format(unique_id))) 

By adding the unique_id to the custom name, I ensured that every layer of every model in my Keras Tuner group would have a unique name.
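As an illustration only (build_model, the input shape, and the layer sizes below are hypothetical, not taken from the original code), a builder like this keeps names unique even when it is called once per tuning trial:

import random
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import LSTM, Dense

def build_model(num_lstm_layers=2):
    # Hypothetical builder, e.g. invoked once per keras-tuner trial;
    # the random suffix keeps layer names unique across the models
    # built in the same session.
    unique_id = random.randint(1, 99999999)
    inputs = Input(shape=(12, 8))  # hypothetical input shape
    x = inputs
    for i in range(num_lstm_layers):
        x = LSTM(64, return_sequences=True,
                 name="lstm_{}_{}".format(i, unique_id))(x)
    outputs = Dense(1, name="dense_{}".format(unique_id))(x)
    return Model(inputs, outputs)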

In your case you have a single model; however, since the layers are custom, I'm not quite sure how I would name them (given their current format).
Can you check if this is the case?

Answered By: Emil

Addition to the accepted answer

You may encounter exactly the same error even without custom-defined weights or layers. In Jupyter notebooks, after re-compiling a loaded checkpoint, it can happen, depending on the optimizer, that the optimizer's weights end up with duplicate names. To get rid of the bad state without losing your trained model:

# Save without the optimizer state, dropping the duplicated optimizer weights
model.save("model.h5", include_optimizer=False)
# Restart the kernel, then reload and re-compile with your original arguments
model = load_model("model.h5")
model.compile(**args)

I am leaving this here not as a strict answer to the question but as a broader hint, merely because this thread was the first that popped up when I searched for the problem.

Answered By: Chad Broski