Keras early stopping callback error, val_loss metric not available

Question:

I am training a Keras model (TensorFlow backend, Python, on a MacBook) and am getting an error from the early stopping callback in the fit_generator function. The error is as follows:

[local-dir]/lib/python3.6/site-packages/keras/callbacks.py:497: RuntimeWarning: Early stopping conditioned on metric `val_loss` which is not available. Available metrics are:
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning
[local-dir]/lib/python3.6/site-packages/keras/callbacks.py:406: RuntimeWarning: Can save best model only with val_acc available, skipping.
  'skipping.' % (self.monitor), RuntimeWarning)
Traceback (most recent call last):
  :
  [my-code]
  :
  File "[local-dir]/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
return func(*args, **kwargs)
  File "[local-dir]/lib/python3.6/site-packages/keras/engine/training.py", line 2213, in fit_generator
callbacks.on_epoch_end(epoch, epoch_logs)
  File "[local-dir]/lib/python3.6/site-packages/keras/callbacks.py", line 76, in on_epoch_end
callback.on_epoch_end(epoch, logs)
  File "[local-dir]/lib/python3.6/site-packages/keras/callbacks.py", line 310, in on_epoch_end
self.progbar.update(self.seen, self.log_values, force=True)
AttributeError: 'ProgbarLogger' object has no attribute 'log_values'

My code is as follows (which looks OK):

:
ES = EarlyStopping(monitor="val_loss", min_delta=0.001, patience=3, mode="min", verbose=1)
:
self.model.fit_generator(
        generator        = train_batch,
        validation_data  = valid_batch,
        validation_steps = validation_steps,
        steps_per_epoch  = steps_per_epoch,
        epochs           = epochs,
        callbacks        = [ES],
        verbose          = 1,
        workers          = 3,
        max_queue_size   = 8)

The error message appears to relate to the early stopping callback, but the callback looks OK. The error also states that val_loss is not available, though I am not sure why… one more unusual thing is that the error only occurs when I use smaller datasets.

Any help is appreciated.

Asked By: Eric Broda


Answers:

If the error only occurs when you use smaller datasets, you’re very likely using datasets small enough to not have a single sample in the validation set.

Thus it cannot calculate a validation loss.

Answered By: Daniel Möller

I up-voted the previous answer, as it gave me the insight to verify the data and inputs to the fit_generator call and find the actual root cause of the issue. In summary: when my dataset was small, the validation_steps and steps_per_epoch I calculated turned out to be zero (0), which caused the error.

I suppose the better long-term fix, perhaps for the Keras team, is for fit_generator to raise an error/exception when these values are zero, which would make the issue much easier to diagnose.
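Pending such a change upstream, a small guard can fail fast instead. The helper below is my own sketch (the function name and error message are not part of Keras); it only illustrates how integer division rounds the step counts down to zero on small datasets:

```python
# Hypothetical helper (not Keras API): fail fast when a small dataset
# makes the generator step count round down to zero.
def compute_steps(num_samples, batch_size):
    steps = num_samples // batch_size  # integer division, can be 0
    if steps == 0:
        raise ValueError(
            "%d samples with batch_size=%d gives 0 steps; "
            "val_loss can never be computed." % (num_samples, batch_size))
    return steps

validation_steps = compute_steps(1000, 32)  # 31
steps_per_epoch = compute_steps(8000, 32)   # 250
```

With, say, 10 validation samples and a batch size of 32, the guard raises immediately instead of letting fit_generator silently skip validation.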

Answered By: Eric Broda

This error occurs due to the smaller dataset. To resolve it, increase the amount of training data and split the train set 80:20.

Answered By: Denny Prakash

The error occurred for us because we forgot to set validation_data in the fit() method while using callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=1)].

The code causing the error is:

self.model.fit(
        x=x_train,
        y=y_train,
        callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=1)],
        verbose=True)

Adding validation_data=(x_validate, y_validate) to the fit() call fixed it:

self.model.fit(
        x=x_train,
        y=y_train,
        callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=1)],
        validation_data=(x_validate, y_validate),
        verbose=True)
Answered By: menrfa

I got this warning too. It appeared after switching to the master branch of Keras 2.2.4 to get the validation_freq functionality enabled:

//anaconda3/lib/python3.7/site-packages/keras/callbacks/callbacks.py:846: RuntimeWarning: Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,accuracy
  (self.monitor, ','.join(list(logs.keys()))), RuntimeWarning

However, despite the warning, early stopping on val_loss still works (at least for me). For example, this is the output I received when training stopped early:

Epoch 00076: early stopping

Before that Keras update, early stopping worked on val_loss with no warning.

Don’t ask me why it works because I haven’t a clue.

(You can verify this behavior with a small example that you know should stop early.)

Answered By: Luca Urbinati

I got this error message using fit_generator. The error appeared after the first epoch had finished.

The problem was that I had set validation_freq=20 in fit_generator parameters.

Keras executes the callbacks list at the end of the first epoch, but it doesn’t actually calculate val_loss until epoch 20, so val_loss was not (yet) available.

Setting validation_freq=1 fixed the problem.
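A toy simulation (plain Python, not the actual Keras internals) of why this happens: val_loss only appears in the epoch-end logs dict on epochs where validation actually runs, so with validation_freq=20 the monitored key is missing for the first 19 epochs:

```python
# Toy model of the epoch-end logs dict, assuming validation runs only
# every `validation_freq` epochs (mirrors the behavior described above,
# not the real Keras implementation).
def epoch_logs(epoch, validation_freq):
    logs = {"loss": 0.5}
    if (epoch + 1) % validation_freq == 0:  # epochs are 0-indexed
        logs["val_loss"] = 0.4
    return logs

print("val_loss" in epoch_logs(0, validation_freq=20))  # False
print("val_loss" in epoch_logs(0, validation_freq=1))   # True
```

At the end of epoch 0 the EarlyStopping callback looks up "val_loss" in that dict, finds nothing, and emits exactly the warning shown in the question.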

Answered By: longdragon

My problem was that I called these Callbacks with the parameter "val_acc".
The right parameter is "val_accuracy".

The solution was right there in my error message, in the sentence: "Available metrics are: …"
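If your code has to run across Keras versions, one hedged approach (the helper below is my own sketch, not a Keras API) is to pick whichever accuracy key the installed version actually reports:

```python
# Hypothetical helper: older Keras logs 'val_acc', newer versions log
# 'val_accuracy'; return whichever key is present in the metrics list.
def pick_accuracy_monitor(available_metrics):
    for name in ("val_accuracy", "val_acc"):
        if name in available_metrics:
            return name
    raise KeyError("no validation accuracy metric among: %s"
                   % ", ".join(available_metrics))

monitor = pick_accuracy_monitor(["loss", "accuracy", "val_loss", "val_accuracy"])
# then: keras.callbacks.EarlyStopping(monitor=monitor, patience=5)
```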

Answered By: Mare Seestern

Using tf.compat.v1.disable_eager_execution() solved the problem for me. Trying validation_freq=1 is also a good idea, but then you have to wait for the terminal output of each completed epoch.
I therefore recommend observing the results with TensorBoard, Weights & Biases, etc.

Answered By: dtlam26

Try to avoid using tf.keras when importing dependencies. It worked for me when I imported directly from Keras (e.g., for layers and callbacks).

Answered By: Fuad Numan

You should set the monitor parameter for early stopping.
The default value of the monitor parameter is "val_loss":

keras.callbacks.EarlyStopping(monitor='val_loss', patience=5)

If you do not pass validation_data to your model, you don't have a val_loss.

So either set validation_data inside the fit() call, or change the monitor value to "loss":

keras.callbacks.EarlyStopping(monitor='loss', patience=5)
Answered By: Ali karimi

Change the monitor from 'val_loss' to 'loss'. That is, change

ES = EarlyStopping(monitor="val_loss", min_delta=0.001, patience=3, mode="min", verbose=1)

to:

ES = EarlyStopping(monitor="loss", min_delta=0.001, patience=3, mode="min", verbose=1)
Answered By: Kritthanit.M

Also, check your validation_freq input, as val_loss is only available after it has been computed for the first time.

I.e., if your early stopping method would stop at epoch 5 but your validation_freq is set to 10, val_loss won't be available yet.

The EarlyStopping Keras callback requires val_loss to be computed on every epoch for monitoring purposes.

Answered By: Pipper Tetsing

If you are too lazy to track down the root cause, try something like this.
Quite often EarlyStopping fails because it runs too soon. Setting strict to False avoids crashing when the metric cannot be checked, and verbose logs what happened, so you can confirm it stopped crashing after the first epoch.

EarlyStopping(
 monitor="val_loss",
 mode="min",
 verbose = True,
 strict=False)
Answered By: dc914337