Why does performance on the training data deteriorate dramatically?

Question:

I am training a binary classifier that distinguishes between disease and non-disease.

When I train the model, the training loss decreases and AUC and accuracy increase.

But after a certain epoch, the training loss increases and AUC and accuracy decrease.

I don’t know why training performance degrades after a certain epoch.

I used a generic 1D CNN model and standard training methods; details here:

[Image: model summary table]

[Image: training process log output]

I have already tried the following (see the sketch after this list):

  1. batch shuffling
  2. introducing class weights
  3. changing the loss (binary_crossentropy → BinaryFocalLoss)
  4. changing the learning rate
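
A minimal sketch of how points 2–4 are typically wired up in Keras, assuming an already-built `model` and arrays `X_train`/`y_train`; `BinaryFocalCrossentropy` is Keras' built-in focal loss, used here as a stand-in for whichever `BinaryFocalLoss` implementation was actually used:

```python
import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

# class weights estimated from the (possibly imbalanced) training labels
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_train)
class_weight = {0: weights[0], 1: weights[1]}

# lower learning rate and a focal loss instead of plain binary cross-entropy
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss=tf.keras.losses.BinaryFocalCrossentropy(gamma=2.0),
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)

model.fit(
    X_train, y_train,
    validation_split=0.2,
    epochs=100,
    batch_size=32,
    shuffle=True,               # batch shuffling
    class_weight=class_weight,  # class weights
)
```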
Asked By: Zeus


Answers:

Your model is overfitting. That is why your accuracy increases and then begins to decrease. You should implement early stopping so training stops at the epoch with the best results, and also add dropout layers.
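
A minimal sketch of both suggestions in Keras; the architecture below is only a placeholder (your actual layers aren't readable from the image), and `X_train`/`y_train` are assumed to exist:

```python
import tensorflow as tf

# placeholder 1D CNN with dropout layers added for regularization
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, 7, activation="relu", input_shape=(1000, 1)),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# stop training once validation loss has not improved for 10 epochs
# and keep the weights from the best epoch
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True
)

model.fit(X_train, y_train, validation_split=0.2, epochs=200,
          callbacks=[early_stop])
```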

Answered By: Tom Lin

A few questions for you going forward.

  1. Do the training and validation accuracy keep dropping if you just let it run for, say, 100 epochs? That is definitely something I would try.
  2. Which optimizer are you using? SGD? ADAM?
  3. How large is your dropout rate? Maybe it is too large. Try training without dropout and check whether the behavior is still the same.

It is probably the optimizer.

Since you do not seem to augment your data (which could be an issue if the augmentation accidentally breaks some label associations), each epoch should see similar gradients. So my guess is that, at this point in the optimization process, the learning rate, and thus the update step, is not adjusted properly: instead of progressing further into the local optimum, the optimizer oversteps the minimum, and training and validation performance both drop.

This is an intuitive explanation, and the next things I would try (a minimal code sketch follows the list) are:

  • Scheduling the learning rate
  • Using a more sophisticated optimizer (starting with ADAM if you are not already using it)
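
For example, a minimal sketch of both ideas in Keras (the schedule parameters are only starting points, and `model`, `X_train`, `y_train` are assumed to exist):

```python
import tensorflow as tf

# Adam with an explicit starting learning rate
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# halve the learning rate whenever validation loss stops improving,
# so late in training the update steps no longer overshoot the minimum
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=5, min_lr=1e-6
)

model.fit(X_train, y_train, validation_split=0.2, epochs=100,
          callbacks=[reduce_lr])
```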
Answered By: mrk