How can I improve my CNN model? How do I handle frozen validation accuracy?

Question:

The validation set accuracy is frozen at 0.0909. Is this underfitting? How do I address the issue to get better model accuracy? The model is later converted to TFLite to be deployed on Android.
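
For context, this is a minimal sketch of the Keras-to-TFLite conversion step mentioned above; model is the trained Sequential model and the output filename is only illustrative:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open('model.tflite', 'wb') as f:
    f.write(tflite_model)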

My model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import categorical_crossentropy

model = Sequential([
    Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Conv2D(filters=64, kernel_size=(3, 3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Conv2D(filters=128, kernel_size=(3, 3), activation='relu', padding='same'),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Flatten(),
    Dense(units=train_batches.num_classes, activation='softmax')
])

model.summary()

Layer (type)                   Output Shape              Param #
=================================================================
conv2d (Conv2D)                (None, 224, 224, 32)      896
max_pooling2d (MaxPooling2D)   (None, 112, 112, 32)      0
conv2d_1 (Conv2D)              (None, 112, 112, 64)      18496
max_pooling2d_1 (MaxPooling2D) (None, 56, 56, 64)        0
conv2d_2 (Conv2D)              (None, 56, 56, 128)       73856
max_pooling2d_2 (MaxPooling2D) (None, 28, 28, 128)       0
flatten (Flatten)              (None, 100352)            0
dense (Dense)                  (None, 11)                1103883
=================================================================
Total params: 1,197,131
Trainable params: 1,197,131
Non-trainable params: 0


model.compile(optimizer=Adam(learning_rate=0.01), loss=categorical_crossentropy, metrics=['accuracy'])

model.fit(x=train_batches, validation_data=valid_batches, epochs=10, verbose=2)

Epoch 1/10
53/53 - 31s - loss: 273.5211 - accuracy: 0.0777 - val_loss: 2.3989 - val_accuracy: 0.0909
Epoch 2/10
53/53 - 27s - loss: 2.4001 - accuracy: 0.0928 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 3/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0795 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 4/10
53/53 - 29s - loss: 2.4006 - accuracy: 0.0739 - val_loss: 2.3989 - val_accuracy: 0.0909
Epoch 5/10
53/53 - 29s - loss: 2.3999 - accuracy: 0.0720 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 6/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0720 - val_loss: 2.3986 - val_accuracy: 0.0909
Epoch 7/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0682 - val_loss: 2.3993 - val_accuracy: 0.0909
Epoch 8/10
53/53 - 29s - loss: 2.3995 - accuracy: 0.0871 - val_loss: 2.3986 - val_accuracy: 0.0909  
Epoch 9/10
53/53 - 29s - loss: 2.4008 - accuracy: 0.0852 - val_loss: 2.3988 - val_accuracy: 0.0909
Epoch 10/10
53/53 - 28s - loss: 2.4004 - accuracy: 0.0833 - val_loss: 2.3991 - val_accuracy: 0.0909

Answers:

Try a lower learning rate. Also check your dataset: if it is a small one, use image augmentation to enlarge it so that the model can learn it better. Use batch normalisation and regularisation techniques, as well as a learning-rate scheduler, as your gradient descent is falling into a local minimum.

Answered By: Dipesh Gupta
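
Below is a minimal sketch of the suggestions above (lower learning rate, augmentation, batch normalisation, dropout regularisation, and a learning-rate scheduler). It assumes the data comes from directories read with ImageDataGenerator; the paths, augmentation ranges, and callback settings are only illustrative and not the asker's actual pipeline:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, BatchNormalization, Flatten, Dense, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augment the (small) training set; the validation data is only rescaled.
train_gen = ImageDataGenerator(rescale=1./255, rotation_range=15,
                               width_shift_range=0.1, height_shift_range=0.1,
                               zoom_range=0.1, horizontal_flip=True)
valid_gen = ImageDataGenerator(rescale=1./255)

train_batches = train_gen.flow_from_directory('data/train', target_size=(224, 224), batch_size=32)
valid_batches = valid_gen.flow_from_directory('data/valid', target_size=(224, 224), batch_size=32)

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(224, 224, 3)),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Conv2D(64, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Conv2D(128, (3, 3), activation='relu', padding='same'),
    BatchNormalization(),
    MaxPool2D(pool_size=(2, 2), strides=2),
    Flatten(),
    Dropout(0.5),  # regularisation before the classifier
    Dense(train_batches.num_classes, activation='softmax')
])

# Much lower learning rate than the original 0.01.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss='categorical_crossentropy', metrics=['accuracy'])

# Reduce the learning rate when the validation loss stops improving.
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2)

model.fit(train_batches, validation_data=valid_batches, epochs=10,
          callbacks=[reduce_lr], verbose=2)

Note that the training log above shows the loss stuck near ln(11) ≈ 2.398 and accuracy near 1/11 ≈ 0.0909, which is chance level for 11 classes, so lowering the learning rate from 0.01 is likely the single most important change.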