How to resolve the following error with CNN Python code?

Question:

Image data description: 2D binary images of size 200×200.
There are 123 labels (classes), and each class contains 10 image frames; the first 4 images of each class are used as test cases, and the remaining 6 form the training dataset.
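
For reference, a quick sketch of the array shapes I expect with this split (assuming exactly 123 classes with 6 training and 4 test frames each):

# Expected shapes: 200x200 single-channel frames, one-hot targets with 123 columns.
n_classes, n_train_per_class, n_test_per_class = 123, 6, 4
print('x_train:', (n_classes * n_train_per_class, 200, 200, 1))  # (738, 200, 200, 1)
print('y_train:', (n_classes * n_train_per_class, n_classes))    # (738, 123)
print('x_test: ', (n_classes * n_test_per_class, 200, 200, 1))   # (492, 200, 200, 1)
print('y_test: ', (n_classes * n_test_per_class, n_classes))     # (492, 123)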

To the best of my knowledge, I adapted the CNN code below to classify this image data, but I am getting the following error:

WARNING:tensorflow:From C:\Users\hp\PycharmProjects\FirstProject3\venv\lib\site-packages\tensorflow\python\framework\op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.

Instructions for updating:

Colocations handled automatically by placer.

WARNING:tensorflow:From C:\Users\hp\PycharmProjects\FirstProject3\venv\lib\site-packages\keras\backend\tensorflow_backend.py:3445: calling dropout (from tensorflow.python.ops.nn_ops) with keep_prob is deprecated and will be removed in a future version.

Instructions for updating:

Please use `rate` instead of `keep_prob`. Rate should be set to `rate = 1 - keep_prob`.

Traceback (most recent call last):

  File "C:/Users/hp/PycharmProjects/FirstProject3/test.py", line 79, in <module>
    model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))

  File "C:UsershpPycharmProjectsFirstProject3venvlibsite-packageskerasenginetraining.py", line 952, in fit
    batch_size=batch_size)

  File "C:UsershpPycharmProjectsFirstProject3venvlibsite-packageskerasenginetraining.py", line 789, in _standardize_user_data
    exception_prefix='target')

  File "C:UsershpPycharmProjectsFirstProject3venvlibsite-packageskerasenginetraining_utils.py", line 138, in standardize_input_data
    str(data_shape))

ValueError: Error when checking target: expected dense_2 to have shape (123,) but got array with shape (124,)

How can I resolve this error?

My Code:

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
import numpy as np
import cv2
import os

path1 = 'C:\\Data\\For new Paper3Old\\GaitDatasetB-silh_PerfectlyAlingedImages_EnergyImage\\'  # backslashes doubled so the trailing one doesn't escape the closing quote
all_images = []
all_labels = []
subjects = os.listdir(path1)
numberOfSubject = len(subjects)
print('Number of Subjects: ', numberOfSubject)
# Training set: for each subject, skip the first 4 frames and use the remaining 6.
for number1 in range(0, numberOfSubject):
    path2 = path1 + subjects[number1] + '/'
    sequences = os.listdir(path2)
    numberOfsequences = len(sequences)
    for number2 in range(4, numberOfsequences):
        path3 = path2 + sequences[number2]
        img = cv2.imread(path3, 0)  # read as a single-channel grayscale image
        img = img.reshape(200, 200, 1)
        all_images.append(img)
        all_labels.append(number1+1)
x_train = np.array(all_images)
y_train = np.array(all_labels)
y_train = keras.utils.to_categorical(y_train)
print(y_train)

print(x_train)


all_images = []
all_labels = []

# Test set: for each subject, use the first 4 frames.
for number1 in range(0, numberOfSubject):
    path2 = path1 + subjects[number1] + '/'
    sequences = os.listdir(path2)
    numberOfsequences = len(sequences)
    for number2 in range(0, 4):
        path3 = path2 + sequences[number2]
        img = cv2.imread(path3, 0)  # read as a single-channel grayscale image
        img = img.reshape(200, 200, 1)
        all_images.append(img)
        all_labels.append(number1+1)
x_test = np.array(all_images)
y_test = np.array(all_labels)
y_test = keras.utils.to_categorical(y_test)
print(y_test)

print(x_test)

batch_size = 738
num_classes = 123
epochs = 12

model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), activation='relu', input_shape=(200,200,1)))
model.add(Conv2D(64, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(738, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))

model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])

model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))

score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

Reference for Code: https://towardsdatascience.com/build-your-own-convolution-neural-network-in-5-mins-4217c2cf964f

Asked By: SANJAY GUPTA


Answers:

Your data has 124 classes while you're assigning num_classes=123. Because the loops append number1 + 1 as the label, the labels run from 1 to 123, and keras.utils.to_categorical called without num_classes one-hot encodes them into max(label) + 1 = 124 columns, which doesn't match the 123-unit output layer. The simplest fix is to label the subjects from 0 (append number1 instead of number1 + 1) so that to_categorical produces exactly 123 columns.

The warnings appear because you have a recent TensorFlow version and Keras hasn't been updated yet to fully support it; they are unrelated to the ValueError.
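
For illustration, a minimal, self-contained sketch of why the target ends up with 124 columns and how 0-based labels fix it (labels_1_based and labels_0_based are stand-in arrays, not variables from the question's code):

import numpy as np
from keras.utils import to_categorical

num_classes = 123

# Labels 1..123 (what the question's loops produce via number1 + 1):
# to_categorical infers max(label) + 1 classes when num_classes is omitted,
# so the one-hot target gets 124 columns and no longer matches Dense(123).
labels_1_based = np.arange(1, num_classes + 1)
print(to_categorical(labels_1_based).shape)               # (123, 124) -> mismatch

# Fix: use 0-based labels (append number1 instead of number1 + 1) and pass
# num_classes explicitly, so the target shape matches the output layer.
labels_0_based = labels_1_based - 1
print(to_categorical(labels_0_based, num_classes).shape)  # (123, 123) -> matches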

Answered By: Vlad

Your data has 124 classes while you're assigning num_classes=123.
Check the versions of TensorFlow and Keras.
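
If you want to confirm the version mismatch, a quick way to print the installed versions (assuming standard pip installs of both packages):

# Print the installed TensorFlow and Keras versions to check compatibility.
import tensorflow as tf
import keras

print('TensorFlow:', tf.__version__)
print('Keras:', keras.__version__)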

Answered By: karthik mukiri