InvalidArgumentError: Input to reshape is a tensor with 7872512 values, but the requested shape requires a multiple of 6400

Question:

Hello everyone. I am new to deep learning, and I am trying to implement an image classifier. When I run the cell containing the fit method, it shows the following error. I don't know how to resolve it; I have checked various similar posts on this error, but I don't understand them. Please help me if you know how to resolve it, and also please explain briefly why this problem occurs.

InvalidArgumentError: Input to reshape is a tensor with 7872512
values, but the requested shape requires a multiple of 6400 [[node
sequential_7/flatten_6/Reshape (defined at
:1) ]]
[Op:__inference_train_function_5120]

Function call stack: train_function

import pandas as pd
import numpy as np
import cv2
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
training_set = train_datagen.flow_from_directory('drive/MyDrive/EmotionDetectionDataset/dataset/train')


test_datagen = ImageDataGenerator(rescale=1. / 255)
validation_generator = test_datagen.flow_from_directory('drive/MyDrive/EmotionDetectionDataset/dataset/test',target_size=(48,48),batch_size=32,class_mode='categorical')

test_set = validation_generator
cnn = tf.keras.models.Sequential()


cnn.add(tf.keras.layers.Conv2D(filters=32,kernel_size = 3,activation='relu',input_shape=[48,48,3]))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2))



cnn.add(tf.keras.layers.Conv2D(filters=64,kernel_size = 3,activation='relu'))
cnn.add(tf.keras.layers.MaxPool2D(pool_size=(2,2),strides=2))

cnn.add(tf.keras.layers.Flatten())

cnn.add(tf.keras.layers.Dense(units=128,activation='relu'))


cnn.add(tf.keras.layers.Dense(units=7,activation='softmax'))

cnn.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy'])


cnn.fit(x=training_set,validation_data=test_set,epochs=25)
Asked By: user14233799


Answers:

You forgot to specify the target size for the training set. The default is (256, 256), but your model expects (48, 48):

training_set = train_datagen.flow_from_directory('drive/MyDrive/EmotionDetectionDataset/dataset/train', target_size=(48,48),batch_size=32,class_mode='categorical')
validation_generator = test_datagen.flow_from_directory('drive/MyDrive/EmotionDetectionDataset/dataset/test',target_size=(48,48),batch_size=32,class_mode='categorical')
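To see where the two numbers in the error message come from, here is a quick sketch of the shape arithmetic for this architecture (assuming Conv2D's default 'valid' padding with stride 1, and MaxPool2D with pool size 2 and stride 2, as in the question's code):

```python
def conv_valid(n, k=3):
    """Output side length of a 'valid' convolution with stride 1."""
    return n - k + 1

def pool(n, size=2, stride=2):
    """Output side length of max pooling (floor division)."""
    return (n - size) // stride + 1

def flatten_units(side, channels=64):
    """Units produced by Flatten after Conv(3)->Pool->Conv(3)->Pool."""
    n = pool(conv_valid(side))   # first Conv2D(32) + MaxPool2D
    n = pool(conv_valid(n))      # second Conv2D(64) + MaxPool2D
    return n * n * channels

# What the model was built for (input_shape=[48, 48, 3]):
print(flatten_units(48))         # 6400 -- the multiple the reshape requires

# What the generator actually fed it (default target_size=(256, 256)):
print(flatten_units(256))        # 246016 per image
print(flatten_units(256) * 32)   # 7872512 -- the tensor size in the error (batch of 32)
```

The model's Flatten layer was built expecting 6400 values per image, but each 256x256 batch delivers 246016 per image, so the reshape fails.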
Answered By: yudhiesh

The target_size in ImageDataGenerator is not the same as the input_shape in the model architecture. Make sure both are the same.

You can set the required size for ImageDataGenerator with target_size, and for the model architecture with input_shape in the first Conv2D layer.

training_set = train_datagen.flow_from_directory('path',  target_size=(48,48))

and

cnn.add(tf.keras.layers.Conv2D(filters=32,kernel_size = 3,activation='relu',input_shape=[48,48,3]))
Answered By: Abu Bakar Siddik