How can I train my CNN model on a dataset from a .csv file?

Question:

I’m a Python beginner and I’m using Google Colab to train my first CNN model. I’m blocked on the training part: I know I have to use model.fit() to train the model, but I have no idea what goes inside model.fit(). I’m also not sure about splitting my dataset from a CSV file. Here’s my code:

from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import to_categorical
from keras.preprocessing import image
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from tqdm import tqdm
from numpy.random import RandomState
import csv

df = pd.read_csv('classes.csv')
print(df.head(3500))
#splitting the dataset in training and testing sets.
rng = RandomState()

train = df.sample(frac=0.7, random_state=rng)
test = df.loc[~df.index.isin(train.index)]



#model's structure

model = Sequential()
#convolutional layer
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=(28,28,1)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
#flatten output of conv
model.add(Flatten())
#hidden layer
model.add(Dense(128, activation='relu')) 
#output layer
model.add(Dense(10, activation='softmax')) 
#compiling sequential model
model.compile(loss='categorical_crossentropy',optimizer='Adam',metrics=['accuracy'])
model.summary()
#training the model
model.fit()
Asked By: taoufik souidi


Answers:

OK, let's see if we can help. First, about splitting the data frame: it is best to split it into a train set, a validation set, and a test set. The easiest way to do that is with sklearn's train_test_split. The documentation for that is [here.][1] Use the code below to split the data frame into 3 sets.

from sklearn.model_selection import train_test_split
train_split = .8  # fraction of df to use for training
test_split = .05  # fraction of df to use for testing
# the remaining fraction, 1 - train_split - test_split = .15, goes to validation
train_df, dummy_df = train_test_split(df, train_size=train_split, shuffle=True, random_state=123)
dummy_split = test_split / (1.0 - train_split)  # fraction of dummy_df that becomes the test set
test_df, valid_df = train_test_split(dummy_df, train_size=dummy_split, random_state=123)  # note dummy_df is 20% of df
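
As a quick sanity check (a minimal sketch, assuming df is your full data frame), you can print the size of each split to confirm the 80/5/15 proportions:

print('train:', len(train_df), ' test:', len(test_df), ' valid:', len(valid_df))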

Now you need to create the train, test, and validation generators. I am assuming you have image inputs. Use ImageDataGenerator.flow_from_dataframe() to create your generators. Documentation for that is [here.][2] It is typical to scale the pixel values either to the range 0 to 1 or to the range -1 to +1. I prefer the latter. Below is the code for the generators.

import tensorflow as tf

def scalar(img):
    return img / 127.5 - 1  # scale pixel values to the range -1 to +1

x_col = 'Filename'  # set this to the df column that holds the full path to the image file
y_col = 'Label'     # set this to the df column that holds the class labels
batch_size = 32     # set this to the batch size
class_mode = 'categorical'
image_shape = (64, 64)  # set this to the desired image size
# tgen augments the training images with horizontal flips; gen (no augmentation) is for test and validation
tgen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=scalar, horizontal_flip=True)
gen = tf.keras.preprocessing.image.ImageDataGenerator(preprocessing_function=scalar)
train_gen = tgen.flow_from_dataframe(train_df, x_col=x_col, y_col=y_col,
                                     target_size=image_shape,
                                     class_mode=class_mode, batch_size=batch_size,
                                     shuffle=True, seed=123)
test_gen = gen.flow_from_dataframe(test_df, x_col=x_col, y_col=y_col,
                                   target_size=image_shape, class_mode=class_mode,
                                   batch_size=batch_size, shuffle=False)
valid_gen = gen.flow_from_dataframe(valid_df, x_col=x_col, y_col=y_col,
                                    target_size=image_shape, class_mode=class_mode,
                                    batch_size=batch_size, shuffle=False)
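
One caveat: your model is built with input_shape=(28,28,1), so the generators must produce images of that shape. A minimal sketch of the adjustment, assuming your images can be loaded as 28x28 grayscale:

image_shape = (28, 28)  # must match the model's input_shape
train_gen = tgen.flow_from_dataframe(train_df, x_col=x_col, y_col=y_col,
                                     target_size=image_shape, color_mode='grayscale',
                                     class_mode=class_mode, batch_size=batch_size,
                                     shuffle=True, seed=123)

Apply the same target_size and color_mode to test_gen and valid_gen. color_mode='grayscale' yields a single channel; the default is 'rgb', which would produce shape (28, 28, 3) and fail against your first Conv2D layer.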

Now it is time to train your model using model.fit(). Documentation is [here.][3]

epochs = 15 # set this to the number of epochs to run
history=model.fit(train_gen, epochs=epochs, validation_data= valid_gen, verbose=1)
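
For reference, if your CSV holds raw pixel values instead of file paths (an MNIST-style CSV), you can skip the generators entirely and pass NumPy arrays straight to model.fit(). A minimal sketch, assuming a 'label' column followed by 784 pixel columns (these column names are hypothetical), using the train/test data frames from your own sample()/loc split:

# hypothetical layout: a 'label' column, then 784 pixel columns
y_train = to_categorical(train['label'].values, num_classes=10)
x_train = train.drop(columns=['label']).values.reshape(-1, 28, 28, 1) / 255.0
y_test = to_categorical(test['label'].values, num_classes=10)
x_test = test.drop(columns=['label']).values.reshape(-1, 28, 28, 1) / 255.0
history = model.fit(x_train, y_train, epochs=15, batch_size=32,
                    validation_split=0.15, verbose=1)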

When the model has completed training, you want to see how well it performs on the test set. You do this using model.evaluate, as shown below.

accuracy = model.evaluate(test_gen, verbose=1)[1]  # evaluate returns [loss, accuracy]
print(accuracy)

You can use your model to make predictions using model.predict:

preds = model.predict(test_gen, verbose=1)

preds is a matrix whose number of rows equals the number of samples in the test set and whose number of columns equals the number of classes. Each entry is the probability that the image belongs to that class. You want to select the column with the highest probability value; use np.argmax to find the column (class) with the highest probability. You can get the test-set labels with labels = test_gen.labels. Code is below.

correct_preds = 0
for i in range(len(preds)):
    index = np.argmax(preds[i])  # column with highest probability
    label = test_gen.labels[i]   # true class of the image
    if index == label:
        correct_preds += 1
pred_accuracy = correct_preds / len(preds)
print(pred_accuracy)
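
The same computation can be written as one vectorized NumPy expression, which is handy for larger test sets:

pred_accuracy = np.mean(np.argmax(preds, axis=1) == test_gen.labels)
print(pred_accuracy)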
  [1]: https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html
  [2]: https://keras.io/api/preprocessing/image/
  [3]: https://www.tensorflow.org/api_docs/python/tf/keras/Model
Answered By: Gerry P