Google Colab is very slow compared to my PC

Question:

I’ve recently started to use Google Colab, and wanted to train my first Convolutional NN. I imported the images from my Google Drive thanks to the answer I got here.

Then I pasted my code to create the CNN into Colab and started the process.
Here is the complete code:

Part 1: Setting up Colab to import picture from my Drive

(Part 1 is copied from here, as it worked as expected for me.)

Step 1:

!apt-get install -y -qq software-properties-common python-software-properties module-init-tools
!add-apt-repository -y ppa:alessandro-strada/ppa 2>&1 > /dev/null
!apt-get update -qq 2>&1 > /dev/null
!apt-get -y install -qq google-drive-ocamlfuse fuse

Step 2:

from google.colab import auth
auth.authenticate_user()

Step 3:

from oauth2client.client import GoogleCredentials
creds = GoogleCredentials.get_application_default()
import getpass
!google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret} < /dev/null 2>&1 | grep URL
vcode = getpass.getpass()
!echo {vcode} | google-drive-ocamlfuse -headless -id={creds.client_id} -secret={creds.client_secret}

Step 4:

!mkdir -p drive
!google-drive-ocamlfuse drive

Step 5:

print('Files in Drive:')
!ls drive/

Part 2: Copy-pasting my CNN

I created this CNN following tutorials from a Udemy course. It uses Keras with TensorFlow as the backend.
For the sake of simplicity I am posting a really simple version, which is enough to show my problem.

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten 
from keras.layers import Dense 
from keras.layers import Dropout
from keras.optimizers import Adam 
from keras.preprocessing.image import ImageDataGenerator 

parameters

imageSize=32

batchSize=64

epochAmount=50

CNN

classifier=Sequential() 

classifier.add(Conv2D(32, (3, 3), input_shape = (imageSize, imageSize, 3), activation = 'relu')) #convolutional layer

classifier.add(MaxPooling2D(pool_size = (2, 2))) #pooling layer

classifier.add(Flatten())

ANN

classifier.add(Dense(units=64, activation='relu')) #hidden layer

classifier.add(Dense(units=1, activation='sigmoid')) #output layer

classifier.compile(optimizer = "adam", loss = 'binary_crossentropy', metrics = ['accuracy']) #training method

image preprocessing

train_datagen = ImageDataGenerator(rescale = 1./255,
                                   shear_range = 0.2,
                                   zoom_range = 0.2,
                                   horizontal_flip = True)

test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/training_set',
                                                 target_size = (imageSize, imageSize),
                                                 batch_size = batchSize,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('drive/School/sem-2-2018/BSP2/UdemyCourse/CNN/dataset/test_set',
                                            target_size = (imageSize, imageSize),
                                            batch_size = batchSize,
                                            class_mode = 'binary')

classifier.fit_generator(training_set,
                         steps_per_epoch = (8000//batchSize),
                         epochs = epochAmount,
                         validation_data = test_set,
                         validation_steps = (2000//batchSize))

Now comes my problem

First off, the training set I used is a database of 10000 dog and cat pictures of various resolutions (8000 in the training_set, 2000 in the test_set).

I ran this CNN on Google Colab (with GPU support enabled) and on my PC (tensorflow-gpu on a GTX 1060).

This is an intermediate result from my PC:

Epoch 2/50
63/125 [==============>...............] - ETA: 2s - loss: 0.6382 - acc: 0.6520

And this from Colab:

Epoch 1/50
13/125 [==>...........................] - ETA: 1:00:51 - loss: 0.7265 - acc: 0.4916

Why is Google Colab so slow in my case?

Personally, I suspect the bottleneck is pulling and then reading the images from my Drive, but I don’t know how to solve this other than choosing a different method to import the database.

Asked By: charelf


Answers:

It’s very slow to read files from Google Drive.

For example, I have one big file (39 GB).

It took more than 10 minutes when I ran ‘!cp drive/big.file /content/’.

After I shared my file and got the URL from Google Drive, it took 5 minutes when I ran ‘!wget -c -O big.file http://share.url.from.drive’. The download speed can go up to 130 MB/s.
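
In Colab cells, that comparison looks roughly like this (the share URL is just the placeholder from above):

# Slow: copy through the Drive mount (more than 10 minutes for the 39 GB file)
!cp drive/big.file /content/

# Faster: share the file, grab its public URL and download it directly (about 5 minutes)
!wget -c -O big.file http://share.url.from.drive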

Answered By: Feng

As @Feng has already noted, reading files from Drive is very slow. This tutorial suggests using a memory-mapped file format such as HDF5 or LMDB to overcome this issue. That way the I/O operations are much faster (for a complete explanation of the speed gain of the HDF5 format, see this).
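
A minimal sketch of the HDF5 idea, assuming h5py is available and using stand-in arrays (in practice they would hold your decoded images and labels):

import h5py
import numpy as np

# Stand-ins for illustration: in practice these would be your decoded images and labels.
images = np.zeros((8000, 32, 32, 3), dtype=np.uint8)
labels = np.zeros(8000, dtype=np.uint8)

# Pack everything into a single HDF5 file once, then upload that one file.
with h5py.File('dataset.h5', 'w') as f:
    f.create_dataset('images', data=images, compression='gzip')
    f.create_dataset('labels', data=labels)

# Reading it back is one large sequential read instead of thousands of small Drive accesses.
with h5py.File('dataset.h5', 'r') as f:
    x = f['images'][:]
    y = f['labels'][:]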

Answered By: MROB

I have the same question as to why the GPU on Colab seems to take at least as long as my local PC, so I can’t really be of help there. That being said, if you are trying to use your data locally, I have found the following process to be significantly faster than just using the upload function provided in Colab.

1.) mount google drive

# Run this cell to mount your Google Drive.
from google.colab import drive
drive.mount('/content/drive')

2.) Create a folder outside of the Google Drive folder where you want your data to be available.

3.) Use the following command to link the contents of your desired folder in Google Drive to the folder you created (note that ln -s creates a symbolic link rather than a physical copy):

  !ln -s "/content/drive/My Drive/path_to_folder_desired" "/path/to/the_folder/you created"

(This is referenced from another Stack Overflow response that I used to find a solution to a similar issue.)

4.) Now your data is available to you at the path "/path/to/the_folder/you created".

Answered By: Nadimprodutions

You can load your data as numpy arrays (.npy format) and use the flow method instead of flow_from_directory. Colab provides 25 GB of RAM, so even for big datasets you can load your entire data into memory. The speedup was found to be around 2.5x, with the same data generation steps (even faster than data stored on the Colab local disk, i.e. '/content', or on Google Drive).

Since Colab provides only a single-core CPU (2 threads per core), there seems to be a bottleneck with CPU-GPU data transfer (say on a K80 or T4 GPU), especially if you use a data generator for heavy preprocessing or data augmentation. You can also try setting different values for parameters like 'workers', 'use_multiprocessing' and 'max_queue_size' in the fit_generator method.
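
A minimal sketch of that idea, using stand-in arrays (in practice you would np.load() the .npy files you prepared) and the classifier from the question:

import numpy as np
from keras.preprocessing.image import ImageDataGenerator

# Stand-in arrays for illustration: in practice, x_train = np.load('x_train.npy'), etc.
x_train = np.random.randint(0, 256, size=(8000, 32, 32, 3), dtype=np.uint8)
y_train = np.random.randint(0, 2, size=(8000,))

datagen = ImageDataGenerator(rescale=1./255,
                             shear_range=0.2,
                             zoom_range=0.2,
                             horizontal_flip=True)

# flow() feeds batches straight from the in-memory arrays,
# so no per-image reads from Drive happen during training.
training_set = datagen.flow(x_train, y_train, batch_size=64)

classifier.fit_generator(training_set,
                         steps_per_epoch=len(x_train) // 64,
                         epochs=50,
                         workers=2,                 # the tuning knobs mentioned above
                         use_multiprocessing=False,
                         max_queue_size=10)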

Answered By: anilsathyan7

Reading files from Google Drive slows down your training process. The solution is to upload a zip file to Colab and unzip it there. I hope this is clear.
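
A minimal sketch of that approach, assuming the Drive is mounted at /content/drive and using a made-up archive name:

# Copy the single zip from Drive to the Colab VM, then unpack it locally.
# One large sequential copy is much faster than thousands of small Drive reads.
!cp "/content/drive/My Drive/dataset.zip" /content/
!unzip -q /content/dataset.zip -d /content/dataset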

Answered By: 红领巾

If you want to work with datasets from Kaggle, check this.

Remember: inside Google Colab, Linux commands are run by prefixing them with ‘!’

e.g.:

!mkdir -p ~/.kaggle        # the Kaggle API token (kaggle.json) goes inside this directory
!ls
!unzip -q downloaded_file.zip
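
A sketch of the rest of that flow, assuming the Kaggle CLI is available and 'owner/dataset-name' is a placeholder slug:

# Put the Kaggle API token in place (kaggle.json from your Kaggle account page,
# uploaded to the working directory first), then download and unpack locally.
!cp kaggle.json ~/.kaggle/
!chmod 600 ~/.kaggle/kaggle.json
!kaggle datasets download -d owner/dataset-name
!unzip -q dataset-name.zip -d /content/dataset
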
Answered By: Tushar Gupta

I had the same issue, and here’s how I solved it.

First, make sure the GPU is enabled (it is not by default) by going to Runtime -> Change runtime type and choosing GPU as your hardware accelerator.

Then, as shown here, you can use the cache() and prefetch() functions to optimize performance. Example:

# Load dataset
from tensorflow import keras

train_ds = keras.preprocessing.image_dataset_from_directory('Data/train', labels="inferred")
val_ds = keras.preprocessing.image_dataset_from_directory('Data/test', labels="inferred")

# Standardize data (optional)
from tensorflow.keras import layers
normalization_layer = layers.experimental.preprocessing.Rescaling(1./255)
train_ds = train_ds.map(lambda x, y: (normalization_layer(x), y))
val_ds = val_ds.map(lambda x, y: (normalization_layer(x), y))

# Cache to RAM (optional)
from tensorflow import data
AUTOTUNE = data.experimental.AUTOTUNE
train_ds = train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)

# Train (model is a compiled Keras model)
model.fit(train_ds, validation_data=val_ds, epochs=3)
Answered By: Simon Iyamu

Google Colab instances use local storage that is faster than Google Drive. Since you are accessing files from Google Drive (which has a larger access time), you are getting low speed. First copy the files to the Colab instance, then train your network.
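
A minimal sketch of that, assuming the Drive is mounted at /content/drive and using a made-up dataset path; the generators from the question can then point at the local copy:

# Copy the dataset from the mounted Drive to the VM's local disk once.
!cp -r "/content/drive/My Drive/CNN/dataset" /content/dataset

# Then read from the local copy during training, e.g.:
training_set = train_datagen.flow_from_directory('/content/dataset/training_set',
                                                 target_size = (imageSize, imageSize),
                                                 batch_size = batchSize,
                                                 class_mode = 'binary')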

Answered By: Muhammad Ali Abbas

I was facing the same issue. Here’s how I solved it:

  1. Upload the zip file of the dataset to Google Drive.
  2. Mount the drive in Colab and then unzip the dataset file into a separate folder (other than ../drive) in Colab itself.
  3. Do your business.

It worked for me. I don’t know the exact reason, but since Colab accesses its local directory faster than the mounted Drive directory, that may be the gist of the problem.

Answered By: Mrityu

In my case, the GPU on Colab is super fast compared to my Nvidia GPU card on the PC, based on training speeds. However, when running simulations, which I can only presume involve the CPU, my PC is nearly 50% faster (i7, 10th gen).

Answered By: Rowan Gontier