Creating h5 file for storing a dataset to train super resolution GAN

Question:

I am trying to create an h5 file to store a dataset for training a super resolution GAN, where each training pair is a low-resolution (LR) and a high-resolution (HR) image. The dataset will hold the data in the following form:
[[LR1,HR1],[LR2,HR2],…[LRn,HRn]]. The HR images are 256×256 RGB and the LR images are 128×128 RGB.
I am unsure about the best way to store this in an h5 file, and whether I should scale the images by 255 before storing them.

I have written the following code to do so. Any help/suggestions would be highly appreciated.

import h5py
import numpy as np
import os
import cv2
import glob



def store_super_resolution_dataset_in_h5_file(path_to_LR,path_to_HR):
    '''This function takes the files with the same name from LR and HR folders and stores the new dataset in h5 format'''
    #create LR and HR image lists
    LR_images = glob.glob(path_to_LR+'*.jpg')
    HR_images = glob.glob(path_to_HR+'*.jpg')
    #sort the lists
    LR_images.sort()
    HR_images.sort()
    print('LR_images: ',LR_images)
    print('HR_images: ',HR_images)
    #create a h5 file
    h5_file = h5py.File('super_resolution_dataset.h5','w')
    #create a dataset in the h5 file
    dataset = h5_file.create_dataset('super_resolution_dataset',(len(LR_images),2,256,256),dtype='f')
    #store the images in the dataset
    for i in range(len(LR_images)):
        LR_image = cv2.imread(LR_images[i])
        HR_image = cv2.imread(HR_images[i])
        dataset[i,0,:,:] = LR_image
        dataset[i,1,:,:] = HR_image
    #close the h5 file
    h5_file.close()
Asked By: Deeptej


Answers:

There are two code segments below. The first shows my recommended method: loading the high-res and low-res images into separate datasets to reduce the HDF5 file size. The second simply corrects the errors in your code (modified to use the with/as context manager). Both segments begin after the #create a h5 file comment.

I ran a test with 43 images to compare resulting file sizes. Results are:

  • One combined dataset: 66.0 MB
  • Two separate datasets: 41.3 MB (37% reduction)

Recommended method using 2 datasets:

# get image dtypes and create a h5 file
LR_dt = cv2.imread(LR_images[0]).dtype
HR_dt = cv2.imread(HR_images[0]).dtype
with h5py.File('low_hi_resolution_dataset.h5','w') as h5_file:
    #create 2 datasets for LR and HR images in the h5 file
    lr_ds = h5_file.create_dataset('low_res_dataset',(len(LR_images),128,128,3),dtype=LR_dt)
    hr_ds = h5_file.create_dataset('hi_res_dataset',(len(LR_images),256,256,3),dtype=HR_dt)
    #store the images in the dataset
    for i in range(len(LR_images)):
        LR_image = cv2.imread(LR_images[i])
        HR_image = cv2.imread(HR_images[i])
        lr_ds[i] = LR_image
        hr_ds[i] = HR_image
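On the scaling question: store the raw uint8 bytes in the file and divide by 255 when you read a batch, rather than storing floats (which would quadruple the file size). A minimal, self-contained sketch of the load-time scaling; it writes a tiny stand-in file (`demo_dataset.h5`, random data, hypothetical names) so it runs on its own:

```python
import h5py
import numpy as np

# Build a tiny stand-in file with random uint8 images (2 pairs) so the
# example is self-contained; point this at your real file in practice.
with h5py.File('demo_dataset.h5', 'w') as f:
    f.create_dataset('low_res_dataset',
                     data=np.random.randint(0, 256, (2, 128, 128, 3), dtype=np.uint8))
    f.create_dataset('hi_res_dataset',
                     data=np.random.randint(0, 256, (2, 256, 256, 3), dtype=np.uint8))

# Read one pair back and scale to [0, 1] at load time instead of
# storing pre-scaled floats in the file.
with h5py.File('demo_dataset.h5', 'r') as f:
    lr = f['low_res_dataset'][0].astype(np.float32) / 255.0
    hr = f['hi_res_dataset'][0].astype(np.float32) / 255.0
```

This keeps the file compact while your training loop still sees normalized float32 tensors.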

Modifications to your method:

# get LR image dtype and create a h5 file
LR_dt = cv2.imread(LR_images[0]).dtype
with h5py.File('super_resolution_dataset.h5','w') as h5_file:
    #create a dataset in the h5 file
    dataset = h5_file.create_dataset('super_resolution_dataset',(len(LR_images),2,256,256,3),dtype=LR_dt)
    #store the images in the dataset
    for i in range(len(LR_images)):
        LR_image = cv2.imread(LR_images[i])
        HR_image = cv2.imread(HR_images[i])
        dataset[i,0,0:128,0:128,:] = LR_image
        dataset[i,1,:,:,:] = HR_image
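If you do use the combined layout, the LR image sits zero-padded in a 256×256 slot and has to be sliced back out on read. A self-contained round-trip sketch with random stand-in data (dataset name and slicing follow the snippet above):

```python
import h5py
import numpy as np

# Random stand-in images in place of cv2.imread results
lr_img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
hr_img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# Write one pair into the combined (n, 2, 256, 256, 3) layout;
# the LR image only fills the top-left 128x128 corner of its slot.
with h5py.File('combined_demo.h5', 'w') as f:
    ds = f.create_dataset('super_resolution_dataset',
                          (1, 2, 256, 256, 3), dtype=lr_img.dtype)
    ds[0, 0, 0:128, 0:128, :] = lr_img
    ds[0, 1] = hr_img

# Read back, cropping the LR image out of its zero-padded slot
with h5py.File('combined_demo.h5', 'r') as f:
    lr_back = f['super_resolution_dataset'][0, 0, 0:128, 0:128, :]
    hr_back = f['super_resolution_dataset'][0, 1]
```

The wasted padding in each LR slot is exactly why the two-dataset layout above comes out smaller.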
Answered By: kcw78