how to save resized images using ImageDataGenerator and flow_from_directory in keras

情深已故 2021-02-14 06:47

I am resizing my RGB images, stored in a folder (two classes), using the following code:

from keras.preprocessing.image import ImageDataGenerator
dataset=ImageDataGenerator()
dataset.flow_from_directory('/home/1',target_size=(50,50),save_to_dir='/home/resized',class_mode='binary',save_prefix='N',save_format='jpeg',batch_size=10)


        
4 Answers
  • 2021-02-14 07:07

    The flow_from_directory method gives you an "iterator", as described in your output. An iterator doesn't really do anything on its own; it waits to be iterated over, and only then is the actual data read and generated.

    In Keras, an iterator used for fitting is consumed like this:

    generator = dataset.flow_from_directory('/home/1',target_size=(50,50),save_to_dir='/home/resized',class_mode='binary',save_prefix='N',save_format='jpeg',batch_size=10)
    
    for inputs, outputs in generator:
        # do things with each batch of inputs and outputs
    

    Normally, instead of writing the loop above, you just pass the generator to the fit_generator method; there is no real need for an explicit for loop:

    model.fit_generator(generator, ......)
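
    Note that in TensorFlow 2.x, fit_generator is deprecated and model.fit accepts the generator directly, so the equivalent call there is simply:

    model.fit(generator, ......)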
    

    Keras will only save images after they're loaded and augmented by iterating over the generator.
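
    If you only want the resized copies written to save_to_dir, without training anything, a minimal sketch using the generator defined above is to make a single pass over it; len(generator) gives the number of batches per epoch:

    for _ in range(len(generator)):
        next(generator)    # loading each batch triggers the resize and the save to /home/resized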

  • 2021-02-14 07:15

    Here's a very simple version of saving augmented copies of one image wherever you want:

    Step 1: Initialize the image data generator

    Here we define what changes we want to make to the original image in order to generate the augmented copies.
    You can read up on the different effects here: https://keras.io/preprocessing/image/

    from keras.preprocessing.image import ImageDataGenerator

    datagen = ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
                                 height_shift_range=0.1, shear_range=0.15,
                                 zoom_range=0.1, channel_shift_range=10, horizontal_flip=True)
    

    Step 2: Here we pick the original image to perform the augmentation on

    Read in the image:

    import numpy as np
    from scipy import ndimage   # note: ndimage.imread needs an older SciPy (it was removed in SciPy 1.2)

    image_path = 'C:/Users/Darshil/gitly/Deep-Learning/My Projects/CNN_Keras/test_augment/caty.jpg'

    image = np.expand_dims(ndimage.imread(image_path), 0)
    

    Step 3: Pick where you want to save the augmented images

    save_here = 'C:/Users/Darshil/gitly/Deep-Learning/My Projects/CNN_Keras/test_augment'
    

    Step 4: Fit the generator on the original image (fit is only required when featurewise options such as featurewise_center or zca_whitening are used, but it does no harm here)

    datagen.fit(image)
    

    Step 5: Iterate over the flow and save using the "save_to_dir" parameter

    for x, val in zip(datagen.flow(image,                  # the image we chose
                                   save_to_dir=save_here,  # where to save the augmented copies
                                   save_prefix='aug',      # files are named 'aug_<index>_<number>.png'
                                   save_format='png'),
                      range(10)):                          # zip with range(10) so we stop after 10 images; otherwise the flow loops forever
        pass
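
    If your SciPy version no longer includes ndimage.imread, a small alternative sketch for Step 2, using Keras' own image helpers and the same example path as above, would be:

    import numpy as np
    from keras.preprocessing.image import load_img, img_to_array

    image_path = 'C:/Users/Darshil/gitly/Deep-Learning/My Projects/CNN_Keras/test_augment/caty.jpg'
    image = np.expand_dims(img_to_array(load_img(image_path)), 0)   # shape (1, height, width, channels)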
    
  • 2021-02-14 07:17

    It's only a declaration; you must actually consume that generator, for example by calling .next():

    from keras.preprocessing.image import ImageDataGenerator
    dataset=ImageDataGenerator()
    image = dataset.flow_from_directory('/home/1',target_size=(50,50),save_to_dir='/home/resized',class_mode='binary',save_prefix='N',save_format='jpeg',batch_size=10)
    image.next()
    

    Then you will see the images in /home/resized.
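
    Each .next() call processes one batch, so with batch_size=10 it writes 10 resized files per call. A quick sanity check, using the same save_to_dir path as above, could be:

    import os
    print(len(os.listdir('/home/resized')), 'files saved')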

  • 2021-02-14 07:21

    If you want to save the images under a folder with the same name as their label, you can loop over a list of labels and call the augmentation code inside the loop.

    from tensorflow.keras.preprocessing.image import ImageDataGenerator  
    
    # Augmentation + save augmented images under the aug_images folder
    
    IMAGE_SIZE = 224
    BATCH_SIZE = 500
    LABELS = ['lbl_a','lbl_b','lbl_c']
    
    for label in LABELS:
      datagen_kwargs = dict(rescale=1./255)  
      dataflow_kwargs = dict(target_size=(IMAGE_SIZE, IMAGE_SIZE), 
                            batch_size=BATCH_SIZE, interpolation="bilinear")
    
      train_datagen = ImageDataGenerator(
        rotation_range=40,
        horizontal_flip=True,
        width_shift_range=0.1, height_shift_range=0.1,
        shear_range=0.1, zoom_range=0.1,
        **datagen_kwargs)
    
      train_generator = train_datagen.flow_from_directory(
          'original_images', subset="training", shuffle=True,
          save_to_dir='aug_images/'+label, save_prefix='aug',
          classes=[label], **dataflow_kwargs)
      
      # Following line triggers execution of train_generator
      batch = next(train_generator) 
    

    So why do this when the generator can be passed to the model directly? Because you may want to use tflite-model-maker, which does not accept a generator and instead expects labelled data in one folder per label:

    from tflite_model_maker import ImageClassifierDataLoader
    data = ImageClassifierDataLoader.from_folder('aug_images')
    

    Result

    aug_images
    | 
    |__ lbl_a
    |   |
    |   |_____aug_img_a.png
    |
    |__ lbl_b
    |   |
    |   |_____aug_img_b.png
    | 
    |__ lbl_c
    |   |
    |   |_____aug_img_c.png
    

    Note: You need to ensure the target folders (aug_images/lbl_a, aug_images/lbl_b, ...) already exist; save_to_dir does not create them for you.
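
    A minimal sketch to create them up front, reusing the same LABELS list and aug_images root as above, could be:

    import os

    LABELS = ['lbl_a', 'lbl_b', 'lbl_c']
    for label in LABELS:
        os.makedirs(os.path.join('aug_images', label), exist_ok=True)   # create aug_images/<label> if missing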
