How to give variable size images as input in keras

我寻月下人不归 2020-12-19 07:44

I am writing code for two-class image classification using Keras with a TensorFlow backend. My images are stored in a folder on my computer, and I want to give these images, which have varying sizes, as input to the network.

3 Answers
  • 2020-12-19 08:24

    See the answer in https://github.com/keras-team/keras/issues/1920. You should change the input to be:

    input = Input(shape=(None, None,3))
    

    Then, at the end, add GlobalAveragePooling2D().

    Try something like this:

    from keras.models import Sequential
    from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, GlobalAveragePooling2D, Dense

    model = Sequential()
    model.add(Conv2D(8, kernel_size=(3, 3),
                     activation='relu',
                     input_shape=(None, None, 3)))  # look at the shape: spatial dims are None
    model.add(Conv2D(16, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    # IMPORTANT !
    model.add(GlobalAveragePooling2D())  # collapses the variable spatial dims into a fixed-size vector
    # IMPORTANT !
    model.add(Flatten())  # redundant after global pooling, but harmless
    model.add(Dense(32, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(2, activation='softmax'))

    model.compile(loss='categorical_crossentropy',  # matches the 2-unit softmax output
                  optimizer='rmsprop',
                  metrics=['accuracy'])
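
    Once compiled, the same model can be run on inputs of different spatial sizes, one batch at a time. A quick sanity check (the random arrays below are just placeholders for real images):

    import numpy as np

    small_batch = np.random.rand(1, 64, 64, 3)    # one 64x64 RGB "image"
    large_batch = np.random.rand(1, 200, 150, 3)  # one 200x150 RGB "image"

    print(model.predict(small_batch).shape)  # (1, 2)
    print(model.predict(large_batch).shape)  # (1, 2)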
    
  • 2020-12-19 08:28

    Unfortunately you can't train a neural network with variable-size images as-is. You have to resize all images to a given size. Fortunately, you don't have to do this permanently on your hard drive; Keras does it for you on the fly.

    Inside your flow_from_directory you should define a target_size like this:

    train_generator = train_datagen.flow_from_directory(
        'data/train',
        target_size=(150, 150),  # every image will be resized to (150, 150) before being fed to the network
        batch_size=32,
        class_mode='binary')
    

    Also, if you do so, you can have whatever batch size you want.

  • 2020-12-19 08:36

    You can train with variable image sizes, as long as you don't try to put images of different sizes in the same numpy array.

    But some layers do not support variable sizes, and Flatten is one of them. It's impossible to train models containing Flatten layers with variable sizes.

    You can try, though, to replace the Flatten layer with either a GlobalMaxPooling2D or a GlobalAveragePooling2D layer. But these layers may condense too much information into a small vector, so it might be necessary to add more convolutions with more channels before them.

    You must make sure that your generator will produce batches containing images of the same size, though. The generator will fail when trying to put two or more images with different sizes in the same numpy array.
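
    Putting both points together, here is a minimal sketch (a toy model and random placeholder batches, not the asker's actual data): Flatten is replaced by GlobalMaxPooling2D, the input shape leaves the spatial dimensions as None, and each train_on_batch call gets images of one size, while different batches use different sizes.

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Conv2D, GlobalMaxPooling2D, Dense
    from keras.utils import to_categorical

    model = Sequential()
    model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(None, None, 3)))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(GlobalMaxPooling2D())  # replaces Flatten; output size no longer depends on the image size
    model.add(Dense(2, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])

    # Each batch contains images of one size; the size may change between batches
    x_small = np.random.rand(8, 64, 64, 3)
    x_large = np.random.rand(8, 120, 90, 3)
    y_small = to_categorical(np.random.randint(0, 2, size=(8,)), num_classes=2)
    y_large = to_categorical(np.random.randint(0, 2, size=(8,)), num_classes=2)

    model.train_on_batch(x_small, y_small)
    model.train_on_batch(x_large, y_large)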
