multi-layer grayscale input in u-net


Question


I have successfully trained a U-Net for the specific task of cell segmentation using (256, 256, 1) grayscale inputs and (256, 256, 1) binary labels. I used zhixuhao's U-Net implementation in Keras (git repo here). What I am trying to do now is to train the same model using multiple grayscale layers as input.

To make things easier, let's assume I want to use 2 grayscale images, im1 and im2, each of size (256, 256, 1). The label Y is the same for im1 and im2. I want to feed the model an input of size (256, 256, 2), where the first component of the 3rd axis is im1 and the second one is im2.
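To make the intended layout concrete, here is a small numpy sketch with dummy data (the random arrays stand in for the real images):

import numpy as np

im1 = np.random.rand(256, 256, 1)  # first grayscale frame
im2 = np.random.rand(256, 256, 1)  # second grayscale frame

# Intended model input: im1 fills channel 0, im2 fills channel 1.
x = np.empty((256, 256, 2))
x[..., 0] = im1[..., 0]
x[..., 1] = im2[..., 0]
print(x.shape)  # (256, 256, 2)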

To that end, I changed the training data generator to:

def MultipleInputGenerator(train_path, sub_path_1, sub_path_2, batch_size, aug_dict,
            image_folder='images', mask_folder='masks',
            images_color_mode='grayscale', masks_color_mode='grayscale',
            save_to_dir=None, image_save_prefix='image', mask_save_prefix='mask',
            flag_multi_class=False, num_class=2, target_size=(256,256), seed=1):

    # Keras data augmenters (identical settings for images and masks)
    image_datagen = ImageDataGenerator(**aug_dict)
    mask_datagen = ImageDataGenerator(**aug_dict)

    # Multiple input data augmentation
    image_generator_1 = image_datagen.flow_from_directory(
            sub_path_1,
            classes = [image_folder],
            class_mode = None,
            color_mode = images_color_mode,
            target_size = target_size,
            batch_size = batch_size,
            seed = seed)

    image_generator_2 = image_datagen.flow_from_directory(
            sub_path_2,
            classes = [image_folder],
            class_mode = None,
            color_mode = images_color_mode,
            target_size = target_size,
            batch_size = batch_size,
            save_to_dir = save_to_dir,
            save_prefix  = image_save_prefix,
            seed = seed)

    mask_generator = mask_datagen.flow_from_directory(
            train_path,
            classes = [mask_folder],
            class_mode = None,
            color_mode = masks_color_mode,
            target_size = target_size,
            batch_size = batch_size,
            save_to_dir = save_to_dir,
            save_prefix  = mask_save_prefix,
            seed = seed)

    train_generator = zip(image_generator_1, image_generator_2, mask_generator)

    for (img1, img2, mask) in train_generator:
        img1, mask1 = adjustData(img1, mask, flag_multi_class, num_class)
        img2, mask2 = adjustData(img2, mask, flag_multi_class, num_class)
        yield (np.stack((img1, img2), axis=0), mask1)

with adjustData being an auxiliary function which normalises the arrays from [0, 255] to [0, 1].
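For reference, here is a minimal sketch of that helper, assuming only the binary case used here (flag_multi_class=False) matters; the real adjustData in zhixuhao's data.py also has a multi-class branch:

import numpy as np

# Minimal sketch of the binary branch of adjustData (assumption: the
# multi-class branch is not needed since flag_multi_class=False).
def adjustData(img, mask, flag_multi_class, num_class):
    img = img / 255.0       # rescale images from [0, 255] to [0, 1]
    mask = mask / 255.0     # rescale masks the same way
    mask[mask > 0.5] = 1    # binarise the label
    mask[mask <= 0.5] = 0
    return (img, mask)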

As you can see, I've tried to stack the grayscale arrays into a single input. When creating the U-Net model, I changed the input size from (256, 256, 1) to (256, 256, 2):

train_gen = MultipleInputGenerator(train_folder, sub_path_1, sub_path_2, batch_size, aug_dict=data_gen_args)
model = unet(input_size=(256,256,2))
model.fit_generator(train_gen, steps_per_epoch=train_steps, epochs=epochs)

Yet, when launching the command python3 main.py, it starts loading the data correctly but then fails to train the model:

Found 224 images belonging to 1 classes.
Epoch 1/2
Found 224 images belonging to 1 classes.
Found 224 images belonging to 1 classes.
Traceback (most recent call last):
  File "main.py", line 50, in <module>
    model.fit_generator(train_gen, steps_per_epoch=train_steps, epochs=epochs)
  File "*/virtenv/env1/lib/python3.6/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "*/virtenv/env1/lib/python3.6/site-packages/keras/engine/training.py", line 1732, in fit_generator
    initial_epoch=initial_epoch)
  File "*/virtenv/env1/lib/python3.6/site-packages/keras/engine/training_generator.py", line 220, in fit_generator
    reset_metrics=False)
  File "*/virtenv/env1/lib/python3.6/site-packages/keras/engine/training.py", line 1508, in train_on_batch
    class_weight=class_weight)
  File "*/virtenv/env1/lib/python3.6/site-packages/keras/engine/training.py", line 579, in _standardize_user_data
    exception_prefix='input')
  File "*/virtenv/env1/lib/python3.6/site-packages/keras/engine/training_utils.py", line 135, in standardize_input_data
    'with shape ' + str(data_shape))
ValueError: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (2, 32, 256, 256, 1)

with 32 being the batch_size.
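The extra leading axis can be reproduced with plain numpy, outside of Keras (dummy arrays stand in for one augmented batch from each generator):

import numpy as np

img1 = np.zeros((32, 256, 256, 1))  # one batch from image_generator_1
img2 = np.zeros((32, 256, 256, 1))  # one batch from image_generator_2
print(np.stack((img1, img2), axis=0).shape)  # (2, 32, 256, 256, 1): np.stack adds a new axis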

Has anyone already managed to train a U-Net (or any other CNN) with multi-layer input other than RGB images? Or does anyone have an idea of how I could get things working?

Thanks.


Answer 1:


Your expected input shape is (32, 256, 256, 2), whereas the output shape of your generator is (2, 32, 256, 256, 1). This is because np.stack adds an extra dimension beyond those of the input arrays. You can solve this by using np.concatenate instead of np.stack in the last line of your generator's code block, like the following:

yield (np.concatenate((img1, img2), axis=-1), mask1)
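A quick shape check with dummy batches (a standalone snippet, not part of the generator) confirms this yields the 4-D input the model expects:

import numpy as np

img1 = np.zeros((32, 256, 256, 1))
img2 = np.zeros((32, 256, 256, 1))
print(np.concatenate((img1, img2), axis=-1).shape)  # (32, 256, 256, 2)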

Hope it will help.




Answer 2:


As suggested by @bit01, np.stack adds an extra dimension beyond those of the input arrays! To get things working I edited the last line of the MultipleInputGenerator function as below:

img = np.squeeze(np.stack((img1, img2), axis=3), axis=4)
yield (img, mask1)
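Tracing the shapes with dummy (32, 256, 256, 1) batches shows why the stack-then-squeeze combination works:

import numpy as np

img1 = np.zeros((32, 256, 256, 1))
img2 = np.zeros((32, 256, 256, 1))
stacked = np.stack((img1, img2), axis=3)  # (32, 256, 256, 2, 1): new axis inserted at position 3
img = np.squeeze(stacked, axis=4)         # (32, 256, 256, 2): drop the leftover singleton axis
print(img.shape)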

It should work with np.concatenate too, but I didn't try it out.



Source: https://stackoverflow.com/questions/59087054/multi-layer-grayscale-input-in-u-net
