Question
I am trying to train a parallel-path CNN whose branches are concatenated and followed by dense layers. I have named the first path model1, the second path model2, and the combined model containing both parallel paths model. The model compiles and model.summary() works. Now I need to train the model, so I pass the input to the CNN through model.fit_generator. I am using Keras 2.1.6.
```
base_model1 = model.fit_generator(
    ["train_generator", "train_generator"],
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
```
The error message I receive is:
```
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
Epoch 1/2
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-14-ba6c6e678605> in <module>()
      3     epochs = epochs,
      4     validation_data = validation_generator,
----> 5     validation_steps = nb_validation_samples // batch_size)

4 frames
/usr/local/lib/python3.6/dist-packages/keras/utils/data_utils.py in _data_generator_task(self)
    656     # => Serialize calls to
    657     # infinite iterator/generator's next() function
--> 658     generator_output = next(self._generator)
    659     self.queue.put((True, generator_output))
    660     else:

TypeError: 'list' object is not an iterator
```
And my model is:
```
from keras.models import Sequential, Model
from keras.layers import Dense
from keras import layers

model1 = Sequential()
# Conv Layer 1
model1.add(layers.SeparableConv2D(32, (9, 9), activation='relu', input_shape=input_shape))
model1.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# Conv Layer 2
model1.add(layers.SeparableConv2D(64, (9, 9), activation='relu'))
model1.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# Conv Layer 3
model1.add(layers.SeparableConv2D(128, (9, 9), activation='relu'))
model1.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# model.add(layers.SeparableConv2D(256, (9, 9), activation='relu'))
# model.add(layers.MaxPooling2D(2, 2))
# Flatten the data for upcoming dense layer
model1.add(layers.Flatten())
model1.add(layers.Dropout(0.5))
model1.add(layers.Dense(512, activation='relu'))
#model1.add(layers.Dense(output_classes, activation='relu'))
#model1.build(input_shape = (input_shape)
model2 = Sequential()
# Conv Layer 1
model2.add(layers.SeparableConv2D(32, (9, 9), activation='relu', input_shape=input_shape))
model2.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# Conv Layer 2
model2.add(layers.SeparableConv2D(64, (9, 9), activation='relu'))
model2.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# Conv Layer 3
model2.add(layers.SeparableConv2D(128, (9, 9), activation='relu'))
model2.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))
# model.add(layers.SeparableConv2D(256, (9, 9), activation='relu'))
# model.add(layers.MaxPooling2D(2, 2))
# Flatten the data for upcoming dense layer
model2.add(layers.Flatten())
model2.add(layers.Dropout(0.5))
model2.add(layers.Dense(512, activation='relu'))
#model2.add(layers.Dense(output_classes, activation='relu'))
from keras.layers import concatenate
model = Sequential()
model_concat = concatenate([model1.output, model2.output], axis=-1)
model_concat = Dense(128, activation='relu')(model_concat)
model_concat = Dense(7, activation='softmax')(model_concat)
model = Model(inputs=[model1.input, model2.input], outputs=model_concat)
print(model.summary())
```
My generator code is:
```
train_generator = train_datagen.flow_from_directory(
    TRAIN_FOLDER,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    seed=random_seed,
    shuffle=False,
    subset='training',
    class_mode='categorical')

validation_generator = train_datagen.flow_from_directory(
    TRAIN_FOLDER,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    seed=random_seed,
    shuffle=False,
    subset='validation',
    class_mode='categorical')

test_datagen = ImageDataGenerator(rescale=1. / 255)
test_generator = test_datagen.flow_from_directory(
    TEST_FOLDER,
    target_size=(img_height, img_width),
    batch_size=batch_size,
    seed=random_seed,
    shuffle=False,
    class_mode='categorical')
```
Answer 1:
You are getting the error because your combined model expects two inputs every time, but your data generator returns only one input. Passing the generators as a list does not work either. I have created an equivalent of your model that expects a single input and works with your data generator:
```
from keras import layers
from keras.models import Model

input_shape = (128, 128, 3)  # change this accordingly
my_input = layers.Input(shape=input_shape)  # one input
```
I noticed that both of your parallel branches use the same architecture, so I created a function that builds one branch each time it is called, and we will call it twice (to get the two parallel branches):
```
def parallel_layers(my_input, parallel_id=1):
    x = layers.SeparableConv2D(32, (9, 9), activation='relu',
                               name='conv_1_' + str(parallel_id))(my_input)
    x = layers.MaxPooling2D(2, 2)(x)
    x = layers.SeparableConv2D(64, (9, 9), activation='relu',
                               name='conv_2_' + str(parallel_id))(x)
    x = layers.MaxPooling2D(2, 2)(x)
    x = layers.SeparableConv2D(128, (9, 9), activation='relu',
                               name='conv_3_' + str(parallel_id))(x)
    x = layers.MaxPooling2D(2, 2)(x)
    x = layers.Flatten()(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(512, activation='relu')(x)
    return x
```
Let us now call the parallel layers function on our input twice:
```
parallel1 = parallel_layers(my_input, 1)
parallel2 = parallel_layers(my_input, 2)
```
We will now concatenate their outputs and create the final model:
```
concat = layers.Concatenate()([parallel1, parallel2])
x = layers.Dense(128, activation='relu')(concat)
x = layers.Dense(7, activation='softmax')(x)
final_model = Model(inputs=my_input, outputs=x)
final_model.summary()
```
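Note that the rebuilt model still has to be compiled before training. A minimal sketch follows; the loss matches the 7-class softmax head and the categorical labels from your generators, while the optimizer choice is only an assumption, not something from your original code:
```
# Compile before calling fit_generator; categorical_crossentropy matches
# the 7-class softmax output and class_mode='categorical' generators.
# The optimizer here is an example choice, not from the original post.
final_model.compile(optimizer='adam',
                    loss='categorical_crossentropy',
                    metrics=['accuracy'])
```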
This model would (hopefully) work with your data generator as follows:
```
final_model.fit_generator(
    train_generator,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=nb_validation_samples // batch_size)
```
Answer 2:
You must change this line:
```
base_model1 = model.fit_generator(["train_generator","train_generator"] ...
```
to:
```
base_model1 = model.fit_generator([train_generator,train_generator]...
```
The error occurs because you are passing a list of strings instead of your training data; the generator machinery cannot iterate over a list.
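If fit_generator still raises the same TypeError with a plain list (it expects a single generator object rather than a list of generators), one common workaround is a small wrapper generator that yields each batch twice as the two inputs your concatenated model expects. This is only a sketch under that assumption; the dual_input_generator helper is a hypothetical name, and it reuses the train_generator and validation_generator defined in the question:
```
def dual_input_generator(gen):
    # Yield ([x, x], y) so the two-input model receives the same batch
    # on both branches, with the original labels unchanged.
    while True:
        x, y = next(gen)
        yield [x, x], y

model.fit_generator(
    dual_input_generator(train_generator),
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=epochs,
    validation_data=dual_input_generator(validation_generator),
    validation_steps=nb_validation_samples // batch_size)
```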
Source: https://stackoverflow.com/questions/61406988/model-fit-generator-for-dual-path-cnn