Question:
This code combines an image and a mask for image detection. How can I correct the following error?
batch_size = x.shape[0]
AttributeError: 'tuple' object has no attribute 'shape'
This is the code used for training:
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

train_datagen_1 = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(200, 150),
    batch_size=1)

train_generator_1 = train_datagen_1.flow_from_directory(
    train_data_dir_1,
    target_size=(200, 150),
    batch_size=1)

train_generator_2 = zip(train_generator, train_generator_1)

model.fit_generator(
    train_generator_2,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=50)
This is the model I'm using:
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(200, 150, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(20))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(90000))
model.add(Activation('sigmoid'))
model.compile(loss='mse', optimizer='rmsprop', metrics=['accuracy'])
Answer 1:
So, since your model has only one output, you cannot join two generators like that.
- A generator must yield tuples of the form (input, output).
- Yours is yielding ((input1, output1), (input2, output2)): tuples inside a tuple.
When your model gets a batch from the generator, it tries to take the shape of what it thinks is the input, but it finds (input, output) instead, which is exactly the "'tuple' object has no attribute 'shape'" error you are seeing.
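A quick way to see this, assuming the two flow_from_directory generators from the question are already defined:

# Inspect one element of the zipped generator to see why Keras fails.
batch = next(zip(train_generator, train_generator_1))
print(type(batch[0]))      # <class 'tuple'> -- Keras expected an array here
print(batch[0][0].shape)   # the actual image array is nested one level deeper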
Making the generator work:
You can probably create your own generator like this:
def myGenerator(train_generator, train_generator1):
    while True:
        xy = next(train_generator)      # (images, labels) batch from the image generator
        xy1 = next(train_generator1)    # (images, labels) batch from the mask generator
        yield (xy[0], xy1[0])           # keep only the arrays: (input images, target images)
Instantiate it with:
train_generator2 = myGenerator(train_generator, train_generator1)
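It can then be passed to fit_generator exactly as before (assuming nb_train_samples and batch_size are defined as in the question):

model.fit_generator(
    train_generator2,
    steps_per_epoch=nb_train_samples // batch_size,
    epochs=50)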
Now, you're going to have real trouble with the output shapes. If you're working from image to image, I recommend you work with a purely convolutional model.
A convolutional layer outputs (Batch, Side1, Side2, channels), which is the shape you are working with in your images.
But a dense layer outputs (Batch, size). This can only work if you later add Reshape((200, 150, 3)) to match your "true images".
Hint: a Dense(20) in the middle of the model may be too small to represent an entire image (but of course it depends on your task).
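For completeness, a minimal sketch of the Dense-based route, assuming the targets really are 200 x 150 x 3 images (200 * 150 * 3 = 90000, which is why the final Dense has 90000 units):

from keras.models import Sequential
from keras.layers import Conv2D, Activation, MaxPooling2D, Flatten, Dense, Dropout, Reshape

# The original model with a Reshape added at the end so the output
# matches the (200, 150, 3) target images.
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=(200, 150, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(20))                   # possibly too small a bottleneck for a whole image
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(90000))
model.add(Activation('sigmoid'))
model.add(Reshape((200, 150, 3)))      # turn the flat vector back into an image
model.compile(loss='mse', optimizer='rmsprop', metrics=['accuracy'])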
A possible model for this task is:

Conv
... maybe more convs
MaxPooling
Conv
... maybe more convs
MaxPooling
Conv
...
UpSampling
Conv
...
UpSampling
Conv
...
Use padding='same' in every convolution to make your life easier. (But since one of your dimensions is 150, you will have to manage the padding at some point: once you reach 75, MaxPooling will drop a pixel, because 75 cannot be divided by two.)
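A rough sketch of such a fully convolutional model; the filter counts and number of blocks are placeholders, and the ZeroPadding2D / Cropping2D pair is only one possible way to deal with the odd 75-pixel side:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, UpSampling2D, ZeroPadding2D, Cropping2D

model = Sequential()
model.add(Conv2D(32, (3, 3), padding='same', activation='relu', input_shape=(200, 150, 3)))
model.add(MaxPooling2D((2, 2)))                      # 100 x 75
model.add(ZeroPadding2D(((0, 0), (0, 1))))           # 100 x 76, so the next pooling divides evenly
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(MaxPooling2D((2, 2)))                      # 50 x 38
model.add(Conv2D(64, (3, 3), padding='same', activation='relu'))
model.add(UpSampling2D((2, 2)))                      # 100 x 76
model.add(Cropping2D(((0, 0), (0, 1))))              # back to 100 x 75
model.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
model.add(UpSampling2D((2, 2)))                      # 200 x 150
model.add(Conv2D(3, (3, 3), padding='same', activation='sigmoid'))  # 3-channel image output
model.compile(loss='mse', optimizer='rmsprop')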
Answer 2:
The selected answer is inaccurate. The code is not failing because the batches are of the form ((input1, output1), (input2, output2), ...), but because they are of the form (((input1, class1), (input2, class2), ...), ((output1, class1), (output2, class2), ...)): flow_from_directory returns (image, class) pairs by default.
You could have fixed your problem by simply adding class_mode=None to your flow_from_directory calls.
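A minimal sketch of the corrected setup, reusing the names from the question (the masks are assumed to live in train_data_dir_1):

train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size=(200, 150),
    batch_size=1,
    class_mode=None)        # yield only image batches, no class labels

train_generator_1 = train_datagen_1.flow_from_directory(
    train_data_dir_1,
    target_size=(200, 150),
    batch_size=1,
    class_mode=None)        # yield only mask batches, no class labels

# Each element of the zipped generator is now (image_batch, mask_batch),
# which is the (input, target) tuple fit_generator expects.
train_generator_2 = zip(train_generator, train_generator_1)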