I want to make a model with multiple inputs, so I tried to build a model like this:

# define two sets of inputs
inputA = Input(shape=(32,64,1))
inputB = Input(shape=(…))  # (snippet truncated in the original post)
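For reference, here is a minimal sketch of a two-input model built with the tf.keras functional API. The second input's shape, the layer sizes, and the merge via concatenate are all assumptions, since the original snippet is cut off:

```python
# Sketch of a two-input model with the tf.keras functional API.
# The second input's shape and all layer sizes are assumptions --
# the original post truncates before defining inputB.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense, concatenate
from tensorflow.keras.models import Model

inputA = Input(shape=(32, 64, 1))   # first input, as in the question
inputB = Input(shape=(32, 64, 1))   # assumed shape for the second input

# one small branch per input
x = Conv2D(8, 3, activation="relu")(inputA)
x = Flatten()(x)
y = Conv2D(8, 3, activation="relu")(inputB)
y = Flatten()(y)

# merge the branches and add a small head
z = concatenate([x, y])
z = Dense(16, activation="relu")(z)
out = Dense(1, activation="sigmoid")(z)

model = Model(inputs=[inputA, inputB], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")

# a multi-input model is fed with a list of arrays, one per input
a = np.zeros((2, 32, 64, 1), dtype=np.float32)
b = np.zeros((2, 32, 64, 1), dtype=np.float32)
pred = model.predict([a, b], verbose=0)
```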
From the shape [800000,32,30,62] it seems your model is putting all the data in one batch.

Try specifying a batch size, like:

history = model.fit([trainimage, train_product_embd], train_label,
                    validation_data=([validimage, valid_product_embd], valid_label),
                    epochs=10, steps_per_epoch=100, validation_steps=10,
                    batch_size=32)

If it still OOMs, try reducing the batch_size.
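Another way to guarantee that only one batch at a time reaches the GPU is to wrap the arrays in a tf.data pipeline. This is a sketch under assumed shapes; the array names mirror the fit call above, but the data here is random placeholder data:

```python
# Sketch: feed the two inputs in batches with tf.data instead of all at once.
# Shapes and array names are assumptions mirroring the answer above.
import numpy as np
import tensorflow as tf

trainimage = np.random.rand(1000, 32, 64, 1).astype(np.float32)
train_product_embd = np.random.rand(1000, 8).astype(np.float32)
train_label = np.random.randint(0, 2, size=(1000, 1)).astype(np.float32)

# batch(32) keeps only 32 samples per step in memory, not all 1000;
# the inputs are grouped in a tuple so the model sees two inputs per sample
dataset = tf.data.Dataset.from_tensor_slices(
    ((trainimage, train_product_embd), train_label)
).batch(32)

# peek at one batch to confirm the shapes the model will receive
(images, embds), labels = next(iter(dataset))
```

With this, `model.fit(dataset, epochs=10)` replaces passing the full arrays directly.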
OOM stands for "out of memory". Your GPU is running out of memory, so it can't allocate memory for this tensor. There are a few things you can do:
- Reduce the number of Dense and Conv2D layers
- Use a smaller batch_size (or increase steps_per_epoch and validation_steps)
- Use MaxPooling2D layers, and increase their pool size
- Use larger strides in your Conv2D layers
- Downscale your images (you can use PIL or cv2 for that)
- Apply lower float precision, namely np.float32, if you accidentally used np.float64
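The last two points are easy to demonstrate with plain numpy. This sketch uses naive 2x subsampling for the downscale (PIL or cv2 give properly resampled results):

```python
# Sketch: float32 uses half the memory of float64, and downscaling
# shrinks images further. Naive 2x subsampling stands in for PIL/cv2.
import numpy as np

imgs64 = np.random.rand(100, 32, 64, 1)   # np.float64 by default
imgs32 = imgs64.astype(np.float32)        # same data, half the memory
small = imgs32[:, ::2, ::2, :]            # naive 2x spatial downscale

mem64 = imgs64.nbytes
mem32 = imgs32.nbytes
```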
There is more useful information about this error:
OOM when allocating tensor with shape[800000,32,30,62]
This is a weird shape. If you're working with images, you should normally have 1 or 3 channels. On top of that, it seems like you are passing your entire dataset at once; you should instead pass it in batches.
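To see why that shape OOMs, count the bytes: a single float32 tensor of shape [800000,32,30,62] needs roughly 190 GB, far more than any GPU has, which is why the data has to be fed in batches. A quick back-of-the-envelope check:

```python
# Memory needed for one float32 tensor of shape [800000, 32, 30, 62]
shape = (800000, 32, 30, 62)

elements = 1
for dim in shape:
    elements *= dim               # total number of float32 values

bytes_needed = elements * 4       # 4 bytes per float32
gib = bytes_needed / 2**30        # convert to GiB
```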
Happened to me as well. You can try reducing the number of trainable parameters with some form of transfer learning: freeze the first few layers and use lower batch sizes.
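A minimal sketch of the freezing idea, using a tiny stand-in model rather than a real pre-trained network (the layer sizes here are arbitrary):

```python
# Sketch: freezing the first layers reduces trainable parameters.
# A tiny Sequential model stands in for a real pre-trained network.
import numpy as np
import tensorflow as tf
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Input(shape=(16,)),
    Dense(64, activation="relu"),
    Dense(64, activation="relu"),
    Dense(1, activation="sigmoid"),
])

# count trainable parameters before freezing
before = sum(int(np.prod(w.shape)) for w in model.trainable_weights)

# freeze the first two layers; only the head stays trainable
for layer in model.layers[:2]:
    layer.trainable = False

after = sum(int(np.prod(w.shape)) for w in model.trainable_weights)
```

Remember to re-compile the model after changing `trainable`, or the change won't take effect during training.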