I am trying to train a network with Caffe, implementing FCN-8s. My images are 512x640 and the batch size is 1. I'm currently running this on an Amazon EC2 instance.
Caffe can use multiple GPUs, but this is only supported in the C++/command-line interface, not in the Python one. You could also enable cuDNN for a lower memory footprint.
https://github.com/BVLC/caffe/blob/master/docs/multigpu.md
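Per the multi-GPU doc linked above, data-parallel training is driven from the `caffe` command-line tool with the `--gpu` flag; cuDNN is enabled at build time. A sketch (paths and GPU IDs are placeholders for your setup):

```shell
# Build with cuDNN support: uncomment this line in Makefile.config
# before compiling Caffe (requires CUDA + cuDNN installed):
#   USE_CUDNN := 1

# Train across two GPUs (effective batch size is multiplied by GPU count):
./build/tools/caffe train --solver=solver.prototxt --gpu=0,1

# Or use every available GPU:
./build/tools/caffe train --solver=solver.prototxt --gpu=all
```

Note that multi-GPU training won't reduce per-GPU memory use; it helps throughput, not the out-of-memory error itself, whereas cuDNN can reduce the memory footprint.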
The error you get is indeed out of memory, but it's not the RAM, but rather GPU memory (note that the error comes from CUDA).
Usually, when Caffe runs out of memory, the first thing to do is reduce the batch size (at the cost of gradient accuracy), but since you are already at batch size = 1...
Are you sure batch size is 1 for both TRAIN and TEST phases?
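It's worth checking the `batch_size` in both data layers of your train/val prototxt, since the TEST-phase layer is easy to overlook and may keep a larger value. A minimal sketch (layer type, names, and sources are placeholders for whatever your net actually uses):

```
layer {
  name: "data"
  type: "ImageData"
  include { phase: TRAIN }
  image_data_param {
    source: "train.txt"   # placeholder
    batch_size: 1
  }
}
layer {
  name: "data"
  type: "ImageData"
  include { phase: TEST }
  image_data_param {
    source: "val.txt"     # placeholder
    batch_size: 1         # verify this is also 1, not a larger leftover value
  }
}
```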
I was facing a similar issue when running Deeplab v2 on a PC with the following configuration:

- OS: Ubuntu 18.04.3 LTS (64-bit)
- Processor: Intel Core i7-6700K CPU @ 4.00 GHz x 8
- GPU: GeForce GTX 780 (3022 MiB)
- RAM: 31.3 GiB
Changing both the test and training batch sizes to 1 didn't help me, but changing the dimensions of the output image sure did!
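Why does shrinking the image help more than shrinking the batch once it is already 1? A blob's memory grows linearly with each spatial dimension, so halving both height and width cuts it by 4x. A back-of-the-envelope sketch (the 64-channel figure is an illustrative assumption for an early FCN-8s conv layer, not a measured value):

```python
# Rough estimate of the GPU memory one feature map ("blob") takes in Caffe.
# Numbers below are illustrative assumptions, not measured values.

def blob_bytes(n, c, h, w, dtype_bytes=4):
    """Memory for a single N x C x H x W blob of float32 values."""
    return n * c * h * w * dtype_bytes

# A single 512x640 input image (batch size 1, 3 channels):
input_mib = blob_bytes(1, 3, 512, 640) / 2**20

# An early conv layer keeping 64 channels at full resolution
# (backprop stores gradient blobs of the same size on top of this):
conv_mib = blob_bytes(1, 64, 512, 640) / 2**20

# Halving each spatial dimension cuts the blob's memory by 4x:
half_mib = blob_bytes(1, 64, 256, 320) / 2**20

print(input_mib)  # 3.75
print(conv_mib)   # 80.0
print(half_mib)   # 20.0
```

With dozens of such blobs (plus their gradients) alive at once, a 3 GiB card like the GTX 780 fills up quickly, which is why reducing the image dimensions resolved the error here.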