Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory

盖世英雄少女心 2020-12-08 04:45

Using ResNet50 pre-trained weights, I am trying to build a classifier. The code base is fully implemented in the Keras high-level TensorFlow API. The complete code is posted in t…
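
For reference, a minimal sketch of the kind of setup described (not the poster's original code), assuming the standard tf.keras applications API; NUM_CLASSES and the 224x224 input size are placeholders:

    import tensorflow as tf

    NUM_CLASSES = 10  # placeholder; set to your dataset's number of classes

    # Load ResNet50 with ImageNet weights and without its classification head
    base_model = tf.keras.applications.ResNet50(
        weights='imagenet', include_top=False, input_shape=(224, 224, 3))
    base_model.trainable = False  # freeze the pre-trained backbone

    # Attach a small classification head on top of the frozen backbone
    model = tf.keras.Sequential([
        base_model,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='categorical_crossentropy',
                  metrics=['accuracy'])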

6 Answers
  • 2020-12-08 05:36

    I was running a small model on a CPU and had the same issue. Adding os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' resolved it.

  • 2020-12-08 05:37

    Alternatively, you can set the environment variable TF_CPP_MIN_LOG_LEVEL=2 to filter out info and warning messages. I found that on this GitHub issue, where they complain about the same output. To do so within Python, you can use the solution from here:

    import os
    # Set the level BEFORE importing TensorFlow so the C++ logger picks it up
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

    import tensorflow as tf
    

    You can even turn this on and off at will. I test for the maximum possible batch size before running my code, and I can disable the warnings and errors while doing so.
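
    A minimal sketch of that kind of batch-size probe, assuming a compiled Keras model `model` and training arrays `x`/`y` already exist (the helper name is hypothetical):

    import os
    os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'  # quiet the C++ logs; set before importing TF

    import tensorflow as tf

    def find_max_batch_size(model, x, y, start=256):
        """Halve the batch size until a single training epoch fits in memory."""
        batch_size = start
        while batch_size >= 1:
            try:
                model.fit(x, y, batch_size=batch_size, epochs=1, verbose=0)
                return batch_size
            except (tf.errors.ResourceExhaustedError, MemoryError):
                batch_size //= 2
        return None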

  • 2020-12-08 05:40

    I was getting the same error, and I tried setting the os.environ flag... but it didn't work out.

    Then I reduced my batch size from 16 to 8, and it started working fine after that. Since the training method takes the batch size into account, I feel reducing the image size would also work, as mentioned above.
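
    For reference, this is where the batch size is usually passed when training with Keras (a minimal sketch; `model`, `x_train`, and `y_train` are placeholder names assumed to exist):

    history = model.fit(
        x_train, y_train,
        batch_size=8,   # reduced from 16 so fewer images are held in memory per step
        epochs=10)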

  • 2020-12-08 05:41

    Try reducing the batch_size attribute to a small number (like 1, 2, or 3). Example:

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    IMG_SIZE = 224  # placeholder; set to your input image size
    data_generator = ImageDataGenerator()  # optionally add rescaling/augmentation here

    train_generator = data_generator.flow_from_directory(
        'path_to_the_training_set',
        target_size=(IMG_SIZE, IMG_SIZE),
        batch_size=2,            # small batch size to keep memory use low
        class_mode='categorical')
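
    If helpful, the resulting generator can then be passed straight to training (assuming a compiled Keras model named `model`):

    model.fit(train_generator, epochs=10)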
    
  • 2020-12-08 05:43

    I was having the same problem while running a TensorFlow container with Docker and Jupyter Notebook. I was able to fix it by increasing the container's memory.

    On macOS, you can easily do this from:

           Docker Icon > Preferences >  Advanced > Memory
    

    Drag the slider to the maximum (e.g. 4 GB), apply, and Docker will restart its engine.

    Now run your TensorFlow container again.

    It was handy to run the docker stats command in a separate terminal. It shows the container's memory usage in real time, so you can watch how much consumption grows:

    CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT     MEM %    NET I/O             BLOCK I/O           PIDS
    3170c0b402cc   mytf   0.04%   588.6MiB / 3.855GiB   14.91%   13.1MB / 3.06MB     214MB / 3.13MB      21
    
  • 2020-12-08 05:47

    I was having the same problem, and I concluded that two factors matter when you see this error:

    1. batch_size ==> it determines how much data is processed per training step
    2. image_size ==> larger image dimensions mean more data to process

    Because of these two factors, the RAM cannot hold all of the required data.

    To solve the problem I tried two changes: first, reducing batch_size from 32 to 3 or 2; second, reducing image_size from (608, 608) to (416, 416).
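
    As a rough back-of-the-envelope check of why both knobs matter, here is a small sketch; it only counts the raw float32 input tensors and ignores activations, gradients, and framework overhead:

    # Raw input memory per batch for float32 RGB images.
    def batch_input_bytes(batch_size, height, width, channels=3, bytes_per_value=4):
        return batch_size * height * width * channels * bytes_per_value

    print(batch_input_bytes(32, 608, 608) / 1e6)  # ~142 MB of raw input per batch
    print(batch_input_bytes(2, 416, 416) / 1e6)   # ~4 MB after both reductions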
