How to prevent tensorflow from allocating the totality of a GPU memory?

Asked by 南旧 on 2020-11-22 04:26

I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.

For small to moderately sized models, the 12 GB of a Titan X is usually enough for two or three people to train concurrently on the same GPU. The problem is that TensorFlow, by default, allocates the full amount of available GPU memory when it launches, even for a small model. Is there a way to make TensorFlow allocate only, say, 4 GB of GPU memory, if one knows that this is enough for a given model?

16 Answers
  • 2020-11-22 04:54

    This code has worked for me:

    import tensorflow as tf

    # Allocate GPU memory on demand instead of claiming it all at startup
    config = tf.compat.v1.ConfigProto()
    config.gpu_options.allow_growth = True
    session = tf.compat.v1.InteractiveSession(config=config)
    
  • 2020-11-22 04:55
    # Allocate 60% of the GPU memory (TensorFlow 1.x with standalone Keras)
    from keras.backend.tensorflow_backend import set_session
    import tensorflow as tf

    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.6
    set_session(tf.Session(config=config))
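
    As a further sketch (not from the answer itself), the same ConfigProto can combine the fraction cap with allow_growth, so the process claims memory only as needed but never exceeds the 60% ceiling:

    from keras.backend.tensorflow_backend import set_session
    import tensorflow as tf

    config = tf.ConfigProto()
    # Hard ceiling: never use more than 60% of the card's memory
    config.gpu_options.per_process_gpu_memory_fraction = 0.6
    # Claim memory incrementally instead of reserving the cap upfront
    config.gpu_options.allow_growth = True
    set_session(tf.Session(config=config))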
    
  • 2020-11-22 04:55

    All the answers above refer either to capping memory at a certain fraction in TensorFlow 1.X or to allowing memory growth in TensorFlow 2.X.

    The method tf.config.experimental.set_memory_growth does work for allowing dynamic growth during allocation/preprocessing. Nevertheless, you may want to allocate a specific amount of GPU memory from the start.

    The logic behind allocating a specific amount is to prevent OOM errors during training sessions. For example, if you train while other applications (say, video-memory-hungry Chrome tabs) are consuming VRAM, tf.config.experimental.set_memory_growth(gpu, True) can still result in OOM errors being thrown, hence the need, in certain cases, to reserve more memory from the start.

    The recommended and correct way to allot memory per GPU in TensorFlow 2.X is the following:

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
      # Restrict TensorFlow to only allocate 1GB of memory on the first GPU
      try:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
      except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
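
    To confirm the cap took effect, you can list the logical devices TensorFlow created; this reuses the gpus list from the snippet above (a small sanity check in the spirit of the official GPU guide, not part of the original answer):

    # The capped virtual device shows up as a single logical GPU
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "physical GPU(s),", len(logical_gpus), "logical GPU(s)")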
    
  • 2020-11-22 04:56

    All the answers above assume execution with a sess.run() call, which is becoming the exception rather than the rule in recent versions of TensorFlow.

    When using the tf.estimator framework (TensorFlow 1.4 and above), the way to pass the fraction along to the implicitly created MonitoredTrainingSession is:

    opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    conf = tf.ConfigProto(gpu_options=opts)
    trainingConfig = tf.estimator.RunConfig(session_config=conf, ...)
    tf.estimator.Estimator(model_fn=..., 
                           config=trainingConfig)
    

    Similarly, in eager mode (TensorFlow 1.5 and above):

    import tensorflow.contrib.eager as tfe

    opts = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    conf = tf.ConfigProto(gpu_options=opts)
    tfe.enable_eager_execution(config=conf)
    

    Edit (11-04-2018): As an example, if you are using tf.contrib.gan.train, you can pass the config in a similar way, as below:

    tf.contrib.gan.gan_train(........, config=conf)
    
  • 2020-11-22 04:56

    Tensorflow 2.0 Beta and (probably) beyond

    The API changed again. It can now be found in:

    tf.config.experimental.set_memory_growth(
        device,
        enable
    )
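
    A minimal usage sketch (my addition, assuming at least one visible GPU; the call must run before any GPU has been initialized, otherwise it raises a RuntimeError):

    import tensorflow as tf

    # Enable on-demand allocation for every visible GPU
    for gpu in tf.config.experimental.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)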
    

    Aliases:

    • tf.compat.v1.config.experimental.set_memory_growth
    • tf.compat.v2.config.experimental.set_memory_growth

    References:

    • https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/config/experimental/set_memory_growth
    • https://www.tensorflow.org/guide/gpu#limiting_gpu_memory_growth

    See also: Tensorflow - Use a GPU: https://www.tensorflow.org/guide/gpu

    For TensorFlow 2.0 Alpha, see this answer.

  • 2020-11-22 04:57

    You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:

    # Assume that you have 12GB of GPU memory and want to allocate ~4GB:
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
    
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
    

    The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.
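
    If you prefer to think in gigabytes rather than fractions, a small helper (hypothetical, not a TensorFlow API; it assumes you know the card's total memory, as this answer does) makes the conversion explicit:

    import tensorflow as tf

    def fraction_for(desired_gb, total_gb=12.0):
        # Hypothetical convenience: e.g. fraction_for(4) ~= 0.333 on a 12GB Titan X
        return desired_gb / total_gb

    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=fraction_for(4))
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))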
