How to prevent tensorflow from allocating the totality of a GPU memory?

南旧 2020-11-22 04:26

I work in an environment in which computational resources are shared, i.e., we have a few server machines equipped with a few Nvidia Titan X GPUs each.

For small to m

16 Answers
  •  失恋的感觉
    2020-11-22 05:11

    Here is an excerpt from the book Deep Learning with TensorFlow:

    In some cases it is desirable for the process to allocate only a subset of the available memory, or to grow the memory usage only as it is needed by the process. TensorFlow provides two configuration options on the session to control this. The first is the allow_growth option, which attempts to allocate only as much GPU memory as is needed at runtime: it starts out allocating very little memory, and as sessions run and more GPU memory is required, the GPU memory region used by the TensorFlow process is extended.

    1) Allow growth: (more flexible)

    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True
    session = tf.Session(config=config, ...)
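
    tf.Session and ConfigProto were removed in TensorFlow 2.x; as a rough equivalent of allow_growth there, the sketch below (an assumption on my part: it presumes TF 2.x, at least one visible GPU, and that it runs before any GPU work) enables on-demand growth:

    import tensorflow as tf

    # Ask TensorFlow to grow each GPU's allocation on demand
    # instead of reserving all of its memory up front.
    for gpu in tf.config.list_physical_devices('GPU'):
        tf.config.experimental.set_memory_growth(gpu, True)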
    

    The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the total memory that each visible GPU should be allocated. Note: the memory is not released afterwards, since doing so could even worsen memory fragmentation.

    2) Allocate fixed memory:

    To allocate only 40% of the total memory of each GPU:

    config = tf.ConfigProto()
    config.gpu_options.per_process_gpu_memory_fraction = 0.4
    session = tf.Session(config=config, ...)
    

    Note: this is only useful if you truly want to cap the amount of GPU memory available to the TensorFlow process.
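
    TensorFlow 2.x has no per_process_gpu_memory_fraction; the closest equivalent is a hard memory cap on a logical device. A minimal sketch, assuming TF 2.4+ and a roughly 12 GB Titan X (so 40% is about 4915 MB), run before the GPU is first used:

    import tensorflow as tf

    gpus = tf.config.list_physical_devices('GPU')
    if gpus:
        # Cap the first GPU at roughly 4.9 GB of memory.
        tf.config.set_logical_device_configuration(
            gpus[0],
            [tf.config.LogicalDeviceConfiguration(memory_limit=4915)])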
