Theano: Initialisation of device gpu failed! Reason=CNMEM_STATUS_OUT_OF_MEMORY


Question


I am running the Keras example kaggle_otto_nn.py with the Theano backend. When I set cnmem=1, the following error occurs:

cliu@cliu-ubuntu:keras-examples$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=1 python kaggle_otto_nn.py
Using Theano backend.
ERROR (theano.sandbox.cuda): ERROR: Not using GPU. Initialisation of device gpu failed:
initCnmem: cnmemInit call failed! Reason=CNMEM_STATUS_OUT_OF_MEMORY. numdev=1

/usr/local/lib/python2.7/dist-packages/Theano-0.8.0rc1-py2.7.egg/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")
Traceback (most recent call last):
  File "kaggle_otto_nn.py", line 28, in <module>
    from keras.models import Sequential
  File "build/bdist.linux-x86_64/egg/keras/models.py", line 15, in <module>
  File "build/bdist.linux-x86_64/egg/keras/backend/__init__.py", line 46, in <module>
  File "build/bdist.linux-x86_64/egg/keras/backend/theano_backend.py", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/Theano-0.8.0rc1-py2.7.egg/theano/__init__.py", line 111, in <module>
    theano.sandbox.cuda.tests.test_driver.test_nvidia_driver1()
  File "/usr/local/lib/python2.7/dist-packages/Theano-0.8.0rc1-py2.7.egg/theano/sandbox/cuda/tests/test_driver.py", line 38, in test_nvidia_driver1
    if not numpy.allclose(f(), a.sum()):
  File "/usr/local/lib/python2.7/dist-packages/Theano-0.8.0rc1-py2.7.egg/theano/compile/function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/usr/local/lib/python2.7/dist-packages/Theano-0.8.0rc1-py2.7.egg/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/usr/local/lib/python2.7/dist-packages/Theano-0.8.0rc1-py2.7.egg/theano/compile/function_module.py", line 859, in __call__
    outputs = self.fn()
RuntimeError: Cuda error: kernel_reduce_ccontig_node_97496c4d3cf9a06dc4082cc141f918d2_0: out of memory. (grid: 1 x 1; block: 256 x 1 x 1)

Apply node that caused the error: GpuCAReduce{add}{1}(<CudaNdarrayType(float32, vector)>)
Toposort index: 0
Inputs types: [CudaNdarrayType(float32, vector)]
Inputs shapes: [(10000,)]
Inputs strides: [(1,)]
Inputs values: ['not shown']
Outputs clients: [[HostFromGpu(GpuCAReduce{add}{1}.0)]]

HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'.
HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

It seems I cannot set cnmem to a very large value (roughly > 0.9), since that may overflow the GPU's memory, but when I set cnmem=0.9 it works correctly. According to this, it

represents the start size (in MB or % of total GPU memory) of the memory pool.

And

This could cause memory fragmentation. So if you have a memory error while using cnmem, try to allocate more memory at the start or disable it. If you try this, report your result on theano-dev.

But if I get a memory error, why should I allocate more memory at the start? In my case, allocating more memory at the start seems to be what causes the error.
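For reference, the fraction that does work for me (cnmem=0.9) can also be set persistently in ~/.theanorc instead of on the command line. A minimal sketch, assuming the standard Theano config-file layout where a dotted flag like lib.cnmem maps to a [lib] section with a cnmem option:

# ~/.theanorc (assumed layout, equivalent to the THEANO_FLAGS used above)
[global]
device = gpu
floatX = float32

[lib]
cnmem = 0.9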


Answer 1:


This question is solved based on this.

As indicated here, cnmem is only accepted as a float:

0: not enabled.

0 < N <= 1: use this fraction of the total GPU memory (clipped to .95 for driver memory).

> 1: use this number in megabytes (MB) of memory.

So it works if you set cnmem=1.0 instead of cnmem=1.
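Applied to the original command, only the lib.cnmem value changes; for example:

THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,lib.cnmem=1.0 python kaggle_otto_nn.py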



Source: https://stackoverflow.com/questions/36099918/theano-initialisation-of-device-gpu-failed-reason-cnmem-status-out-of-memory
