torch.cuda.is_available returns False with nvidia-smi not working

淺唱寂寞╮ submitted on 2021-02-05 10:43:33

Question


I'm trying to build a Docker image that can run using GPUs. This is my situation:

I have Python 3.6 and I am starting from the image nvidia/cuda:10.0-cudnn7-devel. Torch does not see my GPUs.

nvidia-smi is not working either, returning this error:

> Failed to initialize NVML: Unknown Error
> The command '/bin/sh -c nvidia-smi' returned a non-zero code: 255

I installed the NVIDIA toolkit and nvidia-smi with:

 RUN apt install nvidia-cuda-toolkit -y
 RUN apt-get install nvidia-utils-410 -y

Answer 1:


I figured out that the problem is you can't use nvidia-smi during the build (RUN nvidia-smi). Any check related to the availability of the GPUs during the build won't work, because GPUs are only attached to the container at run time.

Using CMD /bin/bash and typing the command python3 -c 'import torch; print(torch.cuda.is_available())' inside the running container, I finally get True. I also removed

RUN apt install nvidia-cuda-toolkit -y
RUN apt-get install nvidia-utils-410 -y

as suggested by @RobertCrovella.
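To make the fix concrete, here is a minimal sketch of a Dockerfile following this approach. The PyTorch install line is an assumption (pick the wheel matching your CUDA version); the key points are dropping the apt driver packages and not calling nvidia-smi at build time:

```dockerfile
# Start from the CUDA image mentioned in the question
FROM nvidia/cuda:10.0-cudnn7-devel

# Install Python; no nvidia-cuda-toolkit or nvidia-utils-* packages needed:
# the driver libraries and nvidia-smi are injected by the NVIDIA container
# runtime when the container starts
RUN apt-get update && apt-get install -y python3 python3-pip

# Hypothetical PyTorch install; adjust to a build compatible with CUDA 10.0
RUN pip3 install torch

# Do NOT add `RUN nvidia-smi` here: GPUs are not visible during `docker build`
CMD ["/bin/bash"]
```

The container must then be started with GPU access, e.g. `docker run --gpus all -it <image>`; only then should `nvidia-smi` and `torch.cuda.is_available()` succeed inside the container.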



Source: https://stackoverflow.com/questions/63325112/torch-cuda-is-avaiable-returns-false-with-nvidia-smi-not-working
