nvidia

TensorFlow cannot open libcuda.so.1

笑着哭i submitted on 2020-07-17 09:47:19

Question: I have a laptop with a GeForce 940 MX and I want to get TensorFlow up and running on the GPU. I installed everything from their tutorial page; now when I import TensorFlow, I get:

```
>>> import tensorflow as tf
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:128] successfully opened CUDA library libcufft…
```
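When TensorFlow reports it cannot open libcuda.so.1, the usual cause is that the library is not on the dynamic loader's search path. A minimal diagnostic sketch, assuming a Linux install; the directories listed are common but hypothetical examples, and libcuda.so.1 is installed by the NVIDIA *driver*, not by the CUDA toolkit:

```shell
# Look for the driver library in a few common locations (example list, not exhaustive).
for d in /usr/lib/x86_64-linux-gnu /usr/lib64 /usr/local/cuda/lib64/stubs; do
  ls "$d"/libcuda.so* 2>/dev/null
done
# If it turns up outside the linker path, export that directory before launching Python, e.g.:
#   export LD_LIBRARY_PATH="/usr/lib/nvidia-375:$LD_LIBRARY_PATH"
echo "search finished"
```

Note that the `stubs` directory under the toolkit is only meant for linking, not for running; finding libcuda.so there alone suggests the driver itself is missing.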

How to install nvidia apex on Google Colab

时光总嘲笑我的痴心妄想 submitted on 2020-06-27 09:03:27

Question: What I did is follow the instructions on the official GitHub site:

```
!git clone https://github.com/NVIDIA/apex
!cd apex
!pip install -v --no-cache-dir ./
```

It gives me the error:

```
ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
Exception information:
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/pip/_internal/cli/base_command.py", line 178, in main
    status = self.run(options, args)
  File "/usr/local/lib/python3.6/dist-packages…
```
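A likely cause of this particular error: in Colab, each `!` line runs in its own shell, so `!cd apex` does not change the directory for the following `!pip` line, and pip runs in the original directory where there is no setup.py. A sketch of the behavior, with the usual workaround (point pip at the clone directly, or use the `%cd` magic, which does persist across cells):

```shell
# Each "!" line in Colab is a fresh shell; a cd in one is lost by the next.
start_dir=$(pwd)
bash -c "cd /tmp"                         # corresponds to "!cd apex"
[ "$(bash -c pwd)" = "$start_dir" ] && echo "cd did not persist"

# Workaround: skip the cd entirely and give pip the path:
#   !git clone https://github.com/NVIDIA/apex
#   !pip install -v --no-cache-dir ./apex
```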

CMake detects a wrong version of OpenCL

这一生的挚爱 submitted on 2020-06-22 04:23:05

Question: Following this post, where I used these instructions to install NVIDIA's OpenCL SDK. The clinfo tool correctly detects an OpenCL 1.2 version. However, the CMakeLists.txt file below:

```
cmake_minimum_required(VERSION 3.1)
project(OpenCL_Example)

find_package(OpenCL REQUIRED)
include_directories(${OpenCL_INCLUDE_DIRS})
link_directories(${OpenCL_LIBRARY})

add_executable(main main.c)
target_include_directories(main PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})
target_link_libraries(main ${OpenCL_LIBRARY})
```
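When `find_package(OpenCL)` reports a different version than clinfo, it has usually found a different (often older) SDK first on its search path. CMake's FindOpenCL module caches the `OpenCL_INCLUDE_DIR` and `OpenCL_LIBRARY` variables, so one way to steer it is to set both explicitly at configure time. A configuration sketch only; the paths below are hypothetical examples, not the asker's actual install locations:

```shell
# Point FindOpenCL at the intended SDK explicitly, overriding its own search
# (remove the stale CMakeCache.txt first, or the old result is reused).
cmake -S . -B build \
  -DOpenCL_INCLUDE_DIR=/usr/local/cuda/include \
  -DOpenCL_LIBRARY=/usr/local/cuda/lib64/libOpenCL.so
```

The reported version comes from the headers FindOpenCL inspects, which is why a mismatched include directory shows up as a "wrong" OpenCL version.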

How many CUDA cores are used to process a CUDA warp?

霸气de小男生 submitted on 2020-06-17 15:49:47

Question: I've been reading for answers and there are conflicting ideas. According to this link, https://www.3dgep.com/cuda-thread-execution-model/, two warps (64 threads) can run concurrently on an SM (32 CUDA cores). So I understand that the threads of a warp are split up and processed on 16 CUDA cores. This idea makes sense to me because each CUDA core has one 32-bit ALU. However, other links claim that one CUDA core is able to handle 32 concurrent threads (the same as a warp size) (https://cvw.cac.cornell.edu…

Compiling clinfo with NVIDIA's OpenCL SDK leads to error C2061: syntax error: identifier 'cl_device_affinity_domain'

早过忘川 submitted on 2020-06-17 09:45:29

Question: Following this issue, I'm trying to compile the clinfo tool using the MSVC toolchain. I use this CMakeLists.txt file, which successfully finds NVIDIA's OpenCL SDK:

```
Found OpenCL: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v3.2/lib/Win32/OpenCL.lib (found version "1.1")
```

However, when compiling with cmake --build . I get many errors, the first of which is:

```
c:\path\to\clinfo\src\info_ret.h(43): error C2061: syntax error: identifier 'cl_device_affinity_domain' [C:\path\to\clinfo\build…
```
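The error is consistent with a header-version mismatch: `cl_device_affinity_domain` was introduced in OpenCL 1.2 (device fission), while the SDK found above reports version "1.1", so its headers predate the type that clinfo's sources use. A quick check of which version a set of headers declares, sketched below; the path is the one from the question's output and is only illustrative:

```shell
# OpenCL headers define CL_VERSION_1_2 from 1.2 onward; its absence means
# the headers are too old for code that uses cl_device_affinity_domain.
grep -r "CL_VERSION_1_2" "C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v3.2/include" \
  || echo "headers predate OpenCL 1.2"
```

The usual remedy is to build clinfo against newer OpenCL headers (the library can stay at 1.1, since linking is by symbol, but the declarations must be present at compile time).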

nvidia-smi does not display memory usage [closed]

蓝咒 submitted on 2020-05-11 05:41:28

Question: (Closed. This question does not meet Stack Overflow guidelines and is not currently accepting answers. Closed 2 years ago.) I want to use nvidia-smi to monitor my GPU for my machine-learning/AI projects. However, when I run nvidia-smi in my cmd, Git Bash, or PowerShell, I get the following results:

```
$ nvidia-smi
Sun May 28 13:25:46 2017
+-----------------------------------------------------------------------…
```
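Where the default nvidia-smi table omits or blanks out memory figures, the tool's documented query interface can request them directly. A sketch, guarded so it degrades cleanly on machines without the tool; note that on Windows, GPUs running in WDDM driver mode limit what nvidia-smi can report, which is a common cause of missing fields:

```shell
# Query per-GPU memory via nvidia-smi's CSV query interface (documented flags).
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv
else
  echo "nvidia-smi not on PATH"
fi
```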

Installing cuda via brew and dmg

∥☆過路亽.° submitted on 2020-05-11 05:24:06

Question: After attempting to install the NVIDIA toolkit on a Mac by following this guide: http://docs.nvidia.com/cuda/cuda-installation-guide-mac-os-x/index.html#axzz4FPTBCf7X I received the error "Package manifest parsing error", which led me to this: NVidia CUDA toolkit 7.5.27 failing to install on OS X. I unmounted the dmg, and the upshot was that instead of receiving "Package manifest parsing error", the installer would not launch (it seemed to launch briefly, then quit). Installing via the command brew install…

what is Device interconnect StreamExecutor with strength 1 edge matrix

我的未来我决定 submitted on 2020-05-09 19:36:36

Question: I have four NVIDIA GTX 1080 graphics cards, and when I'm initializing a session I see the following console output:

```
Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
     0 1 2 3
0:   N Y N N
1:   Y N N N
2:   N N N Y
3:   N N Y N
```

I also have two NVIDIA M60 Tesla graphics cards, and there the initialization looks like:

```
Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
     0 1 2 3
0:   N N N N
1:   N N N N
2:   N N N N
3…
```