nvidia

What is the "Device interconnect StreamExecutor with strength 1 edge matrix"?

Submitted by 不问归期 on 2020-05-09 19:35:21

Question: I have four NVIDIA GTX 1080 graphics cards, and when I initialize a session I see the following console output:

Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
   0 1 2 3
0: N Y N N
1: Y N N N
2: N N N Y
3: N N Y N

I also have two NVIDIA Tesla M60 graphics cards, and their initialization looks like:

Adding visible gpu devices: 0, 1, 2, 3
Device interconnect StreamExecutor with strength 1 edge matrix:
   0 1 2 3
0: N N N N
1: N N N N
2: N N N N
3
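Each Y/N entry in this matrix reports whether a pair of GPUs has a direct peer-to-peer path (NVLink or PCIe P2P) enabled; an all-N row only means no direct GPU-to-GPU route was found, not that something is broken. As an independent cross-check, the driver's own topology report can be queried. A minimal sketch in Python, assuming nvidia-smi is on PATH:

# Print the GPU interconnect topology reported by the NVIDIA driver.
# `nvidia-smi topo -m` shows a matrix comparable to TensorFlow's
# "strength 1 edge matrix" (entries like NV#, PIX or PHB instead of Y/N).
import subprocess

def gpu_topology() -> str:
    return subprocess.run(
        ["nvidia-smi", "topo", "-m"],
        capture_output=True, text=True, check=True,
    ).stdout

if __name__ == "__main__":
    print(gpu_topology())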

How to force NVIDIA OpenCL to release GPU context to avoid memory leak

Submitted by 旧街凉风 on 2020-04-18 05:47:49

Question: This is a follow-up to an earlier question. From the discussion, the mmc code (https://github.com/fangq/mmc) appears to be fine, and memory was properly released when running on an Intel CPU and an AMD GPU. However, on an NVIDIA GPU, valgrind reported a significant memory leak, and the test confirmed it: every time a GPU context was created and released, memory consumption kept increasing. You can see this result in the memory (blue line) profiling report below. Here is the test and commands
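One way to reproduce the pattern without the full C host code is to drive a create/release cycle from Python and sample the process's resident memory after each iteration; on a leaking driver the numbers climb steadily. A minimal sketch, assuming pyopencl is installed and an OpenCL GPU device is visible (run on Linux, matching the original valgrind setup):

# Repeatedly create and drop an OpenCL context while watching resident memory.
import gc
import resource            # Unix-only; fine here since the original test ran on Linux
import pyopencl as cl

def max_rss_kb() -> int:
    # Peak resident set size of this process in kilobytes (Linux semantics).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

gpu = cl.get_platforms()[0].get_devices(device_type=cl.device_type.GPU)[0]

for i in range(20):
    ctx = cl.Context(devices=[gpu])
    queue = cl.CommandQueue(ctx)
    # ... build program, launch kernel, read back results ...
    del queue, ctx         # drop the Python references
    gc.collect()           # force release of the underlying cl_context
    print(f"cycle {i:2d}: max RSS = {max_rss_kb()} kB")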

TensorFlow 2.0 can't use GPU, something wrong with cuDNN?: "Failed to get convolution algorithm. This is probably because cuDNN failed to initialize"

Submitted by …衆ロ難τιáo~ on 2020-04-10 06:02:50

Question: I am trying to understand and debug my code. I try to predict with a CNN model developed under TF 2.0 / tf.keras on GPU, but I get the error messages below. Could someone help me fix it? Here is my environment configuration:

python 3.6.8
tensorflow-gpu 2.0.0-rc0
NVIDIA driver 418.x
CUDA 10.0
cuDNN 7.6+

and the log file:

2019-09-28 13:10:59.833892: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcublas.so.10.0
2019-09-28 13:11:00.228025
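In practice this error is often cuDNN failing to allocate workspace memory (because TensorFlow has already reserved the whole GPU) rather than a broken installation, so enabling memory growth is a common first thing to try. A minimal sketch for TF 2.0, placed before the model is built or loaded, using the experimental config API of that release:

# Enable on-demand GPU memory allocation before any model is created.
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
print("Visible GPUs:", gpus)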

PhysX - linking problem with some functions (__imp_PxCreateBasePhysics referenced in function…)

Submitted by 会有一股神秘感。 on 2020-03-24 06:27:08

Question: I'm trying to integrate PhysX into my game engine, but I have some weird problems linking the PhysX library. It always fails no matter what I do, yet the snippets from NVIDIA work like a charm. I will describe what I did, and I hope someone will spot what I'm missing. First of all, I downloaded PhysX 4.1 from GitHub. Then I changed the buildtools settings to these:

<?xml version="1.0" encoding="utf-8"?>
<preset name="vc15win64" comment="VC15 Win64 PhysX general settings">
<platform

How to pass a device function as an input argument to a host-side function?

Submitted by 柔情痞子 on 2020-03-06 04:50:40

Question: I just want to pass a device function as an argument of a host function; of course, the host function can then launch some kernels with this device-side function. I tried the usual C++ way (pass by pointer/reference) and the CUDA debugger told me the kernel cannot launch. Update: What I want to do is:

__host__ void hostfunction(int a, int (*DeviceFunction)(int))
{
    /* ...do something... */
    somekernel<<<blocks, threads>>>(int * in, DeviceFunction);
}

And launch the host with: hostfunction(x,

How do I run nvidia-smi on Windows?

Submitted by 帅比萌擦擦* on 2020-02-28 18:37:18

Question: nvidia-smi executed in a command prompt on Windows returns the following error:

C:\Users>nvidia-smi
'nvidia-smi' is not recognized as an internal or external command, operable program or batch file.

Where is it located? CUDA is installed already.

Answer 1: nvidia-smi is stored by default in the following location: C:\Program Files\NVIDIA Corporation\NVSMI. You can change to that directory and then run nvidia-smi from there. Unlike Linux, it can't be run from the command line from a different path.
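If the NVSMI directory is not on PATH, a script can still invoke the binary by its full default install path. A small sketch, assuming the location quoted in the answer above (adjust it if your driver installed nvidia-smi elsewhere):

# Run nvidia-smi on Windows without modifying PATH.
import subprocess

NVIDIA_SMI = r"C:\Program Files\NVIDIA Corporation\NVSMI\nvidia-smi.exe"

result = subprocess.run([NVIDIA_SMI], capture_output=True, text=True)
print(result.stdout)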

I have an NVIDIA Quadro 2000 graphics card, and I want to install TensorFlow. Will it work?

Submitted by 耗尽温柔 on 2020-02-16 10:36:41

Question: I know the Quadro 2000 has CUDA compute capability 2.1. My PC specs are as follows:

Quadro 2000, 16 GB RAM
Xeon(R) CPU W3520 @ 2.67 GHz
Windows 10 Pro

I want to use TensorFlow for machine learning and deep learning. Let me know a little in depth, as I am a beginner.

Answer 1: Your system is eligible to use TensorFlow, but not with the GPU, because that requires a GPU with compute capability of at least 3.0, and your GPU is only a compute capability 2.1 device. You can read more about it here. If you want to use GPU for
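A quick way to confirm whether an installed TensorFlow build can actually use a given card is to ask it which GPUs it registers; on a device below the minimum compute capability TensorFlow skips the GPU and silently runs on the CPU. A minimal sketch, assuming a TF version (1.14+) where this experimental API exists:

# Check whether TensorFlow registers any usable GPU.
# On a compute capability 2.1 card such as the Quadro 2000, the GPU is
# typically ignored and the list below comes back empty.
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    print("GPUs visible to TensorFlow:", gpus)
else:
    print("No usable GPU found; TensorFlow will run on the CPU.")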

CUDA Error: out of memory - Python interpreter utilizes all GPU memory

Submitted by 北城以北 on 2020-02-04 04:31:07

Question: Even after rebooting the machine, more than 95% of GPU memory is shown as used by python3. Note that the memory consumption persists even when no training scripts are running, and I've never used Keras/TensorFlow in the system environment, only inside a venv or a Docker container. UPDATED: The last activity was the execution of an NN test script with the following configuration:

tensorflow==1.14.0
Keras==2.0.3

tf.autograph.set_verbosity(1)
session_conf = tf.ConfigProto(intra_op_parallelism_threads=8, inter_op
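A TF 1.x session reserves almost all GPU memory by default, which is exactly what nvidia-smi then reports for the python3 process; switching to on-demand allocation keeps the footprint bounded to what the graph actually needs. A minimal sketch in the same TF 1.14 ConfigProto style as the snippet above:

# Make GPU memory allocation on-demand instead of grabbing it all up front.
import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                      # allocate as needed
# config.gpu_options.per_process_gpu_memory_fraction = 0.4  # or hard-cap the share

with tf.Session(config=config) as sess:
    pass  # build and run the graph here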

TensorFlow CUDA - CUPTI error: CUPTI could not be loaded or symbol could not be found

Submitted by 和自甴很熟 on 2020-01-31 07:56:08

Question: I use TensorFlow v1.14.0. I work on Windows 10. Here is how the relevant environment variables look in PATH:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\libnvvp
C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common
C:\Users\sinthes\AppData\Local\Programs\Python\Python37
C:\Users\sinthes\AppData\Local\Programs\Python\Python37\Scripts
C:\Program Files\NVIDIA Corporation\NVIDIA NvDLISR
C:\Program Files\NVIDIA
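The PATH above contains the CUDA bin directory but nothing for CUPTI, which ships separately under the toolkit's extras folder and must be findable before the TensorFlow profiler can load it. A sketch of one workaround, assuming the default CUDA 10.0 install layout on Windows (the exact directory is an assumption; verify it exists on your machine):

# Prepend the CUPTI directory to PATH before importing TensorFlow so the
# profiler can find the CUPTI DLL. Default CUDA 10.0 layout assumed.
import os

CUPTI_DIR = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.0\extras\CUPTI\libx64"
os.environ["PATH"] = CUPTI_DIR + os.pathsep + os.environ.get("PATH", "")

import tensorflow as tf  # imported after the PATH fix on purpose
print(tf.__version__)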