nvidia

How do I get the NVIDIA core temperature in an integer value?

Submitted by 别说谁变了你拦得住时间么 on 2019-12-18 09:38:09
Question: I am taking an Arduino microcontroller class and I'm working on my final project: an automated computer cooling system that works according to case temperature. I was unable to get my NVIDIA GPU core temperature using the following sources: this MSDN link or this NVIDIA link. How can I get the value of my GPU's temperature? My knowledge of C# is basic, and I couldn't make heads or tails of that manual or the code examples on MSDN. Answer 1: I'm going to go ahead and answer my own question after a long time
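One route that sidesteps the C#/NVAPI manuals entirely is to shell out to nvidia-smi, which ships with the NVIDIA driver and can print the core temperature as a bare integer. A minimal Python sketch (assuming nvidia-smi is on the PATH; the parsing helper is kept separate so it can be exercised without a GPU):

```python
import subprocess

def parse_temp(raw: str) -> int:
    """Turn nvidia-smi's one-line output (e.g. '45') into an integer."""
    return int(raw.strip().splitlines()[0])

def gpu_core_temp() -> int:
    # --format=csv,noheader,nounits leaves just the number, e.g. '45'
    raw = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_temp(raw)
```

The resulting integer can then be forwarded over serial to the Arduino; with several GPUs, nvidia-smi prints one line per device and `parse_temp` takes the first.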

CUDA: Thread ID assignment in 2D grid

Submitted by 柔情痞子 on 2019-12-18 09:27:43
Question: Let's suppose I have a kernel call with a 2D grid, like so: dim3 dimGrid(x, y); // not important what the actual values are dim3 dimBlock(blockSize, blockSize); myKernel <<< dimGrid, dimBlock >>>(); Now I've read that multidimensional grids are merely meant to ease programming - the underlying hardware will only ever use 1D linearly cached memory (unless you use texture memory, but that's not relevant here). My question is: in what order will the threads be assigned to the grid indices during
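For reference, the conventional linearization is row-major at both levels: blockIdx.x varies fastest across blocks, and threadIdx.x varies fastest within a block. A pure-Python model of that formula (a sketch of the usual global-ID computation; note the hardware makes no guarantee about the *execution* order of these IDs):

```python
def global_thread_id(tx, ty, bx, by, block_dim_x, block_dim_y, grid_dim_x):
    """Row-major linear ID: blockIdx.x varies fastest across blocks,
    threadIdx.x varies fastest within a block."""
    block_id = by * grid_dim_x + bx                     # which block
    local_id = ty * block_dim_x + tx                    # thread within block
    return block_id * (block_dim_x * block_dim_y) + local_id
```

With an 8x8 block and a grid 4 blocks wide, thread (0,0) of block (1,0) gets ID 64, i.e. it starts right after the 64 threads of block (0,0).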

TensorFlow in nvidia-docker: failed call to cuInit: CUDA_ERROR_UNKNOWN

Submitted by 假如想象 on 2019-12-18 06:31:26
Question: I have been working on getting an application that relies on TensorFlow to run as a Docker container with nvidia-docker. I compiled my application on top of the tensorflow/tensorflow:latest-gpu-py3 image. I run my Docker container with the following command: sudo nvidia-docker run -d -p 9090:9090 -v /src/weights:/weights myname/myrepo:mylabel When looking at the logs through Portainer I see the following: 2017-05-16 03:41:47.715682: W tensorflow/core/platform/cpu_feature_guard.cc:45]
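cuInit failing with CUDA_ERROR_UNKNOWN inside a container frequently means the NVIDIA device nodes were not mapped in (for example, the container was started with plain docker instead of nvidia-docker, or /dev/nvidia-uvm had not yet been created on the host). A hypothetical diagnostic sketch in Python; the exact set of required nodes is an assumption for illustration:

```python
import glob

# Device nodes CUDA typically needs inside the container (assumed set;
# nvidia-uvm in particular is created lazily on the host).
REQUIRED = ("/dev/nvidiactl", "/dev/nvidia-uvm", "/dev/nvidia0")

def missing_device_nodes(present):
    """Report which of the expected device nodes are not visible."""
    visible = set(present)
    return [p for p in REQUIRED if p not in visible]

def check_container():
    # Run inside the container: lists /dev/nvidia* and reports gaps.
    return missing_device_nodes(glob.glob("/dev/nvidia*"))
```

If `check_container()` reports missing nodes, the fix is on the launch side (use nvidia-docker, or pre-create the nodes with nvidia-modprobe on the host), not inside TensorFlow.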

NV_STEREO_IMAGE_SIGNATURE and DirectX 10/11 (nVidia 3D Vision)

Submitted by 生来就可爱ヽ(ⅴ<●) on 2019-12-18 05:11:13
Question: I'm trying to use SlimDX and DirectX 10 or 11 to control the stereoization process on the nVidia 3D Vision Kit. Thanks to this question I was able to make it work in DirectX 9. However, due to some missing methods I've been unable to make it work under DirectX 10 or 11. The algorithm goes like this: Render the left eye image. Render the right eye image. Create a texture able to contain them both PLUS an extra row (so the texture size would be 2 * width, height + 1). Write this NV_STEREO_IMAGE
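For context, the signature mentioned above is a small packed header (signature, width, height, bits per pixel, flags) written into that extra last row so the driver recognizes the surface as a stereo pair. A hedged Python sketch of the packing; the field order follows NVIDIA's stereo whitepaper as I recall it, so treat the exact layout as an assumption:

```python
import struct

NVSTEREO_IMAGE_SIGNATURE = 0x4433564E  # the bytes b'NV3D' in little-endian

def stereo_signature_row(width, height, bpp=32, flags=0):
    """Pack the 20-byte stereo header that goes into the texture's
    extra scanline (five little-endian 32-bit unsigned fields)."""
    return struct.pack("<5I", NVSTEREO_IMAGE_SIGNATURE,
                       width, height, bpp, flags)
```

The width here is the full side-by-side width (2 * per-eye width); the rest of the extra row is typically zero-filled.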

How to check if pytorch is using the GPU?

Submitted by 纵饮孤独 on 2019-12-17 21:25:41
Question: I would like to know if PyTorch is using my GPU. It's possible to detect GPU activity during the process with nvidia-smi, but I want something written in a Python script. Is there a way to do so? Answer 1: This is going to work: In [1]: import torch In [2]: torch.cuda.current_device() Out[2]: 0 In [3]: torch.cuda.device(0) Out[3]: <torch.cuda.device at 0x7efce0b03be0> In [4]: torch.cuda.device_count() Out[4]: 1 In [5]: torch.cuda.get_device_name(0) Out[5]: 'GeForce GTX
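The usual idiom for acting on that check is torch.device("cuda" if torch.cuda.is_available() else "cpu"). A tiny sketch that factors the decision into a plain function so it can be verified without torch or a GPU:

```python
def pick_device(cuda_available: bool) -> str:
    """Mirrors the common PyTorch idiom:
    torch.device("cuda:0" if torch.cuda.is_available() else "cpu")"""
    return "cuda:0" if cuda_available else "cpu"

# Intended use (requires torch and, for the GPU branch, a CUDA device):
#   import torch
#   device = torch.device(pick_device(torch.cuda.is_available()))
#   model.to(device)
```

Moving the model and tensors to the selected device is what actually puts the GPU to work; the queries in the answer above only report what is available.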

How to perform Hadamard product with CUBLAS on complex numbers?

Submitted by 拈花ヽ惹草 on 2019-12-17 21:23:58
Question: I need to compute the element-wise multiplication of two vectors (Hadamard product) of complex numbers with NVIDIA CUBLAS. Unfortunately, there is no HAD operation in CUBLAS. Apparently you can do this with the SBMV operation, but it is not implemented for complex numbers in CUBLAS. I cannot believe there is no way to achieve this with CUBLAS. Is there any other way to achieve it with CUBLAS for complex numbers? I cannot write my own kernel; I have to use CUBLAS (or another standard
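One library-level workaround is the DGMM family: cublasCdgmm/cublasZdgmm multiply a matrix by a diagonal matrix and, as far as I know, do exist for complex types in newer CUBLAS releases. Treating one vector as an n x 1 matrix and the other as the diagonal yields exactly the Hadamard product. A NumPy model of that identity (a sketch of the math, not the CUBLAS call itself):

```python
import numpy as np

def hadamard_via_dgmm(a, b):
    """Element-wise product expressed as diag(b) @ a, which is what a
    DGMM call computes when `a` is laid out as an n x 1 matrix."""
    return np.diag(b) @ a
```

The identity diag(b) @ a == a * b holds component-wise, so a single cublasCdgmm call replaces the missing complex HAD/SBMV path without a custom kernel.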

Compile cuda code for CPU

Submitted by 99封情书 on 2019-12-17 20:17:55
Question: I'm studying CUDA 5.5 but I don't have an NVIDIA GPU. Older versions of nvcc had a --multicore flag to compile CUDA code for the CPU. In the new version of nvcc, what is the equivalent option? I'm working on Linux. Answer 1: CUDA toolkits since at least CUDA 4.0 have not supported the ability to run CUDA code without a GPU. If you simply want to compile code, refer to this question. If you want to run CUDA code compiled with CUDA 5.5, you will need a CUDA-capable GPU. If you're willing to use older CUDA

How does CUDA assign device IDs to GPUs?

Submitted by 余生颓废 on 2019-12-17 07:30:53
Question: When a computer has multiple CUDA-capable GPUs, each GPU is assigned a device ID. By default, CUDA kernels execute on device ID 0. You can use cudaSetDevice(int device) to select a different device. Let's say I have two GPUs in my machine: a GTX 480 and a GTX 670. How does CUDA decide which GPU is device ID 0 and which is device ID 1? Ideas for how CUDA might assign device IDs (just brainstorming): descending order of compute capability, PCI slot number, date/time when the device was
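The default enumeration heuristic is roughly "fastest first", and it can be switched to physical PCI bus order by setting the CUDA_DEVICE_ORDER=PCI_BUS_ID environment variable. A toy Python model of fastest-first ordering, using compute capability as a stand-in speed metric (an assumption for illustration; the real heuristic weighs more than that):

```python
def fastest_first(devices):
    """Order devices the way CUDA's default enumeration roughly does:
    faster cards receive lower device IDs."""
    return sorted(devices, key=lambda d: d["compute_capability"], reverse=True)

# The two cards from the question, listed here in PCI order:
gpus = [{"name": "GTX 480", "compute_capability": 2.0},
        {"name": "GTX 670", "compute_capability": 3.0}]
```

Under this model the GTX 670 becomes device 0 and the GTX 480 device 1, regardless of which slot each occupies; set CUDA_DEVICE_ORDER=PCI_BUS_ID when you need IDs that match nvidia-smi's slot-based numbering.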

What is the correct version of CUDA for my nvidia driver?

Submitted by 与世无争的帅哥 on 2019-12-17 02:28:49
Question: I am using Ubuntu 14.04 and want to install CUDA, but I don't know which version is right for my laptop. I traced my driver: $cat /proc/driver/nvidia/version NVRM version: NVIDIA UNIX x86_64 Kernel Module 304.125 Mon Dec 1 19:58:28 PST 2014 GCC version: gcc version 4.8.2 (Ubuntu 4.8.2-19ubuntu1) I tried to install CUDA cuda-linux64-rel-7.0.28-19326674, but when I test with the command: ./deviceQuery ./deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking)
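Each CUDA toolkit documents a minimum driver version, and a 304-series driver predates the 346.xx drivers that CUDA 7.0 requires, which is why deviceQuery fails here. A small Python sketch of that lookup; the minimum-driver table is assembled from release notes as I remember them, so treat the exact values as assumptions (driver versions are compared as dotted tuples, since "304.125" is newer than "304.54"):

```python
MIN_DRIVER = {  # CUDA toolkit -> assumed minimum Linux driver version
    "5.0": "304.54",
    "5.5": "319.37",
    "6.0": "331.62",
    "6.5": "340.29",
    "7.0": "346.46",
}

def _parse(version: str):
    """'304.125' -> (304, 125), so point releases compare correctly."""
    return tuple(int(part) for part in version.split("."))

def max_supported_cuda(driver_version: str):
    ok = [cuda for cuda, need in MIN_DRIVER.items()
          if _parse(driver_version) >= _parse(need)]
    return max(ok, key=float) if ok else None
```

For the driver in the question, this picks CUDA 5.0: install that toolkit, or upgrade the driver before retrying the 7.0 installer.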

(CUDA C) Why is it not printing out the value copied from device memory?

Submitted by ☆樱花仙子☆ on 2019-12-14 04:26:57
Question: I'm learning CUDA right now through the training slides provided by NVIDIA. They have a sample program that shows how to add two integers. The code is below: #include <stdio.h> __global__ void add(int *a, int *b, int *c) { *c = *a+*b; } int main(void) { int a, b, c; // Host copies of a, b, c int *d_a, *d_b, *d_c; // Device copies of a, b, c size_t size = sizeof(int); // Allocate space for device copies of a, b, c cudaMalloc((void**)&d_a, size); cudaMalloc((void**)&d_b, size); cudaMalloc