nvidia

Python pyopencl DLL load failed even with latest drivers

旧街凉风 submitted on 2019-12-21 04:14:32

Question: I've installed the latest CUDA toolkit and driver for my GPU. I'm using Python 2.7.10 on Win7 64-bit. I tried installing pyopencl from: (a) the unofficial Windows binaries at http://www.lfd.uci.edu/~gohlke/pythonlibs/#pyopencl, and (b) by compiling my own build after getting the sources from https://pypi.python.org/pypi/pyopencl. The installation was successful in both cases, but I get the same error message once I try to import it: >>> import pyopencl Traceback (most recent call last): File "<stdin>", line 1, in
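On Windows, a "DLL load failed" at import time usually means some library pyopencl depends on cannot be found by the loader — most often the OpenCL runtime itself (OpenCL.dll, shipped with the GPU driver). As a minimal diagnostic sketch using only the standard library (the helper name is my own), one can first check whether an OpenCL runtime is locatable at all:

```python
import ctypes.util

def has_opencl_runtime():
    """Return True if the loader can locate an OpenCL runtime library.

    On Windows this searches PATH for OpenCL.dll (installed by the GPU
    driver); on Linux it looks for libOpenCL.so via the usual mechanisms.
    """
    return ctypes.util.find_library("OpenCL") is not None

print(has_opencl_runtime())
```

If this prints False, the import failure is unlikely to be pyopencl's fault; reinstalling or repairing the GPU driver (which provides the OpenCL ICD) would be the first thing to try.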

HOW TO: Import TensorFlow in Jupyter Notebook from Conda with GPU support?

本小妞迷上赌 submitted on 2019-12-20 10:57:54

Question: I have installed TensorFlow in an Anaconda environment as described on the TensorFlow website, and afterwards my Python installation path changed. dennis@dennis-HP:~$ which python /home/dennis/anaconda2/bin/python Jupyter was also installed. I assumed that if I was able to import and use TensorFlow in the conda environment, I would be able to do the same in Jupyter. But that was not the case. Importing TensorFlow on my system (without activating the environment): dennis@dennis-HP:~$
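A common cause of this mismatch is that the notebook kernel runs a different interpreter than the activated conda environment. A quick standard-library check, run both in the shell's Python and in a notebook cell, shows whether the two point at the same interpreter:

```python
import sys

def interpreter_info():
    """Return the interpreter path and version, to compare shell vs. notebook."""
    return sys.executable, sys.version_info[:2]

path, version = interpreter_info()
print(path)     # e.g. /home/dennis/anaconda2/bin/python when the env is active
print(version)
```

If the paths differ, the usual fix is to install ipykernel inside the environment and register it as a named Jupyter kernel, so the notebook uses the environment's interpreter.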

Opening a fullscreen OpenGL window

ⅰ亾dé卋堺 submitted on 2019-12-20 10:47:22

Question: I am trying to open a fullscreen OpenGL window using GLFW on Red Hat Linux. I have a desktop that spans two monitors with a total resolution of 3840×1080. I have two problems: (1) the window opens on just one monitor, with a maximum window width of 1920 (the width of a single monitor); (2) the maximum height of the window is 1003 (which I think is the height of the screen minus the heights of the task bar and the top bar). This is the code I use to open the window: if (glfwInit() == GL_FALSE)
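A fullscreen GLFW window is tied to a single monitor's video mode, so spanning both monitors generally means creating an undecorated (borderless) window sized to the combined virtual desktop instead. As a small illustration in plain Python (the helper name is my own), the target geometry for side-by-side monitors is simply the sum of widths and the maximum height:

```python
def spanning_size(monitors):
    """Combined size of side-by-side monitors given as (width, height) pairs."""
    width = sum(w for w, _ in monitors)
    height = max(h for _, h in monitors)
    return width, height

# Two 1920x1080 monitors side by side, as in the question:
print(spanning_size([(1920, 1080), (1920, 1080)]))  # -> (3840, 1080)
```

A borderless window of that size, positioned at the virtual desktop's origin, also sidesteps the window manager reserving space for task bars (the 1003-pixel height limit).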

Does GPL code linking with proprietary library depend which is created first? [closed]

老子叫甜甜 submitted on 2019-12-20 10:28:24

Question: Closed. This question is off-topic and is not currently accepting answers. Want to improve this question? Update the question so it's on-topic for Stack Overflow. Closed 4 years ago. Microsoft creates its Windows and MFC DLL libraries, etc. An open-source developer writes a new MFC application and releases the source code under the GPL. The app has to link with the MS DLLs/libraries to run on Windows, but I don't think anyone can argue that we now have the right to force Microsoft to GPL their DLLs.

How to get card specs programmatically in CUDA

戏子无情 submitted on 2019-12-20 10:00:06

Question: I'm just starting out with CUDA. Is there a way of getting the card specs programmatically? Answer 1: You can use the cudaGetDeviceCount and cudaGetDeviceProperties APIs.

void DisplayHeader() {
    const int kb = 1024;
    const int mb = kb * kb;
    wcout << "NBody.GPU" << endl << "=========" << endl << endl;
    wcout << "CUDA version: v" << CUDART_VERSION << endl;
    wcout << "Thrust version: v" << THRUST_MAJOR_VERSION << "." << THRUST_MINOR_VERSION << endl << endl;
    int devCount;
    cudaGetDeviceCount(&devCount);
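The same device query is also reachable from Python without third-party packages by calling the CUDA runtime library through ctypes. This is a hedged sketch, not official API usage guidance: it assumes libcudart is on the loader path and degrades to None when it is not.

```python
import ctypes
import ctypes.util

def cuda_device_count():
    """Return the number of CUDA devices, or None if no runtime is available."""
    name = ctypes.util.find_library("cudart")
    if name is None:
        return None
    try:
        cudart = ctypes.CDLL(name)
    except OSError:
        return None
    count = ctypes.c_int(0)
    # cudaGetDeviceCount(int*) returns a cudaError_t; 0 means cudaSuccess.
    status = cudart.cudaGetDeviceCount(ctypes.byref(count))
    return count.value if status == 0 else None

print(cuda_device_count())
```

For full per-device specs (memory size, compute capability, multiprocessor count), the C++ route in the answer via cudaGetDeviceProperties remains the idiomatic approach.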

Explanation of CUDA C and C++

回眸只為那壹抹淺笑 submitted on 2019-12-20 09:05:15

Question: Can anyone give me a good explanation of the nature of CUDA C and C++? As I understand it, CUDA is supposed to be C with NVIDIA's GPU libraries. As of right now, CUDA C supports some C++ features but not others. What is NVIDIA's plan? Are they going to build upon C and add their own libraries (e.g. Thrust vs. STL) that parallel those of C++? Are they eventually going to support all of C++? Is it bad to use C++ headers in a .cu file? Answer 1: CUDA C is a programming language with C syntax.

How Nvidia NCCL build the GPU topology [closed]

烈酒焚心 submitted on 2019-12-20 07:47:48

Question: Closed. This question needs details or clarity and is not currently accepting answers. Want to improve this question? Add details and clarify the problem by editing this post. Closed 9 days ago. I am reading the NCCL code on Nvidia's GitHub, and it is too hard to understand how the topology is built. Is there any material or paper that can explain this process? Perhaps Nvidia has released a paper before that would also be helpful. Is there any reference paper that explains the function ncclTopoCompute()? Thanks

CUDA - Creating objects in kernel and using them at host [duplicate]

限于喜欢 submitted on 2019-12-20 06:09:10

Question: This question already has an answer here: How to copy the memory allocated in device function back to main memory (1 answer). Closed 3 years ago. I need to use polymorphism in my kernels. The only way of doing this is to create those objects on the device (to make a virtual method table available at the device). Here are the objects being created:

class Production {
    Vertex * boundVertex;
};

class Vertex {
    Vertex * leftChild;
    Vertex * rightChild;
};

Then on the host I do: Production* dProd;

Tobii Eye Tracker

前提是你 submitted on 2019-12-20 04:56:07

Question: We are trying to connect our Tobii Eye Tracker to our Nvidia Jetson TX2 module running Ubuntu 16.04.6 LTS. However, when we try to pip install tobii_research, we keep getting an error saying that no matching distributions were found for it. Has anyone had any success doing this? We are using a virtual environment for Python 3.5 and we are trying to install psychopy, but it keeps failing with error code 1 in /tmp/pip-install-cdg_if0d/psychopy. Do we need psychopy in order
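pip's "no matching distribution found" on a Jetson TX2 is most often an architecture issue: the TX2 is an aarch64 (ARM) board, and packages that publish only x86/x86_64 binary wheels cannot be installed there — tobii_research appears to be such a package, though that is an assumption worth verifying against its PyPI file list. A quick standard-library check of what pip matches wheels against:

```python
import platform
import sys

def wheel_environment():
    """Report the CPU architecture and Python version pip matches wheels against."""
    return platform.machine(), "%d.%d" % sys.version_info[:2]

arch, py = wheel_environment()
print(arch, py)  # e.g. 'aarch64 3.5' on a Jetson TX2 with Python 3.5
```

If the architecture is aarch64 and the package ships no aarch64 wheel or source distribution, no pip flag will help; the options are building from source (if sources exist) or contacting the vendor about ARM support.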

How to enable/disable a specific graphic card?

佐手、 submitted on 2019-12-20 03:02:27

Question: I'm working on a "fujitsu" machine. It has 2 GPUs installed: a Quadro 2000 and a Tesla C2075. The Quadro GPU has 1 GB of RAM and the Tesla GPU has 5 GB (I checked using the output of nvidia-smi -q). When I run nvidia-smi, the output shows 2 GPUs, but the Tesla one's display is shown as Off. I'm running a memory-intensive program and would like to use the 5 GB of RAM available, but whenever I run a program, it seems to be using the Quadro GPU. Is there some way to use a particular GPU out of the 2 in a
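One common way to steer a CUDA program onto a particular GPU is the CUDA_VISIBLE_DEVICES environment variable, set before any CUDA context is created; the index follows nvidia-smi's device ordering. A minimal sketch in Python (standard library only; the index 1 for the Tesla is an assumption — check nvidia-smi on your machine):

```python
import os

def select_gpu(index):
    """Expose only the GPU with the given nvidia-smi index to CUDA programs.

    Must be called before any CUDA library initializes a context in this
    process, otherwise the setting has no effect.
    """
    os.environ["CUDA_VISIBLE_DEVICES"] = str(index)
    return os.environ["CUDA_VISIBLE_DEVICES"]

print(select_gpu(1))  # -> '1'
```

Inside CUDA code itself, cudaSetDevice() with the desired device index achieves the same selection programmatically.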