cupy

Cupy get error in multithread.pool if GPU already used

让人想犯罪 · submitted 2019-12-10 11:22:47
Question: I tried to use cupy in two parts of my program, one of them being parallelized with a pool. I managed to reproduce it with a simple example:

    import cupy
    import numpy as np
    from multiprocessing import pool

    def f(x):
        return cupy.asnumpy(2 * cupy.array(x))

    input = np.array([1, 2, 3, 4])
    print(cupy.asnumpy(cupy.array(input)))
    print(np.array(list(map(f, input))))
    p = pool.Pool(4)
    output = p.map(f, input)
    p.close()
    p.join()
    print(output)

The output is the following:

    [1 2 3 4]
    [2 4 6 8]
    Exception in thread Thread-3:
    Traceback (most recent call last):
      File "/usr/lib/python3.6/threading.py", line 916, in …
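The failure pattern matches a known CUDA constraint rather than a cupy bug: a CUDA context initialized in the parent process (here, by the first `cupy.array` call) is not usable in workers created with the default `fork` start method. A minimal sketch of the usual workaround — not taken from the thread, and with helper names (`f`, `run_pool`) of my own choosing, plus a NumPy fallback so it runs without a GPU — is to use the `spawn` context so each worker initializes CUDA itself:

```python
import multiprocessing as mp
import numpy as np

def f(x):
    # Import inside the worker so the CUDA context is created per process;
    # fall back to NumPy when no GPU stack is available (an assumption made
    # here only so the sketch is runnable anywhere).
    try:
        import cupy as xp
        to_host = xp.asnumpy
    except ImportError:
        xp, to_host = np, (lambda a: a)
    return int(to_host(2 * xp.array(x)))

def run_pool():
    inp = np.array([1, 2, 3, 4])
    # "fork" would inherit the parent's already-initialized CUDA state,
    # which is what triggers the exception in the question.
    ctx = mp.get_context("spawn")
    with ctx.Pool(2) as p:
        return p.map(f, [int(x) for x in inp])

if __name__ == "__main__":
    print(run_pool())
```

The trade-off is that `spawn` workers start more slowly than forked ones, since each re-imports the main module and creates its own CUDA context.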

Cupy OutOfMemoryError when trying to cupy.load larger dimension .npy files in memory map mode, but np.load works fine

家住魔仙堡 · submitted 2019-12-08 03:31:16
Question: I'm trying to load some larger .npy files in cupy with memory mapped mode, but I keep running into OutOfMemoryError. I thought that since it's being opened in memory mapped mode, this operation shouldn't take much memory, since a memory map doesn't actually load the whole array into memory. I can load these files with np.load just fine; this only seems to happen with cupy.load. My environment is Google Colab, with the Tesla K80 GPU. It has about 12 gigs of CPU RAM, 12 gigs of GPU RAM, and 350 GB …

How to use CUDA pinned “zero-copy” memory for a memory mapped file?

◇◆丶佛笑我妖孽 · submitted 2019-12-05 02:42:33
Question: Objective/Problem: In Python, I am looking for a fast way to read/write data from a memory mapped file to a GPU. In a previous Stack Overflow post [ Cupy OutOfMemoryError when trying to cupy.load larger dimension .npy files in memory map mode, but np.load works fine ], it is mentioned that this is possible using CUDA pinned "zero-copy" memory. Furthermore, it seems that this method was developed by this person [ cuda - Zero-copy memory, memory-mapped file ], though that person was working in C++. My previous attempts have been with Cupy, but I am open to any cuda methods. What I have tried so far …
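For reference, CuPy does expose page-locked host allocation via `cupy.cuda.alloc_pinned_memory`. The sketch below (the helper name `pinned_empty` is my own, and a plain-NumPy fallback stands in when CuPy is absent) wraps such an allocation in a NumPy array that can serve as a staging buffer between a memory-mapped file and the device. True zero-copy mapping of the file pages themselves (registering the mmap'd pages with the driver) is a separate step not shown here:

```python
import numpy as np

try:
    import cupy

    def pinned_empty(shape, dtype=np.float32):
        # NumPy view over page-locked (pinned) host memory; host-to-device
        # transfers from this buffer can take the fast, async-capable DMA path.
        size = int(np.prod(shape))
        mem = cupy.cuda.alloc_pinned_memory(size * np.dtype(dtype).itemsize)
        return np.frombuffer(mem, dtype, size).reshape(shape)

except ImportError:

    def pinned_empty(shape, dtype=np.float32):
        # Pageable fallback so the sketch also runs without a GPU stack.
        return np.empty(shape, dtype)
```

Typical use would be to copy a slice of the memory-mapped file into this buffer with a plain NumPy assignment, then move it to the device with `cupy.asarray` (or `cupy.ndarray.set`), optionally on a non-default stream to overlap transfer with compute.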

Is it possible to install cupy on google colab?

社会主义新天地 · submitted 2019-12-01 06:39:48
I am trying to run chainer with GPU on Google Colab. This requires cupy to be installed, but I fail to install it properly, as it cannot find the CUDA environment in my Colab VM. The error message is as follows:

    Collecting cupy
      Downloading cupy-2.4.0.tar.gz (1.7MB)
        100% |████████████████████████████████| 1.7MB 740kB/s
    Complete output from command python setup.py egg_info:
    cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
    /tmp/tmpds3ikncy/a.cpp:1:10: fatal error: cublas_v2.h: No such file or directory
     #include <cublas_v2.h>
              ^~~~~~~~~~~~~
    compilation terminated.
    Options: …
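The failure above is the `cupy` sdist trying to compile against CUDA headers it cannot locate. The usual fix (hedged: the exact wheel name depends on the CUDA toolkit version in the Colab image at the time) is to install a prebuilt, CUDA-version-specific wheel instead of building from source:

```shell
# Check which CUDA toolkit the VM provides before picking a wheel.
nvcc --version

# Install the prebuilt wheel matching that toolkit; "cupy-cuda100" is an
# example for CUDA 10.0 -- substitute the name matching your CUDA version.
pip install cupy-cuda100
```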
