How to pass data bigger than the VRAM size into the GPU?
Question: I am trying to pass more data to my GPU than it has VRAM, which results in the following error:

```
CudaAPIError: Call to cuMemAlloc results in CUDA_ERROR_OUT_OF_MEMORY
```

I created this code to reproduce the problem:

```python
from numba import cuda
import numpy as np

@cuda.jit()
def addingNumbers(big_array, big_array2, save_array):
    i = cuda.grid(1)
    if i < big_array.shape[0]:
        for j in range(big_array.shape[1]):
            save_array[i][j] = big_array[i][j] * big_array2[i][j]

big_array = np.random.random_sample(
```
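The usual way around this error is to stream the data through the GPU in chunks that each fit in VRAM, rather than allocating everything at once. Below is a minimal sketch of that batching pattern; plain NumPy stands in for the device work so the chunking logic runs without a GPU (with Numba you would replace the loop body with `cuda.to_device` on each slice, a kernel launch, and `copy_to_host`). The function and parameter names here are illustrative, not part of Numba's API:

```python
import numpy as np

def multiply_in_chunks(a, b, chunk_rows):
    """Elementwise-multiply two large 2-D arrays in row-chunks small
    enough to fit in device memory. Only one chunk's worth of data
    would ever need to live on the GPU at a time."""
    out = np.empty_like(a)
    for start in range(0, a.shape[0], chunk_rows):
        stop = min(start + chunk_rows, a.shape[0])
        # In the CUDA version, this slice is what you'd copy to the
        # device, run the kernel on, and copy back.
        out[start:stop] = a[start:stop] * b[start:stop]
    return out
```

Picking `chunk_rows` so that roughly three chunks (two inputs plus the output slice) fit in free VRAM keeps each allocation well under the limit.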