from numba import cuda
import ctypes
import numpy as np
import torch

def devndarray2torch(dev_arr):
    # allocate an empty torch tensor on the GPU with the same shape
    t = torch.empty(size=dev_arr.shape, dtype=torch.float32).cuda()
    ctx = cuda.cudadrv.driver.driver.get_context()
    # constant value of #bytes in case of float32 = 4
    mp = cuda.cudadrv.driver.MemoryPointer(ctx, ctypes.c_ulong(t.data_ptr()), t.numel() * 4)
    # wrap the torch tensor's GPU memory as a numba DeviceNDArray, then copy device-to-device
    tmp_arr = cuda.cudadrv.devicearray.DeviceNDArray(t.size(), [i * 4 for i in t.stride()], np.dtype('float32'), gpu_data=mp)
    tmp_arr.copy_to_device(dev_arr)
    return t
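The hard-coded 4-byte element size above only works for float32. A minimal CPU-side sketch (the helper name `buffer_nbytes` is hypothetical, not part of the gist) shows how the byte count could instead be derived from the dtype, so other element types would be handled correctly:

```python
import numpy as np

def buffer_nbytes(shape, dtype):
    # total bytes = number of elements * bytes per element
    return int(np.prod(shape)) * np.dtype(dtype).itemsize

print(buffer_nbytes((3, 4), "float32"))  # 12 elements * 4 bytes = 48
```

Using `np.dtype(...).itemsize` avoids the float32-only assumption when sizing the `MemoryPointer` and the strides.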
BEFORE INSTALLING cuML, PLEASE MAKE SURE YOU HAVE FOLLOWED THE ABOVE STEPS FOR cuDF. cuDF SHOULD BE WORKING...
Step 1: Install cuml and its dependencies.
!apt install libopenblas-base libomp-dev
!pip install cuml-cuda100
# importing cuml at this point will give a "libcuml.so not found" error #
NOTE: Step 2 is optional and is just for information; you can skip ahead to Step 3 directly to work quickly.
Step 1: Verify that you satisfy all the requirements needed by RAPIDS.
* check the GPU card (>= Pascal architecture)
!nvidia-smi
* check the CUDA version installed (>= 9.2)
!nvcc -V
* check the Python and pip versions (python == 3.6)
!python -V; pip -V
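The same checks can be done from inside a notebook cell without shelling out; a minimal sketch (the Python 3.6 constraint comes from the step above, for the RAPIDS wheels used here):

```python
import sys

# The cuml-cuda100 pip wheels in this guide target Python 3.6 (see the step above);
# this reports what the runtime actually provides and fails fast on Python 2.
major, minor = sys.version_info[:2]
print(f"Running Python {major}.{minor}")
assert major == 3, "RAPIDS requires Python 3"
```

The GPU and CUDA checks still need `!nvidia-smi` and `!nvcc -V`, since those query the driver and toolkit rather than the Python runtime.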