
@zou3519
Created October 19, 2017 00:09
Gist: zou3519/710386c8f021ec8f12e1c71201d064d6
NCCL_DEBUG=INFO nccl 1.3.5
(root) pytorch@pytorch-desktop:~/multigpu-test$
(root) pytorch@pytorch-desktop:~/multigpu-test$ NCCL_DEBUG=INFO python test-simple.py
Checkpoint 1
Checkpoint 2
INFO NCCL debug level set to INFO
NCCL version 1.3.5 compiled with CUDA 9.0
INFO rank 0 using buffSize = 2097152
INFO rank 0 using device 0 (0000:0C:00.0)
INFO rank 1 using buffSize = 2097152
INFO rank 1 using device 1 (0000:0D:00.0)
INFO rank access 0 -> 0 via common device
INFO rank access 0 -> 1 via P2P device mem
INFO rank access 1 -> 0 via P2P device mem
INFO rank access 1 -> 1 via common device
INFO Global device memory space is enabled
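The `test-simple.py` script itself is not included in the gist, so its exact contents are unknown. As a rough reconstruction, the output above ("Checkpoint 1", "Checkpoint 2", then NCCL initialization messages on first collective use) is consistent with a minimal two-GPU script like the hypothetical sketch below, which uses `torch.cuda.nccl.all_reduce`; the tensor sizes, checkpoint placement, and guard logic here are assumptions, not the author's actual code:

```python
# Hypothetical reconstruction of a minimal "test-simple.py"-style script.
# Run with: NCCL_DEBUG=INFO python test-simple.py
# The NCCL INFO lines (buffSize, device ranks, P2P access) are printed
# when the first collective call initializes the NCCL communicators.

def main():
    try:
        import torch
        import torch.cuda.nccl as nccl
    except ImportError:
        print("PyTorch not available; skipping")
        return

    if not torch.cuda.is_available() or torch.cuda.device_count() < 2:
        print("Fewer than 2 CUDA devices; skipping")
        return

    print("Checkpoint 1")
    # One tensor per GPU; NCCL collectives operate on per-device tensor lists.
    tensors = [torch.ones(1024, device="cuda:%d" % i) for i in range(2)]
    print("Checkpoint 2")

    # First NCCL call: communicator init happens here, emitting the INFO
    # lines shown in the log when NCCL_DEBUG=INFO is set.
    nccl.all_reduce(tensors)
    torch.cuda.synchronize()
    print("Element on GPU 0 after all_reduce:", tensors[0][0].item())

main()
```

With two GPUs and P2P access available (as the "via P2P device mem" lines above indicate), NCCL transfers the buffers directly between device memories rather than staging through host memory.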