For a version without the collapsible details sections (so you can search the whole thing in your browser), click here.
```python
# Minimal TCP server using fastcore's socket helpers: accept connections in a
# loop and print whatever each client sends.
from fastcore.utils import *

host = 8888, 'localhost'            # (port, host) pair passed to start_server
sock = start_server(*host)
print(f'Serving on {host}...')
while True:
    conn, addr = sock.accept()      # block until a client connects
    with conn:
        data = conn.recv(1024)      # read up to 1 KiB from the client
        print(data.decode())
```
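For a quick manual check (my addition, not part of the gist), a plain stdlib client can send a line to the server above:

```python
# Hypothetical client for the server above: connect to localhost:8888 and
# send a single message, which the server prints.
import socket

with socket.create_connection(('localhost', 8888)) as c:
    c.sendall(b'hello from a client\n')
```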
```bash
#!/usr/bin/env bash
set -e
echo

# The installer must run as root
if ! [[ $(id -u) = 0 ]]; then
  echo "Please run 'sudo ./install-wireguard.sh'" >&2
  exit 1
fi

# Ask whether to route *all* traffic through the VPN (default: no)
read -e -p "Use VPN for *all* internet traffic? [y/n] " -i n ROUTE_ALL
```
Using this software requires agreeing to the NVIDIA cuDNN license agreement.
cuDNN | CUDA9.0 | CUDA9.2 | CUDA10.0 | CUDA10.1 | CUDA10.2 | CUDA11.0 |
---|---|---|---|---|---|---|
8.0.2 | - | - | - | 10.1-8.0.2 (991.32MB) | 10.2-8.0.2 (1017.23MB) | 11.0-8.0.2 (1.32GB) |
7.6.1 | 9.0-7.6.1 (392.82MB) | 9.2-7.6.1 (396.89MB) | [10.0-7.6.1](https://developer.download.nvidia.com/compute/redist/cudnn/v7.6.1/cudnn-10.0-linux-x64-v7.6.1.34.tg |
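Separate from the download links themselves, a quick convenience check (my addition) of which CUDA and cuDNN versions an installed PyTorch build was compiled against:

```python
# Report the CUDA / cuDNN versions the installed PyTorch build uses.
import torch

print('CUDA  :', torch.version.cuda)              # e.g. '10.2'
print('cuDNN :', torch.backends.cudnn.version())  # e.g. 7605 or 8002
print('GPU available:', torch.cuda.is_available())
```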
- A short note on how to start multi-node training on a SLURM scheduler with PyTorch.
- Especially useful when the scheduler is so busy that you cannot get multiple GPUs allocated on one node, or when you need more than 4 GPUs for a single job.
- Requirement: you have to use PyTorch DistributedDataParallel (DDP) for this (see the sketch after this list).
- Warning: you might need to refactor your own code.
- Warning: your colleagues might secretly resent you for using too many GPUs.
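As a rough illustration (a minimal sketch of one common pattern, not the full recipe from this note), DDP can be initialized from the environment variables SLURM sets for each task; `MASTER_ADDR` and `MASTER_PORT` are assumed to be exported by the sbatch script:

```python
# Minimal sketch: initialize torch.distributed from SLURM's per-task env vars.
# Assumes MASTER_ADDR / MASTER_PORT are exported in the sbatch script.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

rank       = int(os.environ['SLURM_PROCID'])   # global rank across all nodes
world_size = int(os.environ['SLURM_NTASKS'])   # total number of processes
local_rank = int(os.environ['SLURM_LOCALID'])  # rank within this node

dist.init_process_group('nccl', rank=rank, world_size=world_size)
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(10, 10).cuda(local_rank)   # placeholder model
model = DDP(model, device_ids=[local_rank])
```

Each task launched by `srun` runs this same script, typically one process per GPU.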
```python
# Random axis-aligned rectangular (cutout-style) boolean mask for a batch of images
import torch

x = torch.randn(1, 3, 800, 1200)
n, _, h, w = x.shape
rnd = torch.rand(2, n, 1, 2).sort(-1).values   # sorted (top,bottom) and (left,right) fractions in [0,1]
r = torch.linspace(0, 1, h + 2)[None, None]    # normalized row coordinates
c = torch.linspace(0, 1, w + 2)[None, None]    # normalized column coordinates
# True inside the random rectangle; trim the padding rows/cols and broadcast to x's shape
mask = (((r > rnd[0, :, :, :1]) & (r < rnd[0, :, :, 1:])).unsqueeze(-1) *
        ((c > rnd[1, :, :, :1]) & (c < rnd[1, :, :, 1:])).unsqueeze(-2))[:, :, 1:-1, 1:-1].expand_as(x)
```
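A small usage sketch (my addition, assuming the mask is meant for cutout-style augmentation): zero out the randomly chosen rectangle in each image.

```python
# Hypothetical usage of the mask above: blank out the random rectangle.
x_cut = x.masked_fill(mask, 0.0)
print(mask.float().mean())   # fraction of pixels inside the rectangle
```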
```python
# (c) Matthew Wardrop 2019; Licensed under the MIT license
#
# This script provides the ability to run a notebook in the same Python
# process as this script, allowing it to access the variables created
# by the notebook for other purposes. In most cases this is of limited
# utility and not a best practice, but there are some limited cases in
# which this capability is valuable, and this script was created for
# such cases. For all other cases, you are better off using the
# `nbconvert` execution API:
# https://nbconvert.readthedocs.io/en/latest/execute_api.html
```
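The rest of the script is not shown here. As a rough sketch of the general idea (my own minimal version, not Wardrop's code, assuming the notebook's code cells are simply executed in the current process), something like this works:

```python
# Minimal sketch: execute a notebook's code cells in-process so its variables
# become accessible afterwards. Ignores IPython magics and other notebook-only
# features, so it only handles plain-Python cells.
import nbformat

nb = nbformat.read('analysis.ipynb', as_version=4)   # 'analysis.ipynb' is a placeholder path
ns = {}
for cell in nb.cells:
    if cell.cell_type == 'code':
        exec(cell.source, ns)                        # run each code cell in a shared namespace
# ns now holds every variable the notebook defined
```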