In this article, I will share some of my experience installing the NVIDIA driver and CUDA on Linux. I mainly use Ubuntu as the example; notes for CentOS/Fedora are provided where I can.
import torch

x = torch.randn(1, 2, 1025)  # (batch, channels, sequence length)

##### ENCODER
# layer-1: four Conv1d branches over the same input, differing only in kernel size
downsample_1a = torch.nn.Conv1d(2, 20, 5, stride=1, padding=0)
downsample_1b = torch.nn.Conv1d(2, 20, 50, stride=1, padding=0)
downsample_1c = torch.nn.Conv1d(2, 20, 256, stride=1, padding=0)
downsample_1d = torch.nn.Conv1d(2, 20, 512, stride=1, padding=0)
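For reference, with stride 1 and no padding a Conv1d output length is L_in - kernel_size + 1, so the four branches produce different sequence lengths from the same input. A quick check, using the x defined above:

out_a = downsample_1a(x)  # kernel 5   -> torch.Size([1, 20, 1021])
out_b = downsample_1b(x)  # kernel 50  -> torch.Size([1, 20, 976])
out_c = downsample_1c(x)  # kernel 256 -> torch.Size([1, 20, 770])
out_d = downsample_1d(x)  # kernel 512 -> torch.Size([1, 20, 514])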
Whether you're trying to give back to the open source community or collaborating on your own projects, knowing how to properly fork and generate pull requests is essential. Unfortunately, it's easy to make mistakes, or simply not know what to do, when you're first learning the process. I certainly had considerable trouble with it initially, and I found much of the information on GitHub and around the internet to be piecemeal and incomplete: part of the process described here, another part there, common hang-ups somewhere else entirely.
In an attempt to collate this information for myself and others, this short tutorial covers what I've found to be fairly standard procedure for creating a fork, doing your work, issuing a pull request, and merging that pull request back into the original project.
Just head over to the project's GitHub page and click the "Fork" button. It's just that simple. Once you've done that, you can use your favorite git client to clone your repo.
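On the command line, the standard pattern looks something like this (the URLs and names here are placeholders for your fork and the original project):

# clone your fork
git clone https://github.com/<your-username>/<project>.git
cd <project>
# add the original repository as a remote, conventionally named "upstream"
git remote add upstream https://github.com/<original-owner>/<project>.git
# do your work on a feature branch rather than on the default branch
git checkout -b my-feature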
import torch as pt
import pytorch_lightning as pl
#######################################################################
class FlashModel(pl.LightningModule):
    """This defines a MODEL"""
    def __init__(self, num_layers: int = 3):
        super().__init__()
        # num_layers is accepted but unused in this stub;
        # nn.Linear requires (in_features, out_features) — sizes here are placeholders
        self.layer1 = pt.nn.Linear(32, 32)
        self.layer2 = pt.nn.Linear(32, 32)
### DATALOADERS ##################################################################
# When building DataLoaders, set `num_workers > 0` and `pin_memory=True`:
DataLoader(dataset, num_workers=8, pin_memory=True)
### num_workers ##################################################################
# num_workers depends on the batch size and the machine.
# A good place to start is num_workers = number of CPUs in the machine.
# Increasing num_workers also increases CPU usage.
# BEST TIP: increase num_workers slowly and stop when there is no further performance gain.
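A minimal, self-contained sketch of that tip (the TensorDataset here is a stand-in; tune num_workers from this starting point while watching throughput):

import os
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(1000, 32), torch.randint(0, 2, (1000,)))
num_workers = os.cpu_count() or 1  # start at the CPU count, then adjust
loader = DataLoader(dataset, batch_size=64, num_workers=num_workers, pin_memory=True)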
# A LightningModule ORGANIZES the PyTorch code into the following parts
# (see the sketch after this block):
# 1. Computations (init)
# 2. Training loop (training_step)
# 3. Validation loop (validation_step)
# 4. Test loop (test_step)
# 5. Optimizers (configure_optimizers)
##############################################################################
model = FlashModel()
trainer = pl.Trainer()  # Trainer lives in pytorch_lightning, imported above as pl
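Here is a minimal sketch of a LightningModule with all five parts filled in (the layer sizes, metric names, and learning rate are placeholders, not from the original):

import torch as pt
import torch.nn.functional as F
import pytorch_lightning as pl

class SketchModel(pl.LightningModule):
    # 1. Computations
    def __init__(self):
        super().__init__()
        self.net = pt.nn.Linear(28 * 28, 10)  # placeholder dimensions

    # 2. Training loop
    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.cross_entropy(self.net(x.view(x.size(0), -1)), y)

    # 3. Validation loop
    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", F.cross_entropy(self.net(x.view(x.size(0), -1)), y))

    # 4. Test loop
    def test_step(self, batch, batch_idx):
        x, y = batch
        self.log("test_loss", F.cross_entropy(self.net(x.view(x.size(0), -1)), y))

    # 5. Optimizers
    def configure_optimizers(self):
        return pt.optim.Adam(self.parameters(), lr=1e-3)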
import os
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms
from torch.utils.data import DataLoader
import pytorch_lightning as pl
###########################################################################################
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
###########################################################################################
## PyTorch Lightning version
class FlashModel(pl.LightningModule):
    """Wraps an arbitrary torch model in a LightningModule."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x)
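A hypothetical usage, wrapping a plain torch module (the architecture is made up for illustration):

backbone = torch.nn.Sequential(
    torch.nn.Linear(32, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),
)
model = FlashModel(backbone)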