A guide on Colab TPU training using PyTorch XLA (Part 7)
import torch.nn as nn
import torch.optim as optim
import torch_xla.core.xla_model as xm

# grab the TPU core as an XLA device
device = xm.xla_device()

# define some hyper-params you'd feed into your model
in_channels = ...
random_param = ...

# create model using appropriate hyper-params
net = MyCustomNet(...)

# seat it atop the TPU worker device and switch it to train mode
net = net.to(device).train()

# get the loss function and optimizer (use anything)
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(net.parameters(), lr=..., betas=(...))
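
For context, here is a minimal sketch of how these pieces could slot into a single-core training loop; `train_loader` and the loop body are assumptions for illustration, but `xm.optimizer_step` is the PyTorch XLA call that stands in for the usual `optimizer.step()` on TPU.

# minimal sketch (assumed, not the guide's own loop): one pass over a
# hypothetical `train_loader`, training on the TPU device defined above
for data, target in train_loader:
    # move each batch onto the XLA device
    data, target = data.to(device), target.to(device)

    optimizer.zero_grad()
    output = net(data)
    loss = criterion(output, target)
    loss.backward()

    # on TPU this replaces optimizer.step(); barrier=True forces the
    # pending XLA graph to execute when no parallel loader is used
    xm.optimizer_step(optimizer, barrier=True)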