@philtomson
philtomson / gist:342e5f8330a46eb2793ffa50d90ca575
Created February 2, 2024 02:38
PYTORCH TIP: mixed precision training
"""
Use torch.cuda.amp in PyTorch for mixed precision training.
This method mixes 32-bit and 16-bit data to reduce memory use and speed up model training,
without much loss in accuracy.
It takes advantage of the quick computing of 16-bit data and controls precision by handling specific operations in 32-bit.
This approach offers a balance between speed and accuracy in training models.
"""
import torch
from torch.cuda.amp import autocast, GradScaler
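
# A minimal sketch of the usual autocast/GradScaler training-loop pattern.
# The model, optimizer, loss function, and synthetic data below are
# hypothetical stand-ins -- substitute your own. Assumes a CUDA device.

model = torch.nn.Linear(128, 10).cuda()                      # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

# Synthetic batches so the sketch is self-contained; replace with a DataLoader.
data_loader = [(torch.randn(32, 128), torch.randint(0, 10, (32,)))
               for _ in range(10)]

scaler = GradScaler()  # scales the loss to avoid fp16 gradient underflow

for inputs, targets in data_loader:
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad()

    # Forward pass in mixed precision: eligible ops run in float16,
    # precision-sensitive ops stay in float32.
    with autocast():
        outputs = model(inputs)
        loss = loss_fn(outputs, targets)

    # Backward on the scaled loss, then unscale and step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)   # skips the step if gradients contain inf/NaN
    scaler.update()          # adjusts the scale factor for the next iteration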