Xin Chen (aadps), Beijing
import torch
from torch.optim.optimizer import Optimizer, required


class SGD_MC(Optimizer):
    r"""Implements stochastic gradient descent (optionally with momentum).

    Nesterov momentum is based on the formula from
    `On the importance of initialization and momentum in deep learning`__.

    Args:
        params (iterable): iterable of parameters to optimize or dicts defining
            parameter groups
    """
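The gist is truncated before the optimizer's step logic, so as a rough illustration of the update rules the docstring refers to, here is a minimal pure-Python sketch of SGD with classical and Nesterov momentum. This is an assumption-laden stand-in, not the gist's actual `SGD_MC` implementation; it follows the same convention as `torch.optim.SGD`, where the Nesterov variant applies `grad + momentum * buf` after updating the momentum buffer.

```python
# Minimal sketch of SGD update rules with momentum; NOT the gist's SGD_MC
# code (which is truncated above). Parameters, gradients, and momentum
# buffers are plain parallel lists of floats for clarity.

def sgd_step(params, grads, velocities, lr=0.1, momentum=0.9, nesterov=False):
    """Return updated (params, velocities) after one SGD step."""
    new_params, new_velocities = [], []
    for p, g, v in zip(params, grads, velocities):
        v = momentum * v + g                       # accumulate velocity
        step = g + momentum * v if nesterov else v  # Nesterov looks ahead
        new_params.append(p - lr * step)
        new_velocities.append(v)
    return new_params, new_velocities


# Toy usage: minimize f(x) = x^2 (gradient 2x) starting from x = 1.0.
params, vel = [1.0], [0.0]
for _ in range(200):
    grads = [2 * p for p in params]
    params, vel = sgd_step(params, grads, vel, lr=0.05, momentum=0.9)
print(params[0])
```

With momentum the iterate oscillates around the minimum before settling, which is the classic heavy-ball behavior the Sutskever et al. paper analyzes.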
#!/bin/bash
# Intel MKL is now freely available even for commercial use. This script
# attempts to install the MKL package automatically from Intel's repository.
#
# For manual repository setup instructions, see:
# https://software.intel.com/articles/installing-intel-free-libs-and-python-yum-repo
# https://software.intel.com/articles/installing-intel-free-libs-and-python-apt-repo
#
# For other package managers, or non-Linux platforms, see: