@soumith
Created April 23, 2018 14:28
https://github.com/pytorch/pytorch/pull/3505 improvements to ModuleList and ParameterList classes
https://github.com/pytorch/pytorch/pull/3501 implement `__dir__` for Variable
https://github.com/pytorch/pytorch/pull/3435 Implemented NCCL Distributed Backend for PyTorch with new dist APIs
https://github.com/pytorch/pytorch/pull/3384 Allow in-place operations on views
https://github.com/pytorch/pytorch/pull/3341 add gumbel_softmax, based on Eric Jang's implementation
https://github.com/pytorch/pytorch/pull/3263 Add torch.take and Tensor.put_
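A quick sketch of the flat-index semantics `torch.take` and `Tensor.put_` share; the values here are illustrative:

```python
import torch

x = torch.tensor([[10, 20, 30],
                  [40, 50, 60]])

# torch.take indexes the input as if it were flattened (row-major)
idx = torch.tensor([0, 2, 5])
print(torch.take(x, idx))    # tensor([10, 30, 60])

# Tensor.put_ is the in-place counterpart: write values at flat positions
x.put_(torch.tensor([1, 4]), torch.tensor([-1, -2]))
print(x)                     # element 1 (20) -> -1, element 4 (50) -> -2
```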
https://github.com/pytorch/pytorch/pull/2953 WIP: add numpy() and from_numpy() to HalfTensor
https://github.com/pytorch/pytorch/pull/6528 Support arbitrary number of batch dimensions in *FFT
https://github.com/pytorch/pytorch/pull/6470 Separate cuda-ness from dtype.
https://github.com/pytorch/pytorch/pull/6467 [Re-checkpointing] Autograd container for trading compute for memory
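A minimal sketch of the checkpointing API this lands as `torch.utils.checkpoint.checkpoint`: activations inside the checkpointed function are recomputed during backward instead of being stored.

```python
import torch
from torch.utils.checkpoint import checkpoint

layer1 = torch.nn.Linear(128, 128)
layer2 = torch.nn.Linear(128, 128)

def block(x):
    # intermediate activations here are freed after forward and
    # recomputed during backward, trading compute for memory
    return layer2(torch.relu(layer1(x)))

x = torch.randn(32, 128, requires_grad=True)
y = checkpoint(block, x)
y.sum().backward()
```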
https://github.com/pytorch/pytorch/pull/6425 bottleneck supports better user-provided arguments
https://github.com/pytorch/pytorch/pull/6327 Add total_length option to pad_packed_sequence
https://github.com/pytorch/pytorch/pull/6307 Implement torch.einsum (fixes #1889)
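For reference, a small `torch.einsum` example; the varargs call style below matches current PyTorch (the earliest versions took a list of operands instead):

```python
import torch

a = torch.randn(3, 4)
b = torch.randn(4, 5)

# matrix multiply expressed as an Einstein-summation equation
c = torch.einsum('ij,jk->ik', a, b)
assert torch.allclose(c, a @ b)

# batched outer product: one equation instead of a reshape/bmm dance
u, v = torch.randn(8, 3), torch.randn(8, 4)
outer = torch.einsum('bi,bj->bij', u, v)    # shape (8, 3, 4)
```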
https://github.com/pytorch/pytorch/pull/6283 Add string-style devices to all tensors.
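Together with #6470 above, this gives the device/dtype split; a short sketch:

```python
import torch

# devices are plain strings (or torch.device objects)...
x = torch.zeros(2, 3, device='cpu')

# ...and .to() moves across devices or dtypes independently
if torch.cuda.is_available():
    y = x.to('cuda:0')          # cuda-ness lives in the device
    z = y.to(torch.float64)     # dtype is tracked separately
print(x.device, x.dtype)        # cpu torch.float32
```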
https://github.com/pytorch/pytorch/pull/6272 [ready] Implement log2 and log10 in PyTorch
https://github.com/pytorch/pytorch/pull/6145 Introduce torch.layout and split layout from dtypes.
https://github.com/pytorch/pytorch/pull/6136 [WIP] randint function
https://github.com/pytorch/pytorch/pull/6113 Support returning dictionaries in DataParallel
https://github.com/pytorch/pytorch/pull/6093 Add underscore to nn.init.* and deprecate the original ones
https://github.com/pytorch/pytorch/pull/6058 Create safe and unsafe versions of sparse_coo_tensor
https://github.com/pytorch/pytorch/pull/6038 Enable TensorDataset to get any number of tensors
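A small usage sketch; the three tensors are illustrative:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

inputs = torch.randn(100, 10)
targets = torch.randint(0, 2, (100,))
weights = torch.rand(100)

# any number of tensors, as long as their first dimensions match
ds = TensorDataset(inputs, targets, weights)
xb, yb, wb = next(iter(DataLoader(ds, batch_size=16)))
```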
https://github.com/pytorch/pytorch/pull/5980 Support batch LowerCholeskyTransform
https://github.com/pytorch/pytorch/pull/5968 Group Normalization
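A minimal `nn.GroupNorm` sketch (shapes are illustrative):

```python
import torch
import torch.nn as nn

# 6 channels split into 3 groups; statistics are computed per sample
# and per group, so the layer is independent of batch size
gn = nn.GroupNorm(num_groups=3, num_channels=6)
x = torch.randn(4, 6, 32, 32)
print(gn(x).shape)    # torch.Size([4, 6, 32, 32])
```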
https://github.com/pytorch/pytorch/pull/5919 add mpi support for DDP
https://github.com/pytorch/pytorch/pull/5856 [fft][2 of 3] Forward for fft methods
https://github.com/pytorch/pytorch/pull/5824 introduce shape_as_tensor and reshape_from_tensor_shape
https://github.com/pytorch/pytorch/pull/5781 [REDO] Add torch.sparse_coo_tensor factory.
https://github.com/pytorch/pytorch/pull/5764 Support N-D tensors in Bilinear
https://github.com/pytorch/pytorch/pull/5668 Add torch.empty, torch.full and new_* size Tensor factory methods.
https://github.com/pytorch/pytorch/pull/5622 Alias torch.diagonal, torch.diagflat
https://github.com/pytorch/pytorch/pull/5600 Make torch.arange consistent with numpy.arange
https://github.com/pytorch/pytorch/pull/5583 Allow indexing tensors with both CPU and CUDA tensors
https://github.com/pytorch/pytorch/pull/5575 Implement torch.reshape and Tensor.reshape
https://github.com/pytorch/pytorch/pull/5555 Add set_grad_enabled as context manager and function
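A sketch covering the two entries above:

```python
import torch

x = torch.randn(4, 4).t()       # transposed, hence non-contiguous
y = torch.reshape(x, (2, 8))    # returns a view when possible,
                                # copies when the layout forbids one

# set_grad_enabled works as a context manager...
with torch.set_grad_enabled(False):
    z = (y * 2).sum()           # built without tracking history

# ...and as a plain function toggling the global autograd state
torch.set_grad_enabled(True)
```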
https://github.com/pytorch/pytorch/pull/5540 add padding_value to `torch.nn.utils.rnn.pad_sequence`
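These packing utilities compose; a sketch that also uses `total_length` from #6327 above and the tensor `lengths` from #5113 below:

```python
import torch
from torch.nn.utils.rnn import (pad_sequence, pack_padded_sequence,
                                pad_packed_sequence)

seqs = [torch.randn(5, 8), torch.randn(3, 8), torch.randn(2, 8)]
lengths = torch.tensor([5, 3, 2])     # sorted longest-first

# pad with an explicit padding_value, then pack for the RNN
padded = pad_sequence(seqs, batch_first=True, padding_value=0.0)
packed = pack_padded_sequence(padded, lengths, batch_first=True)

rnn = torch.nn.GRU(8, 16, batch_first=True)
out_packed, _ = rnn(packed)

# total_length pins the unpacked length, which keeps per-GPU
# chunks the same size under DataParallel
out, _ = pad_packed_sequence(out_packed, batch_first=True, total_length=5)
print(out.shape)    # torch.Size([3, 5, 16])
```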
https://github.com/pytorch/pytorch/pull/5503 Add per-element unique op for CPU
https://github.com/pytorch/pytorch/pull/5501 Set default amsgrad param in adam optimizer
https://github.com/pytorch/pytorch/pull/5466 torch.load() / torch.save() support arbitrary file-like object
https://github.com/pytorch/pytorch/pull/5453 Added 3d grid sampler (for volumetric transformer networks)
https://github.com/pytorch/pytorch/pull/5444 Add dtype to torch.Tensor constructors and accept them in set_default_tensor_type
https://github.com/pytorch/pytorch/pull/5441 Support type conversion via type(dtype).
https://github.com/pytorch/pytorch/pull/5419 Introduce torch.tensor (was torch.autograd.variable).
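A sketch of the new factory, alongside `torch.empty`/`torch.full` from #5668 above:

```python
import torch

a = torch.tensor([1, 2, 3])        # copies data, infers int64
b = torch.tensor([1.0, 2.0])       # infers float32

# dtype (and device) can be given explicitly
c = torch.tensor([[1, 2], [3, 4]], dtype=torch.float64)

d = torch.empty(2, 3)              # uninitialized storage
e = torch.full((2, 2), 7.0)        # constant fill
```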
https://github.com/pytorch/pytorch/pull/5415 Set python random seed in workers
https://github.com/pytorch/pytorch/pull/5408 Expose gradients w.r.t. input & weight for conv1d, conv2d, conv3d in Python
https://github.com/pytorch/pytorch/pull/5393 [ready] Add logdet and slogdet
https://github.com/pytorch/pytorch/pull/5378 Add faq on cuda memory management and dataloader worker seeds
https://github.com/pytorch/pytorch/pull/5350 Embedding.from_pretrained factory
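A minimal sketch; the weight matrix stands in for real pretrained vectors:

```python
import torch
import torch.nn as nn

weights = torch.randn(1000, 50)              # e.g. loaded GloVe rows
emb = nn.Embedding.from_pretrained(weights)  # frozen by default

vecs = emb(torch.tensor([3, 17, 42]))        # (3, 50) lookup
assert not emb.weight.requires_grad          # pass freeze=False to fine-tune
```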
https://github.com/pytorch/pytorch/pull/5348 Added torch.distributed.launch module for easier multi-proc/node distributed job launching
https://github.com/pytorch/pytorch/pull/5335 Improve sparse variable printing.
https://github.com/pytorch/pytorch/pull/5328 implement double backwards for MaxPool3d
https://github.com/pytorch/pytorch/pull/5324 Improve CUDA extension support
https://github.com/pytorch/pytorch/pull/5300 Make ReduceLROnPlateau serializable.
https://github.com/pytorch/pytorch/pull/5294 Configurable flushing denormal numbers on CPU
https://github.com/pytorch/pytorch/pull/5273 Implement torch.isnan
https://github.com/pytorch/pytorch/pull/5233 Include __delitem__ for Sequential
https://github.com/pytorch/pytorch/pull/5216 Implement torch.utils.bottleneck
https://github.com/pytorch/pytorch/pull/5113 Support calling pack_padded_sequence with a Variable lengths
https://github.com/pytorch/pytorch/pull/5017 Expose sparse variable sspaddmm
https://github.com/pytorch/pytorch/pull/5016 Expose sparse variable addmm, addmm_
https://github.com/pytorch/pytorch/pull/4999 Replace async with non_blocking for Python 3.7
https://github.com/pytorch/pytorch/pull/4978 DDP: coalescing many little broadcasts to improve performance
https://github.com/pytorch/pytorch/pull/4949 torch.set_num_threads sets MKL option too
https://github.com/pytorch/pytorch/pull/4931 Add assignment support for Sequential
https://github.com/pytorch/pytorch/pull/4922 [ready] Layer Normalization
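And the corresponding `nn.LayerNorm` sketch:

```python
import torch
import torch.nn as nn

# normalizes over the trailing feature dimension for every position,
# independent of batch and sequence length
ln = nn.LayerNorm(64)
x = torch.randn(16, 10, 64)    # (batch, seq, features)
print(ln(x).shape)             # torch.Size([16, 10, 64])
```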
https://github.com/pytorch/pytorch/pull/4921 Release NCCL distributed backend from experimental
https://github.com/pytorch/pytorch/pull/4891 Added mixed-precision support in distributed training
https://github.com/pytorch/pytorch/pull/4886 Deprecate out-of-place resize and resize_as on Variables.
https://github.com/pytorch/pytorch/pull/4883 Fix torch.pstrf on Variables
https://github.com/pytorch/pytorch/pull/4882 Implement sparse tensor and variable norm(value)
https://github.com/pytorch/pytorch/pull/4876 Addition of ExponentialFamily
https://github.com/pytorch/pytorch/pull/4873 Added Poisson self KL + Bernoulli/Poisson KL
https://github.com/pytorch/pytorch/pull/4795 Enabling Infiniband support for Gloo data channel with auto IB detection
https://github.com/pytorch/pytorch/pull/4746 Heuristic-based autograd execution order
https://github.com/pytorch/pytorch/pull/4683 Add print support for sparse variables
https://github.com/pytorch/pytorch/pull/4667 Local Response Normalization
https://github.com/pytorch/pytorch/pull/4654 NLLLoss: current code works with dim = 3, so I added it to dim checks
https://github.com/pytorch/pytorch/pull/4512 Implement backward pass for pack_padded_sequence
https://github.com/pytorch/pytorch/pull/4511 Methods for checking CUDA memory usage
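A sketch of the introspection calls this adds under `torch.cuda` (the reserved-pool counter was originally named `memory_cached`):

```python
import torch

if torch.cuda.is_available():
    x = torch.randn(1024, 1024, device='cuda')
    # bytes currently held by live tensors
    print(torch.cuda.memory_allocated())
    # high-water mark since program start
    print(torch.cuda.max_memory_allocated())
```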
https://github.com/pytorch/pytorch/pull/4496 Padding_idx in Embedding supports negative indexing
https://github.com/pytorch/pytorch/pull/4491 Add Slicing capabilities for Sequential, ModuleList and ParameterList
https://github.com/pytorch/pytorch/pull/4409 RNN support has been implemented
https://github.com/pytorch/pytorch/pull/4367 Fix creating tensors with np.longlong array
https://github.com/pytorch/pytorch/pull/4350 Adding torch.expm1() and its inplace function
https://github.com/pytorch/pytorch/pull/4318 Fix btrifact for variables
https://github.com/pytorch/pytorch/pull/4298 Enable functional torch.where.
https://github.com/pytorch/pytorch/pull/4259 Implement torch.where(condition, x, y) CPU Variable.
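The two `torch.where` entries above pair naturally with `torch.isnan` from #5273; a sketch:

```python
import torch

x = torch.tensor([1.0, float('nan'), 3.0])

# element-wise select: take from the second argument where the
# condition is True, from the third where it is False
cleaned = torch.where(torch.isnan(x), torch.zeros_like(x), x)
print(cleaned)    # tensor([1., 0., 3.])
```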
https://github.com/pytorch/pytorch/pull/4209 Make import work even if 'tools' is available in Python path
https://github.com/pytorch/pytorch/pull/4163 support RNN export
https://github.com/pytorch/pytorch/pull/4135 Add an option to suppress download progress
https://github.com/pytorch/pytorch/pull/4095 Add python only default init expression; Implement stft, hann/hamming/bartlett window.
https://github.com/pytorch/pytorch/pull/4075 Implement neg for all types
https://github.com/pytorch/pytorch/pull/4062 Allow .view on noncontig tensors when certain conditions are met
https://github.com/pytorch/pytorch/pull/4034 added AMSgrad optimizer to Adam and SparseAdam
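AMSGrad arrives as a flag on the existing optimizer rather than a new class; a sketch:

```python
import torch

model = torch.nn.Linear(10, 1)
# keeps the running max of the second-moment estimate (AMSGrad)
opt = torch.optim.Adam(model.parameters(), lr=1e-3, amsgrad=True)

model(torch.randn(4, 10)).sum().backward()
opt.step()
```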
https://github.com/pytorch/pytorch/pull/3972 Implements gradients calculation for trtrs
https://github.com/pytorch/pytorch/pull/3875 Pad sequences and Pack sequences
https://github.com/pytorch/pytorch/pull/3837 added size_splits to functional
https://github.com/pytorch/pytorch/pull/3820 Make integer parameters and buffers immune to float(), double() and half()
https://github.com/pytorch/pytorch/pull/3816 Add determinant function on variable; Add backward on svd
https://github.com/pytorch/pytorch/pull/3734 Add cudaEvent support to the profiler