Created April 18, 2018 18:52
https://github.com/pytorch/pytorch/pull/6692 Update from Facebook | |
https://github.com/pytorch/pytorch/pull/6681 fix broken code from rebasing | |
https://github.com/pytorch/pytorch/pull/6677 Make torch.backends.mkl.is_available() work without importing | |
https://github.com/pytorch/pytorch/pull/6673 Scope variables inside the dataloader | |
https://github.com/pytorch/pytorch/pull/6670 Update tensors.rst Tensor introduction | |
https://github.com/pytorch/pytorch/pull/6665 Fix LSTM and GRU parameters description | |
https://github.com/pytorch/pytorch/pull/6664 Adding dispatch to Tensors | |
https://github.com/pytorch/pytorch/pull/6659 Fix some loss output sizes | |
https://github.com/pytorch/pytorch/pull/6656 randperm supports n=0 | |
https://github.com/pytorch/pytorch/pull/6654 Always compute gradients for the gradcheck inputs | |
https://github.com/pytorch/pytorch/pull/6648 Support gpu triangle solve | |
https://github.com/pytorch/pytorch/pull/6647 Gltensor fix | |
https://github.com/pytorch/pytorch/pull/6642 Allow traces to call @script functions | |
https://github.com/pytorch/pytorch/pull/6641 Codemod to update our codebase to 0.4 standard | |
https://github.com/pytorch/pytorch/pull/6637 Bind 0-dim variables without requires grad to int64/double similar to | |
https://github.com/pytorch/pytorch/pull/6629 Add Module.to | |
https://github.com/pytorch/pytorch/pull/6628 Make dtype in .to positional rather than kwarg only | |
https://github.com/pytorch/pytorch/pull/6617 nll_loss: Fixed text of error message in case of unexpected target size | |
https://github.com/pytorch/pytorch/pull/6616 [Adagrad optimization] adding initial_accumulator_value parameter to Adagrad | |
https://github.com/pytorch/pytorch/pull/6599 Split set_default_tensor_type(dtype) into set_default_dtype(dtype). | |
https://github.com/pytorch/pytorch/pull/6592 Change to ldd parsing regex | |
https://github.com/pytorch/pytorch/pull/6588 Add tensor.to(device) method. | |
https://github.com/pytorch/pytorch/pull/6575 Fix torch.nn.RNN parameters description | |
https://github.com/pytorch/pytorch/pull/6573 Add dtypes (with reasonable defaults) to sum, prod, cumsum, cumprod. | |
https://github.com/pytorch/pytorch/pull/6553 Restore allow_unused functionality | |
https://github.com/pytorch/pytorch/pull/6541 Fix regression that STFT has no backward. | |
https://github.com/pytorch/pytorch/pull/6534 Conda binary changes | |
https://github.com/pytorch/pytorch/pull/6528 Support arbitrary number of batch dimensions in *FFT | |
https://github.com/pytorch/pytorch/pull/6517 More precise digamma | |
https://github.com/pytorch/pytorch/pull/6507 Adding autofunction entry for torch.randint | |
https://github.com/pytorch/pytorch/pull/6485 Add SmallVector from llvm | |
https://github.com/pytorch/pytorch/pull/6484 Sync current changes in ACL backend | |
https://github.com/pytorch/pytorch/pull/6470 Separate cuda-ness from dtype. | |
https://github.com/pytorch/pytorch/pull/6467 [Re-checkpointing] Autograd container for trading compute for memory | |
https://github.com/pytorch/pytorch/pull/6463 [pytorch] Fix signed random_ | |
https://github.com/pytorch/pytorch/pull/6438 Fix reflection padding boundary checks | |
https://github.com/pytorch/pytorch/pull/6426 Slice (instead of copy) when indexing by a zero-dim tensor | |
https://github.com/pytorch/pytorch/pull/6425 bottleneck supports better user-provided arguments | |
https://github.com/pytorch/pytorch/pull/6420 Use string comparison in OS check | |
https://github.com/pytorch/pytorch/pull/6418 [pytorch] Fix clamp is missing kwarg out (#6028) | |
https://github.com/pytorch/pytorch/pull/6409 Fix incorrect error message in convolution_expand_param_if_needed | |
https://github.com/pytorch/pytorch/pull/6405 Quote arguments only when possible | |
https://github.com/pytorch/pytorch/pull/6401 Add CUDA headers | |
https://github.com/pytorch/pytorch/pull/6396 Fixes #6386, Use copies instead of symbolic files | |
https://github.com/pytorch/pytorch/pull/6371 refactor reduce arg to _Loss superclass | |
https://github.com/pytorch/pytorch/pull/6367 Fix activation images not showing up on official website | |
https://github.com/pytorch/pytorch/pull/6358 Fix Sphinx's incorrect rendering of arg type torch.dtype | |
https://github.com/pytorch/pytorch/pull/6341 Allow script_methods to be defined out of order | |
https://github.com/pytorch/pytorch/pull/6330 Add missing arguments to the GivenTensorFill schema | |
https://github.com/pytorch/pytorch/pull/6327 Add total_length option to pad_packed_sequence | |
https://github.com/pytorch/pytorch/pull/6314 Move instruction set specific code to anonymous namespace | |
https://github.com/pytorch/pytorch/pull/6307 Implement torch.einsum (fixes #1889) | |
https://github.com/pytorch/pytorch/pull/6293 Remove eigen impl for arg_max and arg_min | |
https://github.com/pytorch/pytorch/pull/6289 Add default args to loss functions in native_functions.yaml | |
https://github.com/pytorch/pytorch/pull/6283 Add string-style devices to all tensors. | |
https://github.com/pytorch/pytorch/pull/6281 Use reshape({-1}) | |
https://github.com/pytorch/pytorch/pull/6274 Add a CODEOWNERS file | |
https://github.com/pytorch/pytorch/pull/6272 [ready] Implement log2 and log10 in PyTorch | |
https://github.com/pytorch/pytorch/pull/6254 Change Python Arg Parser to only read default params if they are assigned | |
https://github.com/pytorch/pytorch/pull/6252 fix assertion error when input size smaller than number of module_copies | |
https://github.com/pytorch/pytorch/pull/6251 Fix Tensor.__setstate__ for legacy Tensor state | |
https://github.com/pytorch/pytorch/pull/6250 Remove unnecessary properties from Layout. | |
https://github.com/pytorch/pytorch/pull/6249 Add arg checks in torch.utils.data.Sampler classes | |
https://github.com/pytorch/pytorch/pull/6244 Fix SGD lr check failing on default value | |
https://github.com/pytorch/pytorch/pull/6242 Fix potential UB when input is empty | |
https://github.com/pytorch/pytorch/pull/6238 Print the diff files to aid in debugging when it's wrong. | |
https://github.com/pytorch/pytorch/pull/6236 Remove unused variable in Layout.cpp. | |
https://github.com/pytorch/pytorch/pull/6232 Detect re-initialization of _C shared library | |
https://github.com/pytorch/pytorch/pull/6230 Fix memory leak in maxpool3d backwards | |
https://github.com/pytorch/pytorch/pull/6229 Fix sharing of empty tensor in multiprocessing | |
https://github.com/pytorch/pytorch/pull/6221 Fix AvgPool breaking changes | |
https://github.com/pytorch/pytorch/pull/6211 Fix sparse embedding backward when input contains only padding_idx | |
https://github.com/pytorch/pytorch/pull/6207 Fix argument checking for inlining a module | |
https://github.com/pytorch/pytorch/pull/6199 use peephole 'optimization' to export default hidden size with correct semantics | |
https://github.com/pytorch/pytorch/pull/6192 avx_mathfun.h is imprecise | |
https://github.com/pytorch/pytorch/pull/6185 Tell source users about TORCH_CUDA_ARCH_LIST | |
https://github.com/pytorch/pytorch/pull/6173 Update torch.nn.init and torch.nn.utils.clip_grad | |
https://github.com/pytorch/pytorch/pull/6159 Move helper scripts to new repo | |
https://github.com/pytorch/pytorch/pull/6158 Add dtype arg to torch.*_window; Add dtype.is_floating_point | |
https://github.com/pytorch/pytorch/pull/6151 Delete NNPACK | |
https://github.com/pytorch/pytorch/pull/6146 Fix logic inside insertInput | |
https://github.com/pytorch/pytorch/pull/6145 Introduce torch.layout and split layout from dtypes. | |
https://github.com/pytorch/pytorch/pull/6136 [WIP] randint function | |
https://github.com/pytorch/pytorch/pull/6128 Make precision matrix computation in mvn stable | |
https://github.com/pytorch/pytorch/pull/6118 Fix fft when any of the input dimensions is not aligned | |
https://github.com/pytorch/pytorch/pull/6114 Avoid generating torch.*_backward_(input|weight|bias) | |
https://github.com/pytorch/pytorch/pull/6113 Support returning dictionaries in DataParallel | |
https://github.com/pytorch/pytorch/pull/6110 Fix bilinear performance regression | |
https://github.com/pytorch/pytorch/pull/6108 Set dataloader.batch_size = None when batch_sampler is given | |
https://github.com/pytorch/pytorch/pull/6093 Add underscore to nn.init.* and deprecate the original ones | |
https://github.com/pytorch/pytorch/pull/6089 Update FFT comments | |
https://github.com/pytorch/pytorch/pull/6088 Unify handling of type_dispatched_args in gen_python_functions. | |
https://github.com/pytorch/pytorch/pull/6086 Add class-specific error when key mismatch in load_state_dict | |
https://github.com/pytorch/pytorch/pull/6081 Remove dtypes from legacy tensor.new(...) | |
https://github.com/pytorch/pytorch/pull/6078 Exp, log, sin, cos vectorized | |
https://github.com/pytorch/pytorch/pull/6076 Correct argument misspelling. | |
https://github.com/pytorch/pytorch/pull/6072 NLLLoss: error message for mismatched input/target batch sizes | |
https://github.com/pytorch/pytorch/pull/6070 Relax constraints on return statements in the script | |
https://github.com/pytorch/pytorch/pull/6069 Fix printing of unknown binop operator in torchscript | |
https://github.com/pytorch/pytorch/pull/6062 Enable MKLDNN convolution forward and backward | |
https://github.com/pytorch/pytorch/pull/6059 Add source location information to error messages | |
https://github.com/pytorch/pytorch/pull/6058 Create safe and unsafe versions of sparse_coo_tensor | |
https://github.com/pytorch/pytorch/pull/6043 Update cpuinfo to d0222b4 | |
https://github.com/pytorch/pytorch/pull/6038 Enable TensorDataset to get any number of tensors | |
https://github.com/pytorch/pytorch/pull/6037 Fix use-after-free bug in peephole pass | |
https://github.com/pytorch/pytorch/pull/6033 Add additional script module functionality | |
https://github.com/pytorch/pytorch/pull/6031 Block set from param_group['params'] | |
https://github.com/pytorch/pytorch/pull/6026 Speed up sum over a dimension | |
https://github.com/pytorch/pytorch/pull/6025 Reorganize third-party libraries into top-level third_party directory | |
https://github.com/pytorch/pytorch/pull/6023 Fix instance norm | |
https://github.com/pytorch/pytorch/pull/6022 mkl-include is not installable if your conda is too old. | |
https://github.com/pytorch/pytorch/pull/6017 Update FAQ to make more sense after tensor/variable merge | |
https://github.com/pytorch/pytorch/pull/6005 Extra comment about backward vs. grad in engine. | |
https://github.com/pytorch/pytorch/pull/6004 Move .jenkins to .jenkins/pytorch | |
https://github.com/pytorch/pytorch/pull/6000 Added parameter range checks for all optimizers | |
https://github.com/pytorch/pytorch/pull/5997 Add numpy.array-like type inference to torch.tensor. | |
https://github.com/pytorch/pytorch/pull/5992 fix bias size assert | |
https://github.com/pytorch/pytorch/pull/5991 add mkl dependencies to setup | |
https://github.com/pytorch/pytorch/pull/5984 Add pip mkl-devel to the error message about mkl header files | |
https://github.com/pytorch/pytorch/pull/5980 Support batch LowerCholeskyTransform | |
https://github.com/pytorch/pytorch/pull/5971 Fix crash when cat-ing empty cuda tensors | |
https://github.com/pytorch/pytorch/pull/5968 Group Normalization | |
https://github.com/pytorch/pytorch/pull/5965 Remove pragma once from cpp file | |
https://github.com/pytorch/pytorch/pull/5955 Recommend citation | |
https://github.com/pytorch/pytorch/pull/5951 Store perf numbers in S3 | |
https://github.com/pytorch/pytorch/pull/5945 Fix tensor.permute(dims) backward for negative dims | |
https://github.com/pytorch/pytorch/pull/5936 Add support for printing extra information in Module and refactor redundant codes | |
https://github.com/pytorch/pytorch/pull/5934 Fix index out of range error when view a scalar as 1-dim tensor | |
https://github.com/pytorch/pytorch/pull/5928 Remove consumed_input | |
https://github.com/pytorch/pytorch/pull/5927 Linearly interpolating upsampling fix | |
https://github.com/pytorch/pytorch/pull/5926 parallel_for_2d fix and guarding avx/avx2 compilation | |
https://github.com/pytorch/pytorch/pull/5919 add mpi support for DDP | |
https://github.com/pytorch/pytorch/pull/5914 Fix linking issue in libtorch under macOS | |
https://github.com/pytorch/pytorch/pull/5913 Optimize unique sorting by using std::vector+sort instead of std::set | |
https://github.com/pytorch/pytorch/pull/5909 Revert "Fix ImportError with requests in model_zoo" | |
https://github.com/pytorch/pytorch/pull/5906 Fix integer overflow in remainder operator | |
https://github.com/pytorch/pytorch/pull/5896 Fix ImportError with requests in model_zoo | |
https://github.com/pytorch/pytorch/pull/5894 Better error msg for missing mkl headers | |
https://github.com/pytorch/pytorch/pull/5893 Fix softmax symbolic | |
https://github.com/pytorch/pytorch/pull/5892 Revert "Enable resetting of batchnorm running stats and cumulative ("simple") moving average" | |
https://github.com/pytorch/pytorch/pull/5890 Add support for subscripts in Python frontend | |
https://github.com/pytorch/pytorch/pull/5889 Support legacy empty tensor behavior in cat | |
https://github.com/pytorch/pytorch/pull/5880 Don't modify requires_grad when running DataParallel in no_grad mode | |
https://github.com/pytorch/pytorch/pull/5866 Delete stubs from one more place. | |
https://github.com/pytorch/pytorch/pull/5856 [fft][2 of 3] Forward for fft methods | |
https://github.com/pytorch/pytorch/pull/5850 Fix crash in new cuda tensor with numpy array | |
https://github.com/pytorch/pytorch/pull/5846 Softmax symbolic should account for negative dim | |
https://github.com/pytorch/pytorch/pull/5843 Add support for number and list literals in Python frontend | |
https://github.com/pytorch/pytorch/pull/5840 Fix nvprof parsing | |
https://github.com/pytorch/pytorch/pull/5838 Add operator[](int64_t) overload | |
https://github.com/pytorch/pytorch/pull/5829 fix detach in place error in DDP | |
https://github.com/pytorch/pytorch/pull/5827 Implement range for loop in script | |
https://github.com/pytorch/pytorch/pull/5824 introduce shape_as_tensor and reshape_from_tensor_shape | |
https://github.com/pytorch/pytorch/pull/5822 Make static state function-local | |
https://github.com/pytorch/pytorch/pull/5820 Namespaced symbols | |
https://github.com/pytorch/pytorch/pull/5819 Fix error message for cat-ing zero-dim tensors | |
https://github.com/pytorch/pytorch/pull/5818 Revert "introduce size_as_tensor and resize_from_tensor" | |
https://github.com/pytorch/pytorch/pull/5817 Cleaner solution to the undefined references in RPC | |
https://github.com/pytorch/pytorch/pull/5815 Fix convolution type mismatch error message | |
https://github.com/pytorch/pytorch/pull/5814 Fix kldiv backward on CUDA | |
https://github.com/pytorch/pytorch/pull/5794 Define RPC types out of source | |
https://github.com/pytorch/pytorch/pull/5792 introduce size_as_tensor and resize_from_tensor | |
https://github.com/pytorch/pytorch/pull/5786 Add symbolic functions for cumsum and embedding_bag | |
https://github.com/pytorch/pytorch/pull/5785 Make the tensor type torch.Tensor instead of torch.autograd.Variable | |
https://github.com/pytorch/pytorch/pull/5782 fused GLU backward | |
https://github.com/pytorch/pytorch/pull/5781 [REDO] Add torch.sparse_coo_tensor factory. | |
https://github.com/pytorch/pytorch/pull/5780 Revert "Add torch.sparse_coo_tensor factory." | |
https://github.com/pytorch/pytorch/pull/5774 improve handling of precision issue in torch.multinomial (solves #4858) | |
https://github.com/pytorch/pytorch/pull/5773 use std:: math functions | |
https://github.com/pytorch/pytorch/pull/5766 Enable resetting of batchnorm running stats and cumulative ("simple") moving average | |
https://github.com/pytorch/pytorch/pull/5764 Support N-D tensors in Bilinear | |
https://github.com/pytorch/pytorch/pull/5756 Fixes Variable::data() on UndefinedTensor | |
https://github.com/pytorch/pytorch/pull/5755 Put torch header install back into the install command | |
https://github.com/pytorch/pytorch/pull/5749 Allow indexing by scalars and zero-dim tensors | |
https://github.com/pytorch/pytorch/pull/5747 Save self.numel() for backward computation instead of self | |
https://github.com/pytorch/pytorch/pull/5745 Add torch.sparse_coo_tensor factory. | |
https://github.com/pytorch/pytorch/pull/5744 Fix bmm memory leak | |
https://github.com/pytorch/pytorch/pull/5743 Clean up TraceInput | |
https://github.com/pytorch/pytorch/pull/5726 Attempt to fix #5718. | |
https://github.com/pytorch/pytorch/pull/5723 tbb set_num_threads | |
https://github.com/pytorch/pytorch/pull/5722 Add optimization to norm for common norms | |
https://github.com/pytorch/pytorch/pull/5717 Fix at::optional return type in fusibleExpandTo | |
https://github.com/pytorch/pytorch/pull/5713 Ensure torch.tensor and Tensor.new_tensor copy numpy data. | |
https://github.com/pytorch/pytorch/pull/5710 improve occupancy for cuda rngs | |
https://github.com/pytorch/pytorch/pull/5707 Delete ""_sym literal form. | |
https://github.com/pytorch/pytorch/pull/5701 Fix error message in nn.functional.convNd and nn.functional.conv_transposeNd | |
https://github.com/pytorch/pytorch/pull/5682 Fix floor latex rendering | |
https://github.com/pytorch/pytorch/pull/5680 implement TripletMarginLoss as a native function | |
https://github.com/pytorch/pytorch/pull/5674 Only allow dense floating-point types as the default tensor type. | |
https://github.com/pytorch/pytorch/pull/5669 Add device to Tensor.new_tensor. | |
https://github.com/pytorch/pytorch/pull/5668 Add torch.empty, torch.full and new_ size Tensor factory methods. | |
https://github.com/pytorch/pytorch/pull/5659 make dimension checker of `scatter_add_` consistent with `scatter_` | |
https://github.com/pytorch/pytorch/pull/5657 Check value type for register_buffer | |
https://github.com/pytorch/pytorch/pull/5655 Add gpu guard for broadcast_coalesce | |
https://github.com/pytorch/pytorch/pull/5646 implement CosineEmbeddingLoss as a native function and add reduce arg | |
https://github.com/pytorch/pytorch/pull/5645 Non var | |
https://github.com/pytorch/pytorch/pull/5644 Fix CUDA btrifact error message using wrong info type | |
https://github.com/pytorch/pytorch/pull/5643 Add additional deprecated overloads with out kwarg | |
https://github.com/pytorch/pytorch/pull/5642 Unify error checking for tensor.index_copy_ | |
https://github.com/pytorch/pytorch/pull/5640 Revert "implement CosineEmbeddingLoss as a native function and add reduce arg" | |
https://github.com/pytorch/pytorch/pull/5629 Traceable dispatch for cast methods | |
https://github.com/pytorch/pytorch/pull/5622 Alias torch.diagonal, torch.diagflat | |
https://github.com/pytorch/pytorch/pull/5621 Fix compilation with CUDA < 8.0 | |
https://github.com/pytorch/pytorch/pull/5619 Prefix DataLoaderIter with underscore to discourage subclassing | |
https://github.com/pytorch/pytorch/pull/5614 replace ExportProxy | |
https://github.com/pytorch/pytorch/pull/5600 Make torch.arange consistent with numpy.arange | |
https://github.com/pytorch/pytorch/pull/5596 remove legacy workaround for hinge embedding loss reference fn | |
https://github.com/pytorch/pytorch/pull/5595 allow application of @symbolic decorators without circular imports | |
https://github.com/pytorch/pytorch/pull/5594 release() does not need to be virtual | |
https://github.com/pytorch/pytorch/pull/5593 Fix for a confusion around grammar of Maybe | |
https://github.com/pytorch/pytorch/pull/5583 Allow indexing tensors with both CPU and CUDA tensors | |
https://github.com/pytorch/pytorch/pull/5582 Use operator.index to convert indices to Python int | |
https://github.com/pytorch/pytorch/pull/5581 Fix Variable conversion on the way to/from Python | |
https://github.com/pytorch/pytorch/pull/5576 Support native namespace functions with type dispatch. | |
https://github.com/pytorch/pytorch/pull/5575 Implement torch.reshape and Tensor.reshape | |
https://github.com/pytorch/pytorch/pull/5574 Defer shape analysis failures until runtime | |
https://github.com/pytorch/pytorch/pull/5558 Missed Step | |
https://github.com/pytorch/pytorch/pull/5555 Add set_grad_enabled as context manager and function | |
https://github.com/pytorch/pytorch/pull/5540 add: padding_value to `torch.nn.utils.rnn.pad_sequence` | |
https://github.com/pytorch/pytorch/pull/5530 Add at::optional from https://github.com/akrzemi1/Optional | |
https://github.com/pytorch/pytorch/pull/5526 Implement pow() for integer types | |
https://github.com/pytorch/pytorch/pull/5525 Expunge all occurrences of torch._C._VariableFunctions | |
https://github.com/pytorch/pytorch/pull/5516 Add virtual destructor to SourceLocation | |
https://github.com/pytorch/pytorch/pull/5512 CharTensor should be signed | |
https://github.com/pytorch/pytorch/pull/5508 Replace all uses of 'Tensor or Variable' with 'Tensor' | |
https://github.com/pytorch/pytorch/pull/5505 Some additional clean-ups | |
https://github.com/pytorch/pytorch/pull/5503 Add per-element unique op for CPU | |
https://github.com/pytorch/pytorch/pull/5501 Set default amsgrad param in adam optimizer | |
https://github.com/pytorch/pytorch/pull/5500 Delete unused files | |
https://github.com/pytorch/pytorch/pull/5498 Fix naming issue in TensorCompare.cpp | |
https://github.com/pytorch/pytorch/pull/5493 Lower max jobs osx | |
https://github.com/pytorch/pytorch/pull/5488 Recompute captures after the parameter is updated | |
https://github.com/pytorch/pytorch/pull/5483 Enable additional tensor types in Gloo backend | |
https://github.com/pytorch/pytorch/pull/5482 install pytorch into default conda env | |
https://github.com/pytorch/pytorch/pull/5476 Deprecate variable factory, use torch.tensor instead | |
https://github.com/pytorch/pytorch/pull/5473 Remove some uses of torch.is_tensor in favor of isinstance | |
https://github.com/pytorch/pytorch/pull/5471 Reorganize interned strings into categories. | |
https://github.com/pytorch/pytorch/pull/5467 Check that parsed_args contains enough space for all parameters | |
https://github.com/pytorch/pytorch/pull/5466 torch.load() / torch.save() support arbitrary file-like object | |
https://github.com/pytorch/pytorch/pull/5465 Use 'Tensor' instead of 'Variable' in type error messages | |
https://github.com/pytorch/pytorch/pull/5464 More Variable/Tensor clean-ups | |
https://github.com/pytorch/pytorch/pull/5453 Added 3d grid sampler (for volumetric transformer networks) | |
https://github.com/pytorch/pytorch/pull/5447 implement CosineEmbeddingLoss as a native function and add reduce arg | |
https://github.com/pytorch/pytorch/pull/5444 Add dtype to torch.Tensor constructors and accept them in set_default_tensor_type | |
https://github.com/pytorch/pytorch/pull/5441 Support type conversion via type(dtype). | |
https://github.com/pytorch/pytorch/pull/5433 speed up CPU EmbeddingBag (indexSelectAdd op) | |
https://github.com/pytorch/pytorch/pull/5423 add dependency for OS X | |
https://github.com/pytorch/pytorch/pull/5419 Introduce torch.tensor (was torch.autograd.variable). | |
https://github.com/pytorch/pytorch/pull/5418 Avoid extra cpu->cpu copy in dispatch_type. | |
https://github.com/pytorch/pytorch/pull/5415 Set python random seed in workers | |
https://github.com/pytorch/pytorch/pull/5414 Empty sparse tensor copy reverses dimI, dimV. | |
https://github.com/pytorch/pytorch/pull/5413 Remove two uses of the old Tensor class | |
https://github.com/pytorch/pytorch/pull/5408 Expose gradients w.r.t. input & weight for conv1d, conv2d, conv3d in Python | |
https://github.com/pytorch/pytorch/pull/5393 [ready] Add logdet and slogdet | |
https://github.com/pytorch/pytorch/pull/5392 Mark functions that shouldn't end up in torch. as method-only. | |
https://github.com/pytorch/pytorch/pull/5390 Don't python bind 'tensor' or 'sparse_coo_tensor'. | |
https://github.com/pytorch/pytorch/pull/5384 Add support for device python arguments with constructors. | |
https://github.com/pytorch/pytorch/pull/5380 Ignore FileNotFoundError when shutting down in data_queue.get | |
https://github.com/pytorch/pytorch/pull/5378 Add faq on cuda memory management and dataloder worker seeds | |
https://github.com/pytorch/pytorch/pull/5371 Fix undefined refence to convolve_5x5_sse on SSE4.1 CPUs | |
https://github.com/pytorch/pytorch/pull/5366 Fix wrong argument name | |
https://github.com/pytorch/pytorch/pull/5361 Handle copying empty sparse tensors to/from CPU, GPU. | |
https://github.com/pytorch/pytorch/pull/5352 Add a disabled-configs.txt interlock. | |
https://github.com/pytorch/pytorch/pull/5350 Embedding.from_pretrained factory | |
https://github.com/pytorch/pytorch/pull/5348 Added torch.distributed.launch module for easier multi-proc/node distributed job launching | |
https://github.com/pytorch/pytorch/pull/5346 Implement MarginRankingLoss as native function and add reduce=True arg to it | |
https://github.com/pytorch/pytorch/pull/5343 Support dtypes in legacy new constructors. | |
https://github.com/pytorch/pytorch/pull/5335 Improve sparse variable printing. | |
https://github.com/pytorch/pytorch/pull/5334 Fix the bug of only processing one attribute | |
https://github.com/pytorch/pytorch/pull/5328 implement double backwards for MaxPool3d | |
https://github.com/pytorch/pytorch/pull/5324 Improve CUDA extension support | |
https://github.com/pytorch/pytorch/pull/5321 Various dtype improvements. | |
https://github.com/pytorch/pytorch/pull/5320 Make _like dtype arguments keyword only. | |
https://github.com/pytorch/pytorch/pull/5318 Remove _out variants of like functions. | |
https://github.com/pytorch/pytorch/pull/5317 add guards when source of container cannot be retrieved | |
https://github.com/pytorch/pytorch/pull/5312 Change output_declarations in function_wrapper.py to be a NamedTuple | |
https://github.com/pytorch/pytorch/pull/5300 Make ReduceLROnPlateau serializable. | |
https://github.com/pytorch/pytorch/pull/5299 Raise an error if target is out-of-bounds in ClassNLLCriterion | |
https://github.com/pytorch/pytorch/pull/5294 Configurable flushing denormal numbers on CPU | |
https://github.com/pytorch/pytorch/pull/5293 add control flow to interpreter | |
https://github.com/pytorch/pytorch/pull/5279 Speed-up nn.Linear for the 3d input case | |
https://github.com/pytorch/pytorch/pull/5277 Use TORCH_EXTENSION_NAME macro to avoid mismatched module/extension name | |
https://github.com/pytorch/pytorch/pull/5276 Fix __syncthread in SpatialClassNLLCriterion.cu | |
https://github.com/pytorch/pytorch/pull/5275 Fixes UB when using legacy python functions and mark_non_differentiable | |
https://github.com/pytorch/pytorch/pull/5273 Implement torch.isnan | |
https://github.com/pytorch/pytorch/pull/5255 check attribute existence in torch.legacy.nn.SpatialFullConvolution | |
https://github.com/pytorch/pytorch/pull/5251 Add a FAQ, for now just 'out of memory' advice. | |
https://github.com/pytorch/pytorch/pull/5250 Revert "Remove unnecessary __syncthreads before reduceBlock" | |
https://github.com/pytorch/pytorch/pull/5246 Fix call to assertNotEqual | |
https://github.com/pytorch/pytorch/pull/5245 Add numpy-style dtypes to Variable factories. | |
https://github.com/pytorch/pytorch/pull/5242 Remove unnecessary __syncthreads before reduceBlock | |
https://github.com/pytorch/pytorch/pull/5238 CUDA multinomial fix | |
https://github.com/pytorch/pytorch/pull/5233 Include __delitem__ for Sequential | |
https://github.com/pytorch/pytorch/pull/5230 Check GCC version on Ubuntu | |
https://github.com/pytorch/pytorch/pull/5228 Fix for PRId64 | |
https://github.com/pytorch/pytorch/pull/5225 Merge Variable and Tensor classes | |
https://github.com/pytorch/pytorch/pull/5221 Improve Function interface | |
https://github.com/pytorch/pytorch/pull/5216 Implement torch.util.bottleneck | |
https://github.com/pytorch/pytorch/pull/5215 Fix GraphExecutor and add more AD formulas | |
https://github.com/pytorch/pytorch/pull/5204 Implement symbolic for slice operation | |
https://github.com/pytorch/pytorch/pull/5203 Additional sparse Variable fixes | |
https://github.com/pytorch/pytorch/pull/5202 Modest refactor of .jenkins scripts | |
https://github.com/pytorch/pytorch/pull/5196 Add missing async deprecated wrapper to tools/autograd/templates/pyth… | |
https://github.com/pytorch/pytorch/pull/5194 CUDA 9 | |
https://github.com/pytorch/pytorch/pull/5184 Make Python autograd functions respect grad mode | |
https://github.com/pytorch/pytorch/pull/5165 make explicit about keyword-onlyness of `out` | |
https://github.com/pytorch/pytorch/pull/5162 Replace NULL with nullptr in autograd | |
https://github.com/pytorch/pytorch/pull/5160 Dropout | |
https://github.com/pytorch/pytorch/pull/5158 Enable scalars. | |
https://github.com/pytorch/pytorch/pull/5150 add reduce=True arg to MultiMarginLoss | |
https://github.com/pytorch/pytorch/pull/5144 Add a new_tensor instance method to Variable that takes only data. | |
https://github.com/pytorch/pytorch/pull/5143 Adjust stft result comparison precision to 7e-6 | |
https://github.com/pytorch/pytorch/pull/5142 Allow zero-dim tensors to be bound to at::Scalar | |
https://github.com/pytorch/pytorch/pull/5130 add reduce=True arg to HingeEmbeddingLoss | |
https://github.com/pytorch/pytorch/pull/5128 Fix ffi cdata for Variables. | |
https://github.com/pytorch/pytorch/pull/5127 Improve Variable interface | |
https://github.com/pytorch/pytorch/pull/5125 warn that CUDA capability 3.0 and 5.0 is no longer supported | |
https://github.com/pytorch/pytorch/pull/5119 Print Parameters like Variables (i.e. print scalars correctly). | |
https://github.com/pytorch/pytorch/pull/5117 Implement Variable.new(...) overloads for sparse tensors | |
https://github.com/pytorch/pytorch/pull/5115 Ensure Tensors have storages in resizeNd | |
https://github.com/pytorch/pytorch/pull/5114 Allow and warn when indexing a zero-dim Variable | |
https://github.com/pytorch/pytorch/pull/5113 Support calling pack_padded_sequence with a Variable lengths | |
https://github.com/pytorch/pytorch/pull/5097 add reduce=True arg to MultiLabelSoftMarginLoss | |
https://github.com/pytorch/pytorch/pull/5093 Fix CPU torch.multinomial with noncontiguous prob tensor | |
https://github.com/pytorch/pytorch/pull/5090 Add Variable.item() | |
https://github.com/pytorch/pytorch/pull/5089 Check that indices and values are on the same device | |
https://github.com/pytorch/pytorch/pull/5088 Add deprecated add_out overload | |
https://github.com/pytorch/pytorch/pull/5085 Check shape instead of number of elements for some losses | |
https://github.com/pytorch/pytorch/pull/5080 Implement hinge_embedding_loss as a native function. | |
https://github.com/pytorch/pytorch/pull/5077 Restore torch.mm behavior for sparse variables | |
https://github.com/pytorch/pytorch/pull/5071 add reduce=True arg to SoftMarginLoss | |
https://github.com/pytorch/pytorch/pull/5064 DDP: 10% of NCCL backend perf improvements with mixed-prec support | |
https://github.com/pytorch/pytorch/pull/5054 Use fast integer division algorithm to avoid division ops inside kernels. | |
https://github.com/pytorch/pytorch/pull/5036 Use blocks machinery to simplify bookkeeping in autodiff | |
https://github.com/pytorch/pytorch/pull/5035 Bring back Tensor::data<__half>() and remove Tensor::data() template | |
https://github.com/pytorch/pytorch/pull/5030 Replace edge_type with Edge and create Variable::gradient_edge() | |
https://github.com/pytorch/pytorch/pull/5019 Add .clang-format | |
https://github.com/pytorch/pytorch/pull/5018 Remove FunctionFlags | |
https://github.com/pytorch/pytorch/pull/5017 Expose sparse variable sspaddmm | |
https://github.com/pytorch/pytorch/pull/5016 Expose sparse variable addmm, addmm_ | |
https://github.com/pytorch/pytorch/pull/5010 add AVX2 implementation for sigmoid function | |
https://github.com/pytorch/pytorch/pull/5003 Don't allow scalars where vectors are required in mv, addmv, ger, addr. | |
https://github.com/pytorch/pytorch/pull/5000 Reverts force_gpu_half changes from #3660 | |
https://github.com/pytorch/pytorch/pull/4999 Replace async with non_blocking for Python 3.7 | |
https://github.com/pytorch/pytorch/pull/4992 Make cat/cat_out native function that rejects scalar inputs. | |
https://github.com/pytorch/pytorch/pull/4991 Only check that arguments are Variables in VariableType | |
https://github.com/pytorch/pytorch/pull/4982 Initial GraphExecutor Implementation. | |
https://github.com/pytorch/pytorch/pull/4980 Revert "Only check that arguments are Variables in VariableType (#4943)" | |
https://github.com/pytorch/pytorch/pull/4978 DDP: coalescing many little broadcasts to improve performance | |
https://github.com/pytorch/pytorch/pull/4977 Support stack_out as a native function. | |
https://github.com/pytorch/pytorch/pull/4975 Remove volatile section from autograd notes | |
https://github.com/pytorch/pytorch/pull/4972 Don't allow scalars in torch.dot for Variables. | |
https://github.com/pytorch/pytorch/pull/4967 Revert "torch.set_num_threads sets MKL option too" | |
https://github.com/pytorch/pytorch/pull/4966 Use TypeError in PythonArgParser | |
https://github.com/pytorch/pytorch/pull/4964 Operate on Variables in torch.nn.init | |
https://github.com/pytorch/pytorch/pull/4951 Properly fill in make_non_contiguous data for sizes that can't be mad… | |
https://github.com/pytorch/pytorch/pull/4949 torch.set_num_threads sets MKL option too | |
https://github.com/pytorch/pytorch/pull/4947 Initial type hints for function_wrapper | |
https://github.com/pytorch/pytorch/pull/4943 Only check that arguments are Variables in VariableType | |
https://github.com/pytorch/pytorch/pull/4933 fix copy/paste error in debug message in rnn.py | |
https://github.com/pytorch/pytorch/pull/4931 Add assignment support for Sequential | |
https://github.com/pytorch/pytorch/pull/4924 add reduce=True argument to MultiLabelMarginLoss | |
https://github.com/pytorch/pytorch/pull/4922 [ready] Layer Normalization | |
https://github.com/pytorch/pytorch/pull/4921 Release NCCL distributed backend from experimental | |
https://github.com/pytorch/pytorch/pull/4919 Don't use Variable vs. Tensor type-checks for requires_grad logic | |
https://github.com/pytorch/pytorch/pull/4911 Fixes to native_functions.yaml to match existing Tensor behavior | |
https://github.com/pytorch/pytorch/pull/4910 Add Linux Jenkins scripts to PyTorch repo. | |
https://github.com/pytorch/pytorch/pull/4909 Fix condition in inferUnsqueezeGeometry | |
https://github.com/pytorch/pytorch/pull/4892 Fix visibility of AT_CUDA_ENABLED | |
https://github.com/pytorch/pytorch/pull/4891 Added mixed-precision support in distributed training | |
https://github.com/pytorch/pytorch/pull/4889 Fix some scalar issues with autograd. | |
https://github.com/pytorch/pytorch/pull/4886 Deprecate out-of-place resize and resize_as on Variables. | |
https://github.com/pytorch/pytorch/pull/4883 Fix torch.pstrf on Variables | |
https://github.com/pytorch/pytorch/pull/4882 Implement sparse tensor and variable norm(value) | |
https://github.com/pytorch/pytorch/pull/4878 Make TensorDescriptor call more portable | |
https://github.com/pytorch/pytorch/pull/4876 Addition of ExponentialFamily | |
https://github.com/pytorch/pytorch/pull/4873 Added Poisson self KL + Bernoulli/Poisson KL | |
https://github.com/pytorch/pytorch/pull/4870 Slightly improve DistributedDataParallel (single-GPU binding) multi-process distributed training performance | |
https://github.com/pytorch/pytorch/pull/4861 Cuda 9.1 is cuda version 9010 not 9100 | |
https://github.com/pytorch/pytorch/pull/4854 Fix deepcopy with scalars. | |
https://github.com/pytorch/pytorch/pull/4853 Various indexing fixes around scalars. | |
https://github.com/pytorch/pytorch/pull/4847 fix binary version scheme to be PEP compliant | |
https://github.com/pytorch/pytorch/pull/4826 Removed redundant import re | |
https://github.com/pytorch/pytorch/pull/4824 parallelize vol2col and col2vol of Conv3D with CPU backend | |
https://github.com/pytorch/pytorch/pull/4812 Fix output_nr not incremented correctly | |
https://github.com/pytorch/pytorch/pull/4811 Update pybind11 | |
https://github.com/pytorch/pytorch/pull/4810 Create issue template with guidelines for issue submissions | |
https://github.com/pytorch/pytorch/pull/4807 Fix #4480 by tracing inputs before running function. | |
https://github.com/pytorch/pytorch/pull/4803 More efficient squeeze() backward in edge case | |
https://github.com/pytorch/pytorch/pull/4799 Add symbolic_override_first_arg_based | |
https://github.com/pytorch/pytorch/pull/4795 Enabling Infiniband support for Gloo data channel with auto IB detection | |
https://github.com/pytorch/pytorch/pull/4791 Favor Variables over Tensors for scalar constructors in torch.distrib… | |
https://github.com/pytorch/pytorch/pull/4788 Initialize cuda before setting cuda tensor types as default | |
https://github.com/pytorch/pytorch/pull/4787 Restore cuda variable.bernoulli() | |
https://github.com/pytorch/pytorch/pull/4786 Use Variable instead of Tensor in Function.forward | |
https://github.com/pytorch/pytorch/pull/4785 Restore sparse variable _dimI() and _dimV() | |
https://github.com/pytorch/pytorch/pull/4783 Fix squeeze() backward in edge case | |
https://github.com/pytorch/pytorch/pull/4780 Restore more sparse variable methods | |
https://github.com/pytorch/pytorch/pull/4779 Restore sparse variable transpose_() and t_() | |
https://github.com/pytorch/pytorch/pull/4775 New index computation strategy in Functions.cpp (Tensor/TensorList) | |
https://github.com/pytorch/pytorch/pull/4772 Use variadic templates instead of initializer lists and overloads (ROUND 2) | |
https://github.com/pytorch/pytorch/pull/4771 Implement Transforms | |
https://github.com/pytorch/pytorch/pull/4766 Removing NCCL clear_group_cache workaround with one more check in new_group | |
https://github.com/pytorch/pytorch/pull/4760 Temporary fix for Embedding fp16 | |
https://github.com/pytorch/pytorch/pull/4754 Allow assertEqual checks with mixed Tensors, Variables, numbers. | |
https://github.com/pytorch/pytorch/pull/4753 Implement a (data-only) Variable factory | |
https://github.com/pytorch/pytorch/pull/4748 Add kwarg-only 'requires_grad' parameter to Variable factories. | |
https://github.com/pytorch/pytorch/pull/4746 Heuristic-based autograd execution order | |
https://github.com/pytorch/pytorch/pull/4745 Fix resize_as_ on Variables containing SparseTensors | |
https://github.com/pytorch/pytorch/pull/4744 Ensure that Tensors always have Storages | |
https://github.com/pytorch/pytorch/pull/4735 Legacy Padding: correct output size with nInputDim | |
https://github.com/pytorch/pytorch/pull/4734 Fused fixes | |
https://github.com/pytorch/pytorch/pull/4730 Allow Python Variables to be bound to at::Tensor in pybind11 converter | |
https://github.com/pytorch/pytorch/pull/4728 Implement record_stream on Variable | |
https://github.com/pytorch/pytorch/pull/4725 Allow Variables in calls to NCCL bindings. | |
https://github.com/pytorch/pytorch/pull/4724 Allow Variables in calls to type2backend | |
https://github.com/pytorch/pytorch/pull/4721 Define CHECK in torch/csrc/cuda/nccl.h | |
https://github.com/pytorch/pytorch/pull/4717 Make Symbol a true struct | |
https://github.com/pytorch/pytorch/pull/4712 Move repeat to torch/_utils.py | |
https://github.com/pytorch/pytorch/pull/4711 Replace PowConstant | |
https://github.com/pytorch/pytorch/pull/4707 Remove setting coalesce to 0 in sparse transpose_ | |
https://github.com/pytorch/pytorch/pull/4705 adds reduce argument to BCEWithLogitsLoss interface | |
https://github.com/pytorch/pytorch/pull/4696 Add proper scalar checks to functions bound by nn.yaml. | |
https://github.com/pytorch/pytorch/pull/4691 Ensure lazy evaluation for probs and logits | |
https://github.com/pytorch/pytorch/pull/4690 Improve the engine support for functional graph execution | |
https://github.com/pytorch/pytorch/pull/4687 Restores some sparse variable methods | |
https://github.com/pytorch/pytorch/pull/4686 Fix embedding with sparse=True | |
https://github.com/pytorch/pytorch/pull/4683 Add print support for sparse variables | |
https://github.com/pytorch/pytorch/pull/4667 Local Response Normalization | |
https://github.com/pytorch/pytorch/pull/4663 replace full stop by comma | |
https://github.com/pytorch/pytorch/pull/4654 NLLLoss: current code works with dim = 3, so I added it to dim checks | |
https://github.com/pytorch/pytorch/pull/4643 Dataloader issues | |
https://github.com/pytorch/pytorch/pull/4640 Fixed non-determinate preprocessing on DataLoader | |
https://github.com/pytorch/pytorch/pull/4618 Adding is process_group initialized support | |
https://github.com/pytorch/pytorch/pull/4617 Add missing torch declarations to derivatives.yaml. | |
https://github.com/pytorch/pytorch/pull/4615 Implement MM fusion (MM with add reduction tree) | |
https://github.com/pytorch/pytorch/pull/4614 Allow broadcasting of value x params in Categorical | |
https://github.com/pytorch/pytorch/pull/4611 Fused fp16 lstm backward math fix. | |
https://github.com/pytorch/pytorch/pull/4604 Fix errors in travis config | |
https://github.com/pytorch/pytorch/pull/4598 spelling fix | |
https://github.com/pytorch/pytorch/pull/4586 Introduce a (non-public) autograd scalar method and improve printing | |
https://github.com/pytorch/pytorch/pull/4566 Remove accumulate_grad version_counter check. | |
https://github.com/pytorch/pytorch/pull/4565 Bind functions with out= arguments in VariableType | |
https://github.com/pytorch/pytorch/pull/4558 Further fix to tracing scope | |
https://github.com/pytorch/pytorch/pull/4545 Fix a missing AutoGPU | |
https://github.com/pytorch/pytorch/pull/4543 Ensure convolution weights are contiguous, fixes #4500 | |
https://github.com/pytorch/pytorch/pull/4541 Link NNPACK even when CUDA is not available. | |
https://github.com/pytorch/pytorch/pull/4538 Fix torch.diag backward with non-square matrix | |
https://github.com/pytorch/pytorch/pull/4529 Fix return type for Bernoulli enumerate_support | |
https://github.com/pytorch/pytorch/pull/4527 Fix the inconsistency of `polygamma` on Tensor and Variable, for issue #4466 | |
https://github.com/pytorch/pytorch/pull/4521 Fix abs specialization for `uint8_t` type. | |
https://github.com/pytorch/pytorch/pull/4512 Implement backward pass for pack_padded_sequence | |
https://github.com/pytorch/pytorch/pull/4511 Methods for checking CUDA memory usage | |
https://github.com/pytorch/pytorch/pull/4504 Fix multi-gpu fuser bug | |
https://github.com/pytorch/pytorch/pull/4496 Padding_idx in Embedding supports negative indexing | |
https://github.com/pytorch/pytorch/pull/4491 Add Slicing capabilities for Sequential, ModuleList and ParameterList | |
https://github.com/pytorch/pytorch/pull/4487 Refactor gen_variable_type | |
https://github.com/pytorch/pytorch/pull/4486 Fix handling of empty indices in CUDA Tensor.put_ | |
https://github.com/pytorch/pytorch/pull/4473 Fix template type for std::array size | |
https://github.com/pytorch/pytorch/pull/4465 Rename native/TensorGeometry to native/TensorShape since there is alr… | |
https://github.com/pytorch/pytorch/pull/4462 Fix some scalar checks | |
https://github.com/pytorch/pytorch/pull/4460 Fix CUDA double backwards | |
https://github.com/pytorch/pytorch/pull/4448 Supporting logits as parameters in Bernoulli and Categorical | |
https://github.com/pytorch/pytorch/pull/4445 Remove assign_(Scalar). | |
https://github.com/pytorch/pytorch/pull/4430 [WIP] Added method cuda to PackedSequence. | |
https://github.com/pytorch/pytorch/pull/4421 Improve precision of dirichlet_grad() approximation | |
https://github.com/pytorch/pytorch/pull/4415 Modify derivatives for efficiency and change `destination` to `result` for consistency | |
https://github.com/pytorch/pytorch/pull/4414 fix distutils error for Unix case | |
https://github.com/pytorch/pytorch/pull/4409 RNN support has been implemented | |
https://github.com/pytorch/pytorch/pull/4401 Support NO_NNPACK environment variable | |
https://github.com/pytorch/pytorch/pull/4399 Add low-precision digamma() and polygamma() functions | |
https://github.com/pytorch/pytorch/pull/4398 Split cuda native functions into components; fix mistake with conv_tb… | |
https://github.com/pytorch/pytorch/pull/4395 Don't special case NN functions in gen_variable_type.py | |
https://github.com/pytorch/pytorch/pull/4393 Add Tensor::print() for gdb use. | |
https://github.com/pytorch/pytorch/pull/4389 Fix type signature of in-place NN functions | |
https://github.com/pytorch/pytorch/pull/4385 Update derivative of expm1 | |
https://github.com/pytorch/pytorch/pull/4383 removes duplicate variable reference crash from pad_sequences | |
https://github.com/pytorch/pytorch/pull/4371 Adding description for Optimizers | |
https://github.com/pytorch/pytorch/pull/4370 Split off load_derivatives and gen_autograd_functions from gen_variable_type | |
https://github.com/pytorch/pytorch/pull/4369 Improve precision of standard_gamma_grad() | |
https://github.com/pytorch/pytorch/pull/4367 Fix creating tensors with np.longlong array | |
https://github.com/pytorch/pytorch/pull/4366 VariableType clean-up | |
https://github.com/pytorch/pytorch/pull/4350 Adding torch.expm1() and its inplace function | |
https://github.com/pytorch/pytorch/pull/4339 fix NameError in torch/nn/modules/rnn.py | |
https://github.com/pytorch/pytorch/pull/4331 Make expect file directory search more robust. | |
https://github.com/pytorch/pytorch/pull/4326 Reorder native_functions.yaml by alphabetical order. | |
https://github.com/pytorch/pytorch/pull/4325 Split NativeFunctions.cpp into functional components. | |
https://github.com/pytorch/pytorch/pull/4318 Fix btrifact for variables | |
https://github.com/pytorch/pytorch/pull/4314 fix AMSGrad for SparseAdam | |
https://github.com/pytorch/pytorch/pull/4312 Vectorize normal_ | |
https://github.com/pytorch/pytorch/pull/4308 Make Variable.is_sparse an attribute | |
https://github.com/pytorch/pytorch/pull/4307 Fix default device for Variable.new() | |
https://github.com/pytorch/pytorch/pull/4306 Throw exception in checkBackend, improve standard_gamma_grad error me… | |
https://github.com/pytorch/pytorch/pull/4303 Add factory Type::sparse_coo_tensor(indices, values) | |
https://github.com/pytorch/pytorch/pull/4301 Use `where` rather than `_s_where` in `_s_where` backwards so `where`… | |
https://github.com/pytorch/pytorch/pull/4298 Enable functional torch.where. | |
https://github.com/pytorch/pytorch/pull/4267 Remove unused thnn/loss.py | |
https://github.com/pytorch/pytorch/pull/4262 Ensure gamma samples are positive | |
https://github.com/pytorch/pytorch/pull/4259 Implement torch.where(condition, x, y) CPU Variable. | |
https://github.com/pytorch/pytorch/pull/4255 Remove template_scalar, implement is_signed using dispatch. | |
https://github.com/pytorch/pytorch/pull/4249 Don't mark index as traceable, and other improvements | |
https://github.com/pytorch/pytorch/pull/4244 Further relax VariableFlags, ... and fix bugs | |
https://github.com/pytorch/pytorch/pull/4242 Translate None to zeros for old-style autograd functions | |
https://github.com/pytorch/pytorch/pull/4238 Deprecate nn.NLLLoss2d | |
https://github.com/pytorch/pytorch/pull/4231 Add reduce arg to BCELoss | |
https://github.com/pytorch/pytorch/pull/4221 Revert "Add reduce arg to BCELoss" | |
https://github.com/pytorch/pytorch/pull/4209 Make import work even if 'tools' is available in Python path | |
https://github.com/pytorch/pytorch/pull/4200 Expose node scopeName to python | |
https://github.com/pytorch/pytorch/pull/4191 Relax verify of VariableFlags | |
https://github.com/pytorch/pytorch/pull/4185 Fix another leak in pybind11 code. | |
https://github.com/pytorch/pytorch/pull/4184 Preprocess both inplace and non-inplace nn functions | |
https://github.com/pytorch/pytorch/pull/4183 Allowing usage of GPU Direct within PyTorch for the Broadcast operation | |
https://github.com/pytorch/pytorch/pull/4182 Fix a bug where from_dlpack fails if cuda is not initialized. | |
https://github.com/pytorch/pytorch/pull/4174 Rearrange dimensions for pointwise operations for better performance. | |
https://github.com/pytorch/pytorch/pull/4163 support RNN export | |
https://github.com/pytorch/pytorch/pull/4158 Re-initialize autograd engine in child processes | |
https://github.com/pytorch/pytorch/pull/4143 Improve symbolic hack a bit | |
https://github.com/pytorch/pytorch/pull/4142 add reparameterization, combine sample and sample_n | |
https://github.com/pytorch/pytorch/pull/4135 Add an option to suppress download progress | |
https://github.com/pytorch/pytorch/pull/4124 Implement Variable._sparse_mask | |
https://github.com/pytorch/pytorch/pull/4113 Add size checks for sparse tensor constructor | |
https://github.com/pytorch/pytorch/pull/4096 Refactor generation of NN derivatives | |
https://github.com/pytorch/pytorch/pull/4095 Add python only default init expression; Implement stft, hann/hamming/bartlett window. | |
https://github.com/pytorch/pytorch/pull/4094 Implement pin_memory() as a NativeFunction | |
https://github.com/pytorch/pytorch/pull/4091 Enable half communication for distributed | |
https://github.com/pytorch/pytorch/pull/4089 Allow specification of bool defaults in native functions. | |
https://github.com/pytorch/pytorch/pull/4088 Expose resize_ and resize_as_ to Python | |
https://github.com/pytorch/pytorch/pull/4082 Implement Variable.__invert__ | |
https://github.com/pytorch/pytorch/pull/4080 Implement Variable.new | |
https://github.com/pytorch/pytorch/pull/4079 Add missing derivatives.yaml input | |
https://github.com/pytorch/pytorch/pull/4075 Implement neg for all types | |
https://github.com/pytorch/pytorch/pull/4074 Ensure RNNCell variants don't broadcast | |
https://github.com/pytorch/pytorch/pull/4063 Fix non-determinism in code generation scripts | |
https://github.com/pytorch/pytorch/pull/4062 Allow .view on noncontig tensors when certain conditions are met | |
https://github.com/pytorch/pytorch/pull/4061 Adding index_select to symbolic.py | |
https://github.com/pytorch/pytorch/pull/4059 throw new -> throw | |
https://github.com/pytorch/pytorch/pull/4057 Implement apply_, map_, and map2_ in Variable | |
https://github.com/pytorch/pytorch/pull/4045 Add Variable._cdata | |
https://github.com/pytorch/pytorch/pull/4044 handle requires_grad when creating buckets for distributed | |
https://github.com/pytorch/pytorch/pull/4043 Implement Variable.from_numpy | |
https://github.com/pytorch/pytorch/pull/4042 Enable OpenMP in fuser | |
https://github.com/pytorch/pytorch/pull/4038 Implement Variable.tolist() | |
https://github.com/pytorch/pytorch/pull/4036 Implement Variable.storage_type() | |
https://github.com/pytorch/pytorch/pull/4034 added AMSgrad optimizer to Adam and SparseAdam | |
https://github.com/pytorch/pytorch/pull/4026 Add debug and fix some bugs in CPU fuser | |
https://github.com/pytorch/pytorch/pull/4020 Delete _write_metadata and move _new_with_metadata_file into Python | |
https://github.com/pytorch/pytorch/pull/4016 Fix the symbolic for view | |
https://github.com/pytorch/pytorch/pull/4015 Add is_pinned, is_shared, and share_memory_ to Variable | |
https://github.com/pytorch/pytorch/pull/4011 Exclude attrs with invalid python variable names from __dir__ | |
https://github.com/pytorch/pytorch/pull/4006 Implement Variable.numpy() | |
https://github.com/pytorch/pytorch/pull/3978 Implement reparameterized gradient for Gamma sampler | |
https://github.com/pytorch/pytorch/pull/3972 Implements gradients calculation for trtrs | |
https://github.com/pytorch/pytorch/pull/3970 Replace Variable.volatile with torch.no_grad() | |
https://github.com/pytorch/pytorch/pull/3968 Add streams and comms as optional arguments | |
https://github.com/pytorch/pytorch/pull/3961 Add a CPU Fuser (single core) | |
https://github.com/pytorch/pytorch/pull/3952 Update CONTRIBUTING.md | |
https://github.com/pytorch/pytorch/pull/3943 Implement matmul as a native function; use it for Variable impl | |
https://github.com/pytorch/pytorch/pull/3932 use torch.cat in _flatten | |
https://github.com/pytorch/pytorch/pull/3926 Fix indexing with all zero ByteTensors | |
https://github.com/pytorch/pytorch/pull/3907 Remove Function::is_executable | |
https://github.com/pytorch/pytorch/pull/3903 Fix _analytical_jacobian to not require in-place grad accumulation | |
https://github.com/pytorch/pytorch/pull/3888 Clean up InputBuffer | |
https://github.com/pytorch/pytorch/pull/3881 Correct gradient of rosenbrock | |
https://github.com/pytorch/pytorch/pull/3875 Pad sequences and Pack sequences | |
https://github.com/pytorch/pytorch/pull/3866 Add interpreter support for Handles/PythonOp/CppOp | |
https://github.com/pytorch/pytorch/pull/3862 Set seed at top-level of common.py | |
https://github.com/pytorch/pytorch/pull/3856 Delete unused autograd functions | |
https://github.com/pytorch/pytorch/pull/3846 Use warp shuffles in cuda varInnermostDim | |
https://github.com/pytorch/pytorch/pull/3843 Fix CharType min and max values | |
https://github.com/pytorch/pytorch/pull/3838 Implement is_sparse/is_distributed as native functions, have cpu() not change density | |
https://github.com/pytorch/pytorch/pull/3837 - added size_splits to functional | |
https://github.com/pytorch/pytorch/pull/3831 Fix errors in previous DataChannelMPI refactor | |
https://github.com/pytorch/pytorch/pull/3820 Make integer parameters and buffers immune to float(), double() and half() | |
https://github.com/pytorch/pytorch/pull/3817 Improve DataChannelMPI | |
https://github.com/pytorch/pytorch/pull/3816 Add determinant function on variable; Add backward on svd | |
https://github.com/pytorch/pytorch/pull/3795 Reflect renaming of OS X to macOS | |
https://github.com/pytorch/pytorch/pull/3771 fix elapsed_us spelling | |
https://github.com/pytorch/pytorch/pull/3765 Implement Variable.storage() | |
https://github.com/pytorch/pytorch/pull/3754 CUDA mode profiler fixes | |
https://github.com/pytorch/pytorch/pull/3750 Add Tensor.slice() | |
https://github.com/pytorch/pytorch/pull/3734 Add cudaEvent support to the profiler | |
https://github.com/pytorch/pytorch/pull/3733 Fix symbolic for Embedding and Upsampling and improve error messages | |
https://github.com/pytorch/pytorch/pull/3727 Record stack traces for CppOp's | |
https://github.com/pytorch/pytorch/pull/3707 Implement VariableType::alias | |
https://github.com/pytorch/pytorch/pull/3706 Implement toBackend and toScalarType on VariableType | |
https://github.com/pytorch/pytorch/pull/3700 fix half uniform for cuda 7.5 | |
https://github.com/pytorch/pytorch/pull/3692 set CC and CXX only when it's empty | |
https://github.com/pytorch/pytorch/pull/3683 Split off in-place NN functions | |
https://github.com/pytorch/pytorch/pull/3681 Implement bmm symbolic | |
https://github.com/pytorch/pytorch/pull/3680 Propagate is_volatile to the base when performing in-place ops on views | |
https://github.com/pytorch/pytorch/pull/3679 Fix a reference cycle when in-place ops on views save the output | |
https://github.com/pytorch/pytorch/pull/3676 Move detach to variable | |
https://github.com/pytorch/pytorch/pull/3672 NativeFunctions: support backend-specific dispatch and SpatialRoIPooling | |
https://github.com/pytorch/pytorch/pull/3658 Cast tensors when loading optimizer state dicts | |
https://github.com/pytorch/pytorch/pull/3656 Solved boolean ambiguity for variables and tensors which contain one value. | |
https://github.com/pytorch/pytorch/pull/3629 fix for compilation problems with PRId64 format specifier | |
https://github.com/pytorch/pytorch/pull/3616 Allow 1->N broadcasts at the beginning and end to be fused | |
https://github.com/pytorch/pytorch/pull/3609 Ensure that Variables are at least one-dim in VariableType | |
https://github.com/pytorch/pytorch/pull/3602 Improvements around torch.cat on empty Variables | |
https://github.com/pytorch/pytorch/pull/3549 Previous PyTorch version info | |
https://github.com/pytorch/pytorch/pull/3532 Add reduce arg to BCELoss | |
https://github.com/pytorch/pytorch/pull/3526 Reuse intermediate results over multiple backwards grad_inputs | |
https://github.com/pytorch/pytorch/pull/3520 adds flag __CUDA_NO_HALF_OPERATORS__ | |
https://github.com/pytorch/pytorch/pull/3509 Optimizer: optimize transposes in variety of circumstances | |
https://github.com/pytorch/pytorch/pull/3505 improvements to ModuleList and ParameterList classes | |
https://github.com/pytorch/pytorch/pull/3501 implement `__dir__`for Variable | |
https://github.com/pytorch/pytorch/pull/3500 ignore digit in container's `__dir__` | |
https://github.com/pytorch/pytorch/pull/3480 Two miscellaneous fixes | |
https://github.com/pytorch/pytorch/pull/3465 Generate native functions with const ref Tensor arguments. | |
https://github.com/pytorch/pytorch/pull/3444 added missing arg and improved example clarity | |
https://github.com/pytorch/pytorch/pull/3435 Implemented NCCL Distributed Backend for PyTorch with new dist APIs | |
https://github.com/pytorch/pytorch/pull/3429 Allow empty index tensor for index_select | |
https://github.com/pytorch/pytorch/pull/3422 Register VariableType calls in autograd profiler | |
https://github.com/pytorch/pytorch/pull/3411 Clear out eigenvector tensor when eigenvector=F for symeig | |
https://github.com/pytorch/pytorch/pull/3409 Add Tensor Core ops to RNNs for Volta | |
https://github.com/pytorch/pytorch/pull/3408 lazy-load nvrtc and libcuda | |
https://github.com/pytorch/pytorch/pull/3384 Allow in-place operations on views | |
https://github.com/pytorch/pytorch/pull/3382 Implement reduce keyword for SmoothL1Loss | |
https://github.com/pytorch/pytorch/pull/3371 Pretty names: support names set via export or Variable constructor | |
https://github.com/pytorch/pytorch/pull/3370 Follow up #3211 (sparse broadcast_coalesced, reduce_add_coalesced) | |
https://github.com/pytorch/pytorch/pull/3366 Add reduce keyword to L1Loss | |
https://github.com/pytorch/pytorch/pull/3352 fix some more mathjax | |
https://github.com/pytorch/pytorch/pull/3341 add gumbel_softmax, based on Eric Jang's implementation | |
https://github.com/pytorch/pytorch/pull/3336 Prevent numerical issues with poisson_nll_loss when log_input=False | |
https://github.com/pytorch/pytorch/pull/3263 Add torch.take and Tensor.put_ | |
https://github.com/pytorch/pytorch/pull/3127 adaptive pooling supports only specifying size in certain dimension | |
https://github.com/pytorch/pytorch/pull/3016 Introduce scopes during tracing | |
https://github.com/pytorch/pytorch/pull/2953 WIP: add numpy() and from_numpy() to HalfTensor | |
https://github.com/pytorch/pytorch/pull/2764 [Done]parallelize elementwise operation with openmp |
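Several of the PRs listed above introduce the 0.4-era user-facing API (e.g. #5997/#5419 torch.tensor with type inference, #6588 Tensor.to(device), #6629 Module.to, #5555 set_grad_enabled, #5090 Variable.item()). The snippet below is a minimal illustrative sketch of those calls, assuming PyTorch >= 0.4; it is not taken from any of the linked PRs.

```python
import torch
import torch.nn as nn

# Pick a device; torch.device and string-style devices come from #5384 / #6283.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# torch.tensor with numpy-style dtype inference (#5997, #5419, #5245).
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])   # inferred as float32
idx = torch.tensor([0, 1])                   # inferred as int64

# Tensor.to and Module.to move data and parameters across devices (#6588, #6629).
x = x.to(device)
model = nn.Linear(2, 1).to(device)

# set_grad_enabled works both as a function and as a context manager (#5555).
with torch.set_grad_enabled(False):
    y = model(x)

# item() extracts a Python number from a 0-dim tensor (#5090).
print(y.sum().item())
```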