@vanbasten23
Created January 7, 2023 00:06
ERROR: test_backward_pass_with_dynamic_input_simple (__main__.TestDynamicShapeModels)
----------------------------------------------------------------------
Traceback (most recent call last):
File "pytorch/xla/test/test_dynamic_shape_models.py", line 110, in test_backward_pass_with_dynamic_input_simple
loss.backward()
File "/home/ptxla/.local/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
torch.autograd.backward(
File "/home/ptxla/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: /workspaces/work/pytorch/xla/torch_xla/csrc/helpers.cpp:273 : Check failed: out_size <= size_at_dyndim / input_shape.dimensions( input_dynamic_dimension) (2 vs. 1)
*** Begin stack trace ***
    tsl::CurrentStackTrace[abi:cxx11]()
    torch_xla::XlaHelpers::GetDynamicReshapeInfo(xla::Shape const&, absl::lts_20220623::Span<long const>)
    torch_xla::XlaHelpers::GetDynamicReshape(xla::Shape const&, absl::lts_20220623::Span<long const>)
    torch_xla::Permute::MakePermuteShape(xla::Shape const&, absl::lts_20220623::Span<long const>)
    torch_xla::ViewInfo::ViewInfo(torch_xla::ViewInfo::Type, xla::Shape, std::vector<long, std::allocator<long> >)
    torch_xla::tensor_methods::transpose(c10::intrusive_ptr<torch_xla::XLATensor, c10::detail::intrusive_target_default_null_type<torch_xla::XLATensor> > const&, long, long)
    torch_xla::XLANativeFunctions::t(at::Tensor const&)
    at::_ops::t::redispatch(c10::DispatchKeySet, at::Tensor const&)
    at::_ops::t::redispatch(c10::DispatchKeySet, at::Tensor const&)
    at::_ops::t::call(at::Tensor const&)
    torch::autograd::generated::AddmmBackward0::apply(std::vector<at::Tensor, std::allocator<at::Tensor> >&&)
    torch::autograd::Engine::evaluate_function(std::shared_ptr<torch::autograd::GraphTask>&, torch::autograd::Node*, torch::autograd::InputBuffer&, std::shared_ptr<torch::autograd::ReadyQueue> const&)
    torch::autograd::Engine::thread_main(std::shared_ptr<torch::autograd::GraphTask> const&)
    torch::autograd::Engine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool)
    torch::autograd::python::PythonEngine::thread_init(int, std::shared_ptr<torch::autograd::ReadyQueue> const&, bool)
    clone
*** End stack trace ***
Unable to map dynamic dimension of shape f32[<=10,2]{1,0} to output sizes (2, 10)
----------------------------------------------------------------------
Ran 1 test in 0.483s
FAILED (errors=1)
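
For context, here is a minimal sketch of the pattern that appears to trigger this failure. This is not the actual body of test_backward_pass_with_dynamic_input_simple (which is not shown in this gist); it is a hypothetical repro, assuming dynamic-shape support for torch.nonzero is enabled in this PyTorch/XLA build. The stack trace shows AddmmBackward0 (the backward of a Linear layer) transposing an f32[<=10,2] tensor, which reaches torch_xla::XlaHelpers::GetDynamicReshapeInfo and fails the check at helpers.cpp:273 because the dynamic dimension cannot be mapped onto the transposed output sizes (2, 10).

```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# torch.nonzero is one of the ops that yields a bounded dynamic first
# dimension on XLA; with a length-10 input the result has shape [<=10, 1],
# matching the f32[<=10,2] in the error after widening to two columns.
# (Hypothetical repro; the expand step is an assumption.)
mask = torch.ones(10, device=device)
x = torch.nonzero(mask).float().expand(-1, 2)  # dynamic shape [<=10, 2]

# Linear's backward (AddmmBackward0) calls t() on the dynamically shaped
# input, which is where the GetDynamicReshapeInfo check fires.
model = nn.Linear(2, 1).to(device)
loss = model(x).sum()
loss.backward()  # RuntimeError: Unable to map dynamic dimension ...
```

The failure mode, in short: transpose is implemented as a permute/reshape view, and GetDynamicReshapeInfo only supports output shapes where the dynamic dimension maps cleanly onto a prefix of the output sizes, which a (2, 10) transpose of [<=10, 2] does not satisfy.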