@vanbasten23
Created December 15, 2022 23:49
xw32, file=/workspaces/work/pytorch/torch/csrc/autograd/engine.cpp, line=809, function=validate_outputs: is_same_shape=0
xw32, file=/workspaces/work/pytorch/torch/csrc/autograd/input_metadata.h, line=111, function=is_expandable_to_shape: grad.is_nested()=0
xw32, file=/workspaces/work/pytorch/aten/src/ATen/ExpandUtils.h, line=504, function=is_expandable_to: ndim=2, target_dim=2
xw32, file=/workspaces/work/pytorch/aten/src/ATen/ExpandUtils.h, line=511, function=is_expandable_to: i=0, size=1, target=1
xw32, file=/workspaces/work/pytorch/c10/core/SymInt.cpp, line=99, function=operator==: is_symbolic()=0, sci.is_symbolic()=0
xw32, file=/workspaces/work/pytorch/aten/src/ATen/ExpandUtils.h, line=516, function=is_expandable_to: succeeded for i=0
xw32, file=/workspaces/work/pytorch/aten/src/ATen/ExpandUtils.h, line=511, function=is_expandable_to: i=1, size=<=80, target=80
xw32, file=/workspaces/work/pytorch/c10/core/SymInt.cpp, line=99, function=operator==: is_symbolic()=1, sci.is_symbolic()=0
xw32, file=torch_xla/csrc/tensor.cpp, line=665, function=eq:
xw32, file=torch_xla/csrc/tensor.cpp, line=757, function=bool_:
xw32, file=torch_xla/csrc/ops/dynamic_ir.cpp, line=113, function=getDynamicValue: dim_node_0->getDynamicValue()=79, dim_node_1->getDynamicValue()=80
xw32, file=/workspaces/work/pytorch/c10/core/SymInt.cpp, line=99, function=operator==: is_symbolic()=1, sci.is_symbolic()=0
xw32, file=torch_xla/csrc/tensor.cpp, line=665, function=eq:
xw32, file=torch_xla/csrc/tensor.cpp, line=757, function=bool_:
xw32, file=torch_xla/csrc/ops/dynamic_ir.cpp, line=113, function=getDynamicValue: dim_node_0->getDynamicValue()=79, dim_node_1->getDynamicValue()=1
xw32, file=/workspaces/work/pytorch/aten/src/ATen/ExpandUtils.h, line=513, function=is_expandable_to: returning false for i=1
xw32, file=/workspaces/work/pytorch/torch/csrc/autograd/engine.cpp, line=813, function=validate_outputs: metadata.is_expandable_to_shape(grad) evaluated to false
xw32, file=/workspaces/work/pytorch/torch/csrc/autograd/input_metadata.h, line=131, function=incompatible_shape_error_message: grad.is_nested()=0
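The trace above walks autograd's validate_outputs down into ExpandUtils.h's is_expandable_to: the trailing dim (i=0, size 1 vs target 1) matches, but the leading dim compares a symbolic <=80 (whose runtime value, per getDynamicValue, is 79) against the gradient's static 80, and since 79 equals neither 80 nor 1 the check returns false. The sketch below mirrors that logic in Python; it is a simplification of the real C++, and SymDim is a hypothetical stand-in for a c10::SymInt backed by an XLA size node.

```python
class SymDim:
    """Hypothetical stand-in for a c10::SymInt backed by an XLA size node."""

    def __init__(self, upper_bound, dynamic_value):
        self.upper_bound = upper_bound      # the "<=80" static bound
        self.dynamic_value = dynamic_value  # what getDynamicValue() returns (79 here)

    def __eq__(self, other):
        # Mirrors the symbolic branch of SymInt::operator==: compare runtime
        # values (dynamic_ir.cpp getDynamicValue), not static bounds.
        other_value = other.dynamic_value if isinstance(other, SymDim) else other
        return self.dynamic_value == other_value


def is_expandable_to(shape, desired):
    """Trailing-dimension broadcast check, after ExpandUtils.h is_expandable_to."""
    if len(shape) > len(desired):
        return False
    for i in range(len(shape)):
        size = shape[len(shape) - i - 1]
        target = desired[len(desired) - i - 1]
        if size != target and size != 1:
            return False  # the log's "returning false for i=1"
    return True


# Metadata shape [<=80, 1] (dynamic dim currently 79) vs. grad shape [80, 1]:
metadata_shape = [SymDim(upper_bound=80, dynamic_value=79), 1]
grad_shape = [80, 1]
print(is_expandable_to(metadata_shape, grad_shape))  # False -> the RuntimeError below
```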
Traceback (most recent call last):
  File "pytorch/xla/test/test_dynamic_shape_backward_models.py", line 77, in <module>
    train(model, loss_fn=criterion, optimizer=optimizer)
  File "pytorch/xla/test/test_dynamic_shape_backward_models.py", line 64, in train
    loss.backward()  # exception here.
  File "/home/ptxla/.local/lib/python3.8/site-packages/torch/_tensor.py", line 484, in backward
    torch.autograd.backward(
  File "/home/ptxla/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Function SigmoidBackward0 returned an invalid gradient at index 0 - got [80, 1] but expected shape compatible with [<=80, 1]
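For context, here is a hedged sketch of the kind of program that hits this failure. It is an assumption, not the contents of test_dynamic_shape_backward_models.py, and it presumes a torch_xla build with the experimental bounded dynamic-shape support for nonzero enabled: a tensor with runtime size 79 and upper bound 80 flows through torch.sigmoid, and on backward SigmoidBackward0 hands autograd a gradient with the static upper-bound shape [80, 1], which validate_outputs then rejects against the [<=80, 1] metadata.

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# 79 nonzero entries with an upper bound of 80, so nonzero's output
# gets the bounded dynamic shape [<=80, 1].
t = torch.zeros(80, device=device)
t[:79] = 1
idx = torch.nonzero(t)

x = idx.float().requires_grad_(True)  # leaf whose first dim is dynamic
out = torch.sigmoid(x)                # forward keeps shape [<=80, 1]
loss = out.sum()
loss.backward()  # SigmoidBackward0 returns a grad of static shape [80, 1];
                 # validate_outputs compares runtime 79 vs 80 and raises
```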