
@jerryzh168
Created August 12, 2024 20:07
/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/ops.py:12: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
return torch.library.impl_abstract(f"{name}")(func)
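
The FutureWarning above is torchao's op registration still using the pre-2.4 name. A minimal sketch of the migration the warning asks for, against a hypothetical custom op (mylib::my_op is illustrative, not torchao's actual op):

import torch

# Define a hypothetical op so the fake-kernel registration has a target.
torch.library.define("mylib::my_op", "(Tensor x) -> Tensor")

# Old spelling (what torchao/ops.py:12 calls, triggering the warning):
#   torch.library.impl_abstract("mylib::my_op")(fake_fn)
# New spelling:
@torch.library.register_fake("mylib::my_op")
def _(x):
    # fake/meta kernel: only shapes, dtypes, and devices matter here
    return torch.empty_like(x)
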
W0812 13:06:21.489861 139713931614016 torch/_logging/_internal.py:416] Using TORCH_LOGS environment variable for log settings, ignoring call to set_logs
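
For reference, the verbose sections in this log map to named logging artifacts. The exact TORCH_LOGS value used for this run is not recorded here; a hedged guess at an equivalent torch._logging.set_logs call (which, per the W0812 line above, was ignored in favor of the environment variable):

import logging
import torch

# Assumed settings; chosen to match the artifact tags seen below.
torch._logging.set_logs(
    dynamo=logging.DEBUG,   # V-level convert_frame / output_graph lines
    trace_source=True,      # [__trace_source] lines
    trace_bytecode=True,    # [__trace_bytecode] lines
    graph_code=True,        # [__graph_code] TRACED GRAPH dump
)
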
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] torchdynamo start compiling _quantized_linear_op /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:382, stack (elided 6 frames):
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/ao/test/integration/test_integration.py", line 1561, in <module>
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] unittest.main()
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/main.py", line 101, in __init__
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] self.runTests()
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/main.py", line 271, in runTests
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] self.result = testRunner.run(self.test)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/runner.py", line 184, in run
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] test(result)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 84, in __call__
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self.run(*args, **kwds)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 122, in run
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] test(result)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 84, in __call__
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self.run(*args, **kwds)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 122, in run
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] test(result)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/case.py", line 651, in __call__
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self.run(*args, **kwds)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/case.py", line 592, in run
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] self._callTestMethod(testMethod)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/case.py", line 550, in _callTestMethod
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] method()
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/parameterized/parameterized.py", line 620, in standalone_func
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return func(*(a + p.args), **p.kwargs, **kw)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/ao/test/integration/test_integration.py", line 1499, in test_get_model_size_autoquant
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] mod(example_input)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self._call_impl(*args, **kwargs)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1582, in _call_impl
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] args_kwargs_result = hook(self, args, kwargs) # type: ignore[misc]
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 608, in autoquant_prehook
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] module.finalize_autoquant()
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 620, in finalize_autoquant
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] _change_autoquantizable_to_quantized(
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 494, in _change_autoquantizable_to_quantized
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] _replace_with_custom_fn_if_matches_filter(
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/quant_api.py", line 176, in _replace_with_custom_fn_if_matches_filter
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] new_child = _replace_with_custom_fn_if_matches_filter(
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/quant_api.py", line 172, in _replace_with_custom_fn_if_matches_filter
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] model = replacement_fn(model)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/quant_api.py", line 222, in insert_subclass
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] getattr(cls, from_float)(lin.weight, **kwargs), requires_grad=False
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return func(*args, **kwargs)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 146, in to_quantized
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] self.tune_autoquant(q_cls, shapes_and_dtype, time_for_best_shape)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 97, in tune_autoquant
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] res = q_cls._autoquant_test(act_mat, self.weight, bias, best_time, self.mode)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 409, in _autoquant_test
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return super()._autoquant_test(act_mat, *args)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 267, in _autoquant_test
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] res = do_autoquant_bench(q_c_op, act_mat, w_qtensor, bias, warmup=25, rep=100)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return func(*args, **kwargs)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py", line 218, in do_autoquant_bench
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] op(*args, **kwargs)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return fn(*args, **kwargs)
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1116, in __call__
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self._torchdynamo_orig_callable(
V0812 13:06:21.510456 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0]
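
The stack above is the autoquant finalization path: a forward pre-hook installed by autoquant fires, finalize_autoquant swaps each autoquantizable weight for the best-performing candidate, and to pick one it benchmarks AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op under torch.compile, which is the frame Dynamo starts tracing below. A minimal sketch of driving that flow (shapes follow the trace below; the actual setup in test_get_model_size_autoquant may differ):

import torch
import torchao

# Illustrative model matching the traced shapes: bf16[16, 128] activations
# through a 128x128 Linear on CUDA.
model = torch.nn.Sequential(torch.nn.Linear(128, 128)).to("cuda", torch.bfloat16)
example_input = torch.randn(16, 128, device="cuda", dtype=torch.bfloat16)

model = torchao.autoquant(model)  # wraps Linear weights for shape recording
model(example_input)              # pre-hook runs finalize_autoquant, which
                                  # benchmarks each candidate quantized kernel
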
I0812 13:06:21.513082 139713931614016 torch/_dynamo/logging.py:56] [0/0] Step 1: torchdynamo start tracing _quantized_linear_op /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:382
V0812 13:06:21.514011 139713931614016 torch/fx/experimental/symbolic_shapes.py:2529] [0/0] create_env
V0812 13:06:21.523315 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:395 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.523315 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] orig_dtype = act_mat.dtype
V0812 13:06:21.548435 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST act_mat []
V0812 13:06:21.548632 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR dtype [LazyVariableTracker()]
V0812 13:06:21.549578 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_act_mat_ L['act_mat']
V0812 13:06:21.550365 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['act_mat'] (16, 128) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='act_mat', cell_or_freevar=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0812 13:06:21.553598 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE STORE_FAST orig_dtype [ConstantVariable()]
V0812 13:06:21.553829 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:396 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.553829 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] orig_shape = act_mat.shape
V0812 13:06:21.553982 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST act_mat []
V0812 13:06:21.554080 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR shape [TensorVariable()]
V0812 13:06:21.554447 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE STORE_FAST orig_shape [SizeVariable()]
V0812 13:06:21.554572 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:397 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.554572 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] act_mat = act_mat.reshape(-1, act_mat.shape[-1], 1)
V0812 13:06:21.554683 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST act_mat []
V0812 13:06:21.554781 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR reshape [TensorVariable()]
V0812 13:06:21.555121 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST -1 [GetAttrVariable()]
V0812 13:06:21.555239 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST act_mat [GetAttrVariable(), ConstantVariable()]
V0812 13:06:21.555339 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR shape [GetAttrVariable(), ConstantVariable(), TensorVariable()]
V0812 13:06:21.555504 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST -1 [GetAttrVariable(), ConstantVariable(), SizeVariable()]
V0812 13:06:21.555595 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BINARY_SUBSCR None [GetAttrVariable(), ConstantVariable(), SizeVariable(), ConstantVariable()]
V0812 13:06:21.555802 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST 1 [GetAttrVariable(), ConstantVariable(), ConstantVariable()]
V0812 13:06:21.555900 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 3 [GetAttrVariable(), ConstantVariable(), ConstantVariable(), ConstantVariable()]
V0812 13:06:21.558909 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE STORE_FAST act_mat [TensorVariable()]
V0812 13:06:21.559157 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.559157 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2)
V0812 13:06:21.559324 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST act_mat []
V0812 13:06:21.559445 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST w_qtensor [TensorVariable()]
V0812 13:06:21.559533 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR layout_tensor [TensorVariable(), LazyVariableTracker()]
V0812 13:06:21.559960 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_w_qtensor_ L['w_qtensor']
V0812 13:06:21.560666 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'] (128, 128) SubclassSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=LocalSource(local_name='w_qtensor', cell_or_freevar=False), shape_env_to_source_to_symbol_cache={}, inner_contexts={'layout_tensor': SubclassSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), shape_env_to_source_to_symbol_cache={}, inner_contexts={'int_data': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='int_data'), shape_env_to_source_to_symbol_cache={}), 'scale': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='scale'), shape_env_to_source_to_symbol_cache={}), 'zero_point': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='zero_point'), shape_env_to_source_to_symbol_cache={})})}) <class 'torchao.quantization.autoquant.AQWeightOnlyQuantizedLinearWeight2'>
V0812 13:06:21.562460 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor (128, 128) SubclassSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.size()[0]": 128, "L['w_qtensor'].layout_tensor.size()[1]": 128, "L['w_qtensor'].layout_tensor.storage_offset()": 0}}, inner_contexts={'int_data': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='int_data'), shape_env_to_source_to_symbol_cache={}), 'scale': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='scale'), shape_env_to_source_to_symbol_cache={}), 'zero_point': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='zero_point'), shape_env_to_source_to_symbol_cache={})}) <class 'torchao.dtypes.affine_quantized_tensor.PlainAQTLayout'>
V0812 13:06:21.564016 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.int_data (128, 128) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='int_data'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.int_data.size()[0]": 128, "L['w_qtensor'].layout_tensor.int_data.size()[1]": 128, "L['w_qtensor'].layout_tensor.int_data.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.564778 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.scale (128,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='scale'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.scale.size()[0]": 128, "L['w_qtensor'].layout_tensor.scale.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.565435 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.zero_point (128,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='zero_point'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.zero_point.size()[0]": 128, "L['w_qtensor'].layout_tensor.zero_point.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.566607 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_w_qtensor_layout_tensor L['w_qtensor'].layout_tensor
V0812 13:06:21.567148 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor (128, 128) SubclassSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.size()[0]": 128, "L['w_qtensor'].layout_tensor.size()[1]": 128, "L['w_qtensor'].layout_tensor.storage_offset()": 0}}, inner_contexts={'int_data': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='int_data'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.int_data.size()[0]": 128, "L['w_qtensor'].layout_tensor.int_data.size()[1]": 128, "L['w_qtensor'].layout_tensor.int_data.storage_offset()": 0}}), 'scale': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='scale'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.scale.size()[0]": 128, "L['w_qtensor'].layout_tensor.scale.storage_offset()": 0}}), 'zero_point': StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='zero_point'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.zero_point.size()[0]": 128, "L['w_qtensor'].layout_tensor.zero_point.storage_offset()": 0}})}) <class 'torchao.dtypes.affine_quantized_tensor.PlainAQTLayout'>
V0812 13:06:21.567540 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.int_data (128, 128) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='int_data'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.int_data.size()[0]": 128, "L['w_qtensor'].layout_tensor.int_data.size()[1]": 128, "L['w_qtensor'].layout_tensor.int_data.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.567753 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.scale (128,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='scale'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.scale.size()[0]": 128, "L['w_qtensor'].layout_tensor.scale.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.567955 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.zero_point (128,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='zero_point'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.zero_point.size()[0]": 128, "L['w_qtensor'].layout_tensor.zero_point.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.568401 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_w_qtensor_layout_tensor_int_data L['w_qtensor'].layout_tensor.int_data
V0812 13:06:21.568754 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.int_data (128, 128) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='int_data'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.int_data.size()[0]": 128, "L['w_qtensor'].layout_tensor.int_data.size()[1]": 128, "L['w_qtensor'].layout_tensor.int_data.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.569268 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_w_qtensor_layout_tensor_scale L['w_qtensor'].layout_tensor.scale
V0812 13:06:21.569647 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.scale (128,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='scale'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.scale.size()[0]": 128, "L['w_qtensor'].layout_tensor.scale.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.570112 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_w_qtensor_layout_tensor_zero_point L['w_qtensor'].layout_tensor.zero_point
V0812 13:06:21.570561 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['w_qtensor'].layout_tensor.zero_point (128,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=AttrSource(base=AttrSource(base=LocalSource(local_name='w_qtensor', cell_or_freevar=False), member='layout_tensor'), member='zero_point'), shape_env_to_source_to_symbol_cache={139709605740800: {"L['w_qtensor'].layout_tensor.zero_point.size()[0]": 128, "L['w_qtensor'].layout_tensor.zero_point.storage_offset()": 0}}) <class 'torch.Tensor'>
V0812 13:06:21.571532 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR int_data [TensorVariable(), TensorVariable()]
V0812 13:06:21.571931 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR t [TensorVariable(), TensorVariable()]
V0812 13:06:21.572325 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 0 [TensorVariable(), GetAttrVariable()]
V0812 13:06:21.574618 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR unsqueeze [TensorVariable(), TensorVariable()]
V0812 13:06:21.574874 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST 0 [TensorVariable(), GetAttrVariable()]
V0812 13:06:21.574996 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [TensorVariable(), GetAttrVariable(), ConstantVariable()]
V0812 13:06:21.576864 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BINARY_MULTIPLY None [TensorVariable(), TensorVariable()]
V0812 13:06:21.580735 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR sum [TensorVariable()]
V0812 13:06:21.581089 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST -2 [GetAttrVariable()]
V0812 13:06:21.581210 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST ('dim',) [GetAttrVariable(), ConstantVariable()]
V0812 13:06:21.581370 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_KW 1 [GetAttrVariable(), ConstantVariable(), TupleVariable()]
V0812 13:06:21.583681 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE STORE_FAST y [TensorVariable()]
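
The wrap_to_fake burst above (create_graph_input for w_qtensor.layout_tensor.int_data, .scale, and .zero_point) is Dynamo flattening the traceable tensor subclass into its plain inner tensors, each lifted to its own graph input. A minimal sketch of the protocol that drives this, assuming the standard __tensor_flatten__ contract rather than torchao's actual PlainAQTLayout code:

import torch

class PlainLayoutLike(torch.Tensor):
    """Illustrative wrapper subclass holding int_data, scale, zero_point."""

    @staticmethod
    def __new__(cls, int_data, scale, zero_point):
        return torch.Tensor._make_wrapper_subclass(
            cls, int_data.shape, dtype=int_data.dtype, device=int_data.device
        )

    def __init__(self, int_data, scale, zero_point):
        self.int_data = int_data
        self.scale = scale
        self.zero_point = zero_point

    def __tensor_flatten__(self):
        # Dynamo reads these attribute names, fakifies each inner tensor,
        # and creates one graph input per tensor (as logged above).
        return ["int_data", "scale", "zero_point"], None

    @classmethod
    def __tensor_unflatten__(cls, inner, meta, outer_size, outer_stride):
        return cls(inner["int_data"], inner["scale"], inner["zero_point"])
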
V0812 13:06:21.583922 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:399 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.583922 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] y = y.reshape(*orig_shape[:-1], y.shape[-1]) * w_qtensor.layout_tensor.scale
V0812 13:06:21.584090 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST y []
V0812 13:06:21.584184 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR reshape [TensorVariable()]
V0812 13:06:21.584381 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BUILD_LIST 0 [GetAttrVariable()]
V0812 13:06:21.584517 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST orig_shape [GetAttrVariable(), ListVariable(length=0)]
V0812 13:06:21.584607 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST None [GetAttrVariable(), ListVariable(length=0), SizeVariable()]
V0812 13:06:21.584696 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST -1 [GetAttrVariable(), ListVariable(length=0), SizeVariable(), ConstantVariable()]
V0812 13:06:21.584788 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BUILD_SLICE 2 [GetAttrVariable(), ListVariable(length=0), SizeVariable(), ConstantVariable(), ConstantVariable()]
V0812 13:06:21.584898 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BINARY_SUBSCR None [GetAttrVariable(), ListVariable(length=0), SizeVariable(), SliceVariable()]
V0812 13:06:21.585119 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LIST_EXTEND 1 [GetAttrVariable(), ListVariable(length=0), SizeVariable()]
V0812 13:06:21.585262 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST y [GetAttrVariable(), ListVariable(length=1)]
V0812 13:06:21.585358 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR shape [GetAttrVariable(), ListVariable(length=1), TensorVariable()]
V0812 13:06:21.585485 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST -1 [GetAttrVariable(), ListVariable(length=1), SizeVariable()]
V0812 13:06:21.585565 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BINARY_SUBSCR None [GetAttrVariable(), ListVariable(length=1), SizeVariable(), ConstantVariable()]
V0812 13:06:21.585664 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LIST_APPEND 1 [GetAttrVariable(), ListVariable(length=1), ConstantVariable()]
V0812 13:06:21.585748 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LIST_TO_TUPLE None [GetAttrVariable(), ListVariable(length=2)]
V0812 13:06:21.585913 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_EX 0 [GetAttrVariable(), TupleVariable()]
V0812 13:06:21.587435 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST w_qtensor [TensorVariable()]
V0812 13:06:21.587620 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR layout_tensor [TensorVariable(), TensorVariable()]
V0812 13:06:21.588115 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR scale [TensorVariable(), TensorVariable()]
V0812 13:06:21.588528 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BINARY_MULTIPLY None [TensorVariable(), TensorVariable()]
V0812 13:06:21.590674 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE STORE_FAST y [TensorVariable()]
V0812 13:06:21.590868 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:400 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.590868 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] if bias is not None:
V0812 13:06:21.590996 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST bias []
V0812 13:06:21.591086 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_CONST None [LazyVariableTracker()]
V0812 13:06:21.591168 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE IS_OP 1 [LazyVariableTracker(), ConstantVariable()]
V0812 13:06:21.591427 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_bias_ L['bias']
V0812 13:06:21.591775 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['bias'] (128,) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>], constraint_sizes=[None], view_base_context=None, tensor_source=LocalSource(local_name='bias', cell_or_freevar=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
V0812 13:06:21.592860 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE POP_JUMP_IF_FALSE 120 [ConstantVariable()]
V0812 13:06:21.593034 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:401 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.593034 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] y += bias
V0812 13:06:21.593162 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST y []
V0812 13:06:21.593248 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST bias [TensorVariable()]
V0812 13:06:21.593345 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE INPLACE_ADD None [TensorVariable(), TensorVariable()]
V0812 13:06:21.594603 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE STORE_FAST y [TensorVariable()]
V0812 13:06:21.594789 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:402 in _quantized_linear_op (AQWeightOnlyQuantizedLinearWeight2._quantized_linear_op)
V0812 13:06:21.594789 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] return y.to(orig_dtype)
V0812 13:06:21.594906 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST y []
V0812 13:06:21.594990 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_ATTR to [TensorVariable()]
V0812 13:06:21.595152 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST orig_dtype [GetAttrVariable()]
V0812 13:06:21.595242 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION 1 [GetAttrVariable(), ConstantVariable()]
V0812 13:06:21.595899 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE RETURN_VALUE None [TensorVariable()]
I0812 13:06:21.596064 139713931614016 torch/_dynamo/logging.py:56] [0/0] Step 1: torchdynamo done tracing _quantized_linear_op (RETURN_VALUE)
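
Assembled from the [__trace_source] lines above, the body Dynamo just traced (autoquant.py:395-402; the surrounding classmethod boilerplate is not shown in the trace):

def _quantized_linear_op(act_mat, w_qtensor, bias):
    orig_dtype = act_mat.dtype
    orig_shape = act_mat.shape
    # [N, K] -> [N, K, 1] so the transposed int8 weight broadcasts over N
    act_mat = act_mat.reshape(-1, act_mat.shape[-1], 1)
    y = (act_mat * w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2)
    y = y.reshape(*orig_shape[:-1], y.shape[-1]) * w_qtensor.layout_tensor.scale
    if bias is not None:
        y += bias
    return y.to(orig_dtype)
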
V0812 13:06:21.596155 139713931614016 torch/_dynamo/symbolic_convert.py:2626] [0/0] RETURN_VALUE triggered compile
V0812 13:06:21.596523 139713931614016 torch/_dynamo/output_graph.py:972] [0/0] COMPILING GRAPH due to GraphCompileReason(reason='return_value', user_stack=[<FrameSummary file /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py, line 402 in _quantized_linear_op>], graph_break=False)
V0812 13:06:21.600045 139713931614016 torch/_dynamo/output_graph.py:1542] [0/0] REMOVE UNUSED GRAPHARG L['w_qtensor']
V0812 13:06:21.600266 139713931614016 torch/_dynamo/output_graph.py:1542] [0/0] REMOVE UNUSED GRAPHARG L['w_qtensor'].layout_tensor
V0812 13:06:21.600491 139713931614016 torch/_dynamo/output_graph.py:1542] [0/0] REMOVE UNUSED GRAPHARG L['w_qtensor'].layout_tensor.zero_point
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] TRACED GRAPH
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] ===== __compiled_fn_1 =====
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/fx/_lazy_graph_module.py class GraphModule(torch.nn.Module):
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] def forward(self, L_act_mat_: "bf16[16, 128][128, 1]cuda:0", L_w_qtensor_layout_tensor_int_data: "i8[128, 128][128, 1]cuda:0", L_w_qtensor_layout_tensor_scale: "bf16[128][1]cuda:0", L_bias_: "bf16[128][1]cuda:0"):
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] l_act_mat_ = L_act_mat_
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] l_w_qtensor_layout_tensor_int_data = L_w_qtensor_layout_tensor_int_data
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] l_w_qtensor_layout_tensor_scale = L_w_qtensor_layout_tensor_scale
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] l_bias_ = L_bias_
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code]
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] # File: /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:397 in _quantized_linear_op, code: act_mat = act_mat.reshape(-1, act_mat.shape[-1], 1)
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] act_mat: "bf16[16, 128, 1][128, 1, 1]cuda:0" = l_act_mat_.reshape(-1, 128, 1); l_act_mat_ = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code]
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] # File: /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op, code: y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2)
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] t: "i8[128, 128][1, 128]cuda:0" = l_w_qtensor_layout_tensor_int_data.t(); l_w_qtensor_layout_tensor_int_data = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] unsqueeze: "i8[1, 128, 128][128, 1, 128]cuda:0" = t.unsqueeze(0); t = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] mul: "bf16[16, 128, 128][16384, 1, 128]cuda:0" = act_mat * unsqueeze; act_mat = unsqueeze = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] y: "bf16[16, 128][128, 1]cuda:0" = mul.sum(dim = -2); mul = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code]
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] # File: /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:399 in _quantized_linear_op, code: y = y.reshape(*orig_shape[:-1], y.shape[-1]) * w_qtensor.layout_tensor.scale
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] reshape_1: "bf16[16, 128][128, 1]cuda:0" = y.reshape(16, 128); y = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] y_1: "bf16[16, 128][128, 1]cuda:0" = reshape_1 * l_w_qtensor_layout_tensor_scale; reshape_1 = l_w_qtensor_layout_tensor_scale = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code]
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] # File: /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:401 in _quantized_linear_op, code: y += bias
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] y_1 += l_bias_; y_2: "bf16[16, 128][128, 1]cuda:0" = y_1; y_1 = l_bias_ = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code]
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] # File: /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:402 in _quantized_linear_op, code: return y.to(orig_dtype)
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] to: "bf16[16, 128][128, 1]cuda:0" = y_2.to(torch.bfloat16); y_2 = None
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code] return (to,)
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code]
V0812 13:06:21.600896 139713931614016 torch/_dynamo/output_graph.py:1291] [0/0] [__graph_code]
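
The traced graph is an ordinary weight-only int8 linear: activations times the transposed int8 weight, scaled per output channel, plus bias. A quick equivalence sketch (my own sanity check, in float32 for tight tolerances; not part of this run):

import torch
import torch.nn.functional as F

torch.manual_seed(0)
act = torch.randn(16, 128)                               # activations
int_data = torch.randint(-128, 128, (128, 128)).float()  # int8 weight values
scale = torch.rand(128)                                  # per-out-channel scale
bias = torch.randn(128)

# Graph form: broadcast-multiply, then reduce over the shared dim (-2).
y_graph = (act.reshape(-1, 128, 1) * int_data.t().unsqueeze(0)).sum(dim=-2)
y_graph = y_graph * scale + bias

# Reference form: fold the scale into the weight, then a plain linear.
y_ref = F.linear(act, int_data * scale[:, None], bias)
torch.testing.assert_close(y_graph, y_ref, rtol=1e-4, atol=1e-3)
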
I0812 13:06:21.603430 139713931614016 torch/_dynamo/logging.py:56] [0/0] Step 2: calling compiler function inductor
V0812 13:06:22.300398 139713931614016 torch/fx/experimental/symbolic_shapes.py:5167] [0/0] eval True == True [statically known]
I0812 13:06:24.894164 139713931614016 torch/_dynamo/logging.py:56] [0/0] Step 2: done compiler function inductor
I0812 13:06:24.902543 139713931614016 torch/fx/experimental/symbolic_shapes.py:3639] [0/0] produce_guards
V0812 13:06:24.902872 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['act_mat'].size()[0] 16 None
V0812 13:06:24.903014 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['act_mat'].size()[1] 128 None
V0812 13:06:24.903107 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['act_mat'].stride()[0] 128 None
V0812 13:06:24.903193 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['act_mat'].stride()[1] 1 None
V0812 13:06:24.903273 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['act_mat'].storage_offset() 0 None
V0812 13:06:24.903444 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[0] 128 None
V0812 13:06:24.903531 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[1] 128 None
V0812 13:06:24.903610 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[0] 128 None
V0812 13:06:24.903685 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[1] 1 None
V0812 13:06:24.903760 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.storage_offset() 0 None
V0812 13:06:24.903860 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.size()[0] 128 None
V0812 13:06:24.903939 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.stride()[0] 1 None
V0812 13:06:24.904014 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.storage_offset() 0 None
V0812 13:06:24.904096 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.size()[0] 128 None
V0812 13:06:24.904173 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.stride()[0] 1 None
V0812 13:06:24.904257 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.storage_offset() 0 None
V0812 13:06:24.904401 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.size()[0] 128 None
V0812 13:06:24.904479 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.size()[1] 128 None
V0812 13:06:24.904552 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.stride()[0] 128 None
V0812 13:06:24.904621 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.stride()[1] 1 None
V0812 13:06:24.904712 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.storage_offset() 0 None
V0812 13:06:24.904806 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[0] 128 None
V0812 13:06:24.904879 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[1] 128 None
V0812 13:06:24.904949 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[0] 128 None
V0812 13:06:24.905018 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[1] 1 None
V0812 13:06:24.905089 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.storage_offset() 0 None
V0812 13:06:24.905165 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.size()[0] 128 None
V0812 13:06:24.905237 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.stride()[0] 1 None
V0812 13:06:24.905322 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.storage_offset() 0 None
V0812 13:06:24.905399 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.size()[0] 128 None
V0812 13:06:24.905472 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.stride()[0] 1 None
V0812 13:06:24.905544 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.storage_offset() 0 None
V0812 13:06:24.905685 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].size()[0] 128 None
V0812 13:06:24.905778 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].size()[1] 128 None
V0812 13:06:24.905867 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].stride()[0] 128 None
V0812 13:06:24.905983 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].stride()[1] 1 None
V0812 13:06:24.906062 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].storage_offset() 0 None
V0812 13:06:24.906140 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.size()[0] 128 None
V0812 13:06:24.906217 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.size()[1] 128 None
V0812 13:06:24.906288 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.stride()[0] 128 None
V0812 13:06:24.906423 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.stride()[1] 1 None
V0812 13:06:24.906502 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.storage_offset() 0 None
V0812 13:06:24.906586 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[0] 128 None
V0812 13:06:24.906666 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[1] 128 None
V0812 13:06:24.906742 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[0] 128 None
V0812 13:06:24.906829 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[1] 1 None
V0812 13:06:24.906904 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.storage_offset() 0 None
V0812 13:06:24.907004 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.size()[0] 128 None
V0812 13:06:24.907085 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.stride()[0] 1 None
V0812 13:06:24.907161 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.storage_offset() 0 None
V0812 13:06:24.907249 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.size()[0] 128 None
V0812 13:06:24.907342 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.stride()[0] 1 None
V0812 13:06:24.907424 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.storage_offset() 0 None
V0812 13:06:24.907544 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.size()[0] 128 None
V0812 13:06:24.907641 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.size()[1] 128 None
V0812 13:06:24.907717 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.stride()[0] 128 None
V0812 13:06:24.907805 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.stride()[1] 1 None
V0812 13:06:24.907881 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.storage_offset() 0 None
V0812 13:06:24.907959 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[0] 128 None
V0812 13:06:24.908039 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[1] 128 None
V0812 13:06:24.908113 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[0] 128 None
V0812 13:06:24.908188 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[1] 1 None
V0812 13:06:24.908263 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.storage_offset() 0 None
V0812 13:06:24.908358 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.size()[0] 128 None
V0812 13:06:24.908439 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.stride()[0] 1 None
V0812 13:06:24.908514 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.storage_offset() 0 None
V0812 13:06:24.908592 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.size()[0] 128 None
V0812 13:06:24.908672 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.stride()[0] 1 None
V0812 13:06:24.908787 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.storage_offset() 0 None
V0812 13:06:24.908871 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[0] 128 None
V0812 13:06:24.908942 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.size()[1] 128 None
V0812 13:06:24.909019 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[0] 128 None
V0812 13:06:24.909101 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.stride()[1] 1 None
V0812 13:06:24.909176 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.int_data.storage_offset() 0 None
V0812 13:06:24.909262 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.size()[0] 128 None
V0812 13:06:24.909352 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.stride()[0] 1 None
V0812 13:06:24.909430 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.scale.storage_offset() 0 None
V0812 13:06:24.909512 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.size()[0] 128 None
V0812 13:06:24.909585 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.stride()[0] 1 None
V0812 13:06:24.909663 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['w_qtensor'].layout_tensor.zero_point.storage_offset() 0 None
V0812 13:06:24.909770 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['bias'].size()[0] 128 None
V0812 13:06:24.909853 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['bias'].stride()[0] 1 None
V0812 13:06:24.909928 139713931614016 torch/fx/experimental/symbolic_shapes.py:3821] [0/0] track_symint L['bias'].storage_offset() 0 None
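The track_symint lines above record every size(), stride(), and storage_offset() Dynamo observed while building the shape environment for this frame; on a first compile each value is specialized to a constant (16, 128, 1, 0). If recompilation across other batch sizes were a concern, a dimension could be marked dynamic up front so it stays symbolic instead of being guarded to a constant; a minimal sketch, where x stands in for the test's example_input:

    import torch
    import torch._dynamo

    x = torch.randn(16, 128, dtype=torch.bfloat16)
    # Treat dim 0 as dynamic so Dynamo does not guard on size()[0] == 16:
    torch._dynamo.mark_dynamic(x, 0)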
V0812 13:06:24.910512 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['act_mat'].size()[0] == 16
V0812 13:06:24.910695 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['act_mat'].size()[1] == 128
V0812 13:06:24.910823 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['act_mat'].stride()[0] == 128
V0812 13:06:24.910938 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['act_mat'].stride()[1] == 1
V0812 13:06:24.911082 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['act_mat'].storage_offset() == 0
V0812 13:06:24.911196 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[0] == 128
V0812 13:06:24.911290 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[1] == 128
V0812 13:06:24.911419 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[0] == 128
V0812 13:06:24.911529 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[1] == 1
V0812 13:06:24.911623 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.storage_offset() == 0
V0812 13:06:24.911715 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.size()[0] == 128
V0812 13:06:24.911822 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.stride()[0] == 1
V0812 13:06:24.911917 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.storage_offset() == 0
V0812 13:06:24.912008 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.size()[0] == 128
V0812 13:06:24.912136 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.stride()[0] == 1
V0812 13:06:24.912228 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.storage_offset() == 0
V0812 13:06:24.912379 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.size()[0] == 128
V0812 13:06:24.912489 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.size()[1] == 128
V0812 13:06:24.912583 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.stride()[0] == 128
V0812 13:06:24.912677 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.stride()[1] == 1
V0812 13:06:24.912783 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.storage_offset() == 0
V0812 13:06:24.912880 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[0] == 128
V0812 13:06:24.912974 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[1] == 128
V0812 13:06:24.913065 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[0] == 128
V0812 13:06:24.913159 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[1] == 1
V0812 13:06:24.913254 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.storage_offset() == 0
V0812 13:06:24.913363 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.size()[0] == 128
V0812 13:06:24.913459 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.stride()[0] == 1
V0812 13:06:24.913554 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.storage_offset() == 0
V0812 13:06:24.913671 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.size()[0] == 128
V0812 13:06:24.913786 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.stride()[0] == 1
V0812 13:06:24.913884 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.storage_offset() == 0
V0812 13:06:24.913977 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].size()[0] == 128
V0812 13:06:24.914068 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].size()[1] == 128
V0812 13:06:24.914182 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].stride()[0] == 128
V0812 13:06:24.914285 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].stride()[1] == 1
V0812 13:06:24.914388 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].storage_offset() == 0
V0812 13:06:24.914479 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.size()[0] == 128
V0812 13:06:24.914567 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.size()[1] == 128
V0812 13:06:24.914656 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.stride()[0] == 128
V0812 13:06:24.914744 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.stride()[1] == 1
V0812 13:06:24.914846 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.storage_offset() == 0
V0812 13:06:24.914948 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[0] == 128
V0812 13:06:24.915060 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[1] == 128
V0812 13:06:24.915152 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[0] == 128
V0812 13:06:24.915246 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[1] == 1
V0812 13:06:24.915356 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.storage_offset() == 0
V0812 13:06:24.915453 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.size()[0] == 128
V0812 13:06:24.915555 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.stride()[0] == 1
V0812 13:06:24.915645 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.storage_offset() == 0
V0812 13:06:24.915755 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.size()[0] == 128
V0812 13:06:24.915893 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.stride()[0] == 1
V0812 13:06:24.915993 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.storage_offset() == 0
V0812 13:06:24.916106 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.size()[0] == 128
V0812 13:06:24.916213 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.size()[1] == 128
V0812 13:06:24.916322 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.stride()[0] == 128
V0812 13:06:24.916421 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.stride()[1] == 1
V0812 13:06:24.916518 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.storage_offset() == 0
V0812 13:06:24.916615 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[0] == 128
V0812 13:06:24.916711 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[1] == 128
V0812 13:06:24.916819 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[0] == 128
V0812 13:06:24.916917 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[1] == 1
V0812 13:06:24.917015 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.storage_offset() == 0
V0812 13:06:24.917111 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.size()[0] == 128
V0812 13:06:24.917207 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.stride()[0] == 1
V0812 13:06:24.917373 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.storage_offset() == 0
V0812 13:06:24.917548 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.size()[0] == 128
V0812 13:06:24.917675 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.stride()[0] == 1
V0812 13:06:24.917803 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.storage_offset() == 0
V0812 13:06:24.917906 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[0] == 128
V0812 13:06:24.918000 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.size()[1] == 128
V0812 13:06:24.918092 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[0] == 128
V0812 13:06:24.918186 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.stride()[1] == 1
V0812 13:06:24.918281 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.int_data.storage_offset() == 0
V0812 13:06:24.918415 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.size()[0] == 128
V0812 13:06:24.918514 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.stride()[0] == 1
V0812 13:06:24.918612 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.scale.storage_offset() == 0
V0812 13:06:24.918709 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.size()[0] == 128
V0812 13:06:24.918826 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.stride()[0] == 1
V0812 13:06:24.918987 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['w_qtensor'].layout_tensor.zero_point.storage_offset() == 0
V0812 13:06:24.919149 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['bias'].size()[0] == 128
V0812 13:06:24.919291 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['bias'].stride()[0] == 1
V0812 13:06:24.919413 139713931614016 torch/fx/experimental/symbolic_shapes.py:3985] [0/0] Skipping guard L['bias'].storage_offset() == 0
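Each "Skipping guard" line above is the shape environment deciding that an equality is already statically known (everything was specialized to constants for this frame), so no separate symbolic-shape guard needs to be installed; the concrete sizes are presumably re-checked by the TENSOR_MATCH entries in the guard tree that follows. This run configured logging through the TORCH_LOGS environment variable; the programmatic route would be set_logs, though, as the warning at the top of this log notes, set_logs is ignored whenever TORCH_LOGS is set:

    import logging
    import torch._logging

    # Roughly equivalent to TORCH_LOGS="+dynamo,guards" (ignored here
    # because the TORCH_LOGS environment variable is already set):
    torch._logging.set_logs(dynamo=logging.DEBUG, guards=True)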
V0812 13:06:24.920094 139713931614016 torch/_dynamo/guards.py:2169] [0/0] [__guards] GUARDS:
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards]
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] TREE_GUARD_MANAGER:
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] +- RootGuardManager
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | +- DEFAULT_DEVICE: utils_device.CURRENT_DEVICE == None # _dynamo/output_graph.py:460 in init_ambient_guards
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | +- GLOBAL_STATE: ___check_global_state()
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | +- GuardManager: source=L['bias'], accessed_by=DictGetItemGuardAccessor(bias)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['bias'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.bfloat16, device=0, requires_grad=False, size=[128], stride=[1]) # if bias is not None: # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:400 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['bias'], '_dynamo_dynamic_indices') == False # if bias is not None: # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:400 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['bias'], L['act_mat'], L['w_qtensor'], L['w_qtensor'].layout_tensor, L['w_qtensor'].layout_tensor.scale, L['w_qtensor'].layout_tensor.int_data, L['w_qtensor'].layout_tensor.zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | +- GuardManager: source=L['act_mat'], accessed_by=DictGetItemGuardAccessor(act_mat)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['act_mat'], Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.bfloat16, device=0, requires_grad=False, size=[16, 128], stride=[128, 1]) # orig_dtype = act_mat.dtype # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:395 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['act_mat'], '_dynamo_dynamic_indices') == False # orig_dtype = act_mat.dtype # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:395 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['bias'], L['act_mat'], L['w_qtensor'], L['w_qtensor'].layout_tensor, L['w_qtensor'].layout_tensor.scale, L['w_qtensor'].layout_tensor.int_data, L['w_qtensor'].layout_tensor.zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | +- GuardManager: source=L['w_qtensor'], accessed_by=DictGetItemGuardAccessor(w_qtensor)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- TYPE_MATCH: ___check_type_id(L['w_qtensor'], 94686462456592) # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- TENSOR_MATCH: check_tensor(L['w_qtensor'], AQWeightOnlyQuantizedLinearWeight2, DispatchKeySet(CUDA, BackendSelect, Python, ADInplaceOrView, AutogradCUDA, PythonTLSSnapshot), torch.bfloat16, device=0, requires_grad=False, size=[128, 128], stride=[128, 1]) # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- NO_HASATTR: hasattr(L['w_qtensor'], '_dynamo_dynamic_indices') == False # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['bias'], L['act_mat'], L['w_qtensor'], L['w_qtensor'].layout_tensor, L['w_qtensor'].layout_tensor.scale, L['w_qtensor'].layout_tensor.int_data, L['w_qtensor'].layout_tensor.zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | +- GuardManager: source=L['w_qtensor'].layout_tensor, accessed_by=GetAttrGuardAccessor(layout_tensor)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | +- TYPE_MATCH: ___check_type_id(L['w_qtensor'].layout_tensor, 94686461740416) # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | +- TENSOR_MATCH: check_tensor(L['w_qtensor'].layout_tensor, PlainAQTLayout, DispatchKeySet(CUDA, BackendSelect, Python, ADInplaceOrView, AutogradCUDA, PythonTLSSnapshot), torch.int8, device=0, requires_grad=False, size=[128, 128], stride=[128, 1]) # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | +- NO_HASATTR: hasattr(L['w_qtensor'].layout_tensor, '_dynamo_dynamic_indices') == False # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['bias'], L['act_mat'], L['w_qtensor'], L['w_qtensor'].layout_tensor, L['w_qtensor'].layout_tensor.scale, L['w_qtensor'].layout_tensor.int_data, L['w_qtensor'].layout_tensor.zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | +- GuardManager: source=L['w_qtensor'].layout_tensor.scale, accessed_by=GetAttrGuardAccessor(scale)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- TENSOR_MATCH: check_tensor(L['w_qtensor'].layout_tensor.scale, Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.bfloat16, device=0, requires_grad=False, size=[128], stride=[1]) # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- NO_HASATTR: hasattr(L['w_qtensor'].layout_tensor.scale, '_dynamo_dynamic_indices') == False # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['bias'], L['act_mat'], L['w_qtensor'], L['w_qtensor'].layout_tensor, L['w_qtensor'].layout_tensor.scale, L['w_qtensor'].layout_tensor.int_data, L['w_qtensor'].layout_tensor.zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | +- GuardManager: source=L['w_qtensor'].layout_tensor.int_data, accessed_by=GetAttrGuardAccessor(int_data)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- TENSOR_MATCH: check_tensor(L['w_qtensor'].layout_tensor.int_data, Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.int8, device=0, requires_grad=False, size=[128, 128], stride=[128, 1]) # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- NO_HASATTR: hasattr(L['w_qtensor'].layout_tensor.int_data, '_dynamo_dynamic_indices') == False # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['bias'], L['act_mat'], L['w_qtensor'], L['w_qtensor'].layout_tensor, L['w_qtensor'].layout_tensor.scale, L['w_qtensor'].layout_tensor.int_data, L['w_qtensor'].layout_tensor.zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | +- GuardManager: source=L['w_qtensor'].layout_tensor.zero_point, accessed_by=GetAttrGuardAccessor(zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- TENSOR_MATCH: check_tensor(L['w_qtensor'].layout_tensor.zero_point, Tensor, DispatchKeySet(CUDA, BackendSelect, ADInplaceOrView, AutogradCUDA), torch.int64, device=0, requires_grad=False, size=[128], stride=[1]) # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- NO_HASATTR: hasattr(L['w_qtensor'].layout_tensor.zero_point, '_dynamo_dynamic_indices') == False # y = (act_mat*w_qtensor.layout_tensor.int_data.t().unsqueeze(0)).sum(dim=-2) # torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/quantization/autoquant.py:398 in _quantized_linear_op
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards] | | | | +- NO_TENSOR_ALIASING: check_no_aliasing(L['bias'], L['act_mat'], L['w_qtensor'], L['w_qtensor'].layout_tensor, L['w_qtensor'].layout_tensor.scale, L['w_qtensor'].layout_tensor.int_data, L['w_qtensor'].layout_tensor.zero_point)
V0812 13:06:24.920347 139713931614016 torch/_dynamo/guards.py:2148] [0/0] [__guards]
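The TREE_GUARD_MANAGER dump is the full guard set for the compiled _quantized_linear_op entry: before the cached code is reused, Dynamo re-validates the type identity of the AQWeightOnlyQuantizedLinearWeight2 subclass and its PlainAQTLayout, runs a TENSOR_MATCH (dtype, device, size, stride, requires_grad, dispatch keys) on act_mat, bias, and each inner tensor (int_data, scale, zero_point), and confirms none of them alias. A rough Python sketch of what one TENSOR_MATCH verifies; the real guard is the C++ check_tensor, so this tensor_match helper is purely illustrative:

    import torch

    def tensor_match(t, dtype, size, stride):
        # Illustrative stand-in for the check_tensor guard above.
        return (
            isinstance(t, torch.Tensor)
            and t.dtype == dtype
            and tuple(t.size()) == tuple(size)
            and tuple(t.stride()) == tuple(stride)
            and not t.requires_grad
        )

    # e.g. the guard on L['bias']:
    bias = torch.zeros(128, dtype=torch.bfloat16)
    assert tensor_match(bias, torch.bfloat16, [128], [1])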
V0812 13:06:24.923117 139713931614016 torch/_dynamo/convert_frame.py:1082] skipping: _fn (reason: in skipfiles, file: /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py)
W0812 13:06:26.775476 139713931614016 torch/_logging/_internal.py:416] Using TORCH_LOGS environment variable for log settings, ignoring call to set_logs
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] torchdynamo start compiling inner /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/external_utils.py:36, stack (elided 6 frames):
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/ao/test/integration/test_integration.py", line 1561, in <module>
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] unittest.main()
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/main.py", line 101, in __init__
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] self.runTests()
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/main.py", line 271, in runTests
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] self.result = testRunner.run(self.test)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/runner.py", line 184, in run
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] test(result)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 84, in __call__
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self.run(*args, **kwds)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 122, in run
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] test(result)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 84, in __call__
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self.run(*args, **kwds)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/suite.py", line 122, in run
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] test(result)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/case.py", line 651, in __call__
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self.run(*args, **kwds)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/case.py", line 592, in run
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] self._callTestMethod(testMethod)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/unittest/case.py", line 550, in _callTestMethod
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] method()
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/parameterized/parameterized.py", line 620, in standalone_func
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return func(*(a + p.args), **p.kwargs, **kw)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/ao/test/integration/test_integration.py", line 1499, in test_get_model_size_autoquant
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] mod(example_input)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self._call_impl(*args, **kwargs)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1603, in _call_impl
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] result = forward_call(*args, **kwargs)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return fn(*args, **kwargs)
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1116, in __call__
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0] return self._torchdynamo_orig_callable(
V0812 13:06:26.776892 139713931614016 torch/_dynamo/convert_frame.py:776] [0/0]
I0812 13:06:26.777700 139713931614016 torch/_dynamo/logging.py:56] [0/0] Step 1: torchdynamo start tracing inner /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/external_utils.py:36
V0812 13:06:26.778169 139713931614016 torch/fx/experimental/symbolic_shapes.py:2529] [0/0] create_env
V0812 13:06:26.779340 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] TRACE starts_line /home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/external_utils.py:38 in inner (wrap_inline.inner)
V0812 13:06:26.779340 139713931614016 torch/_dynamo/symbolic_convert.py:775] [0/0] [__trace_source] return fn(*args, **kwargs)
V0812 13:06:26.782714 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_DEREF fn []
V0812 13:06:26.782876 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST args [LazyVariableTracker()]
V0812 13:06:26.783027 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE BUILD_MAP 0 [LazyVariableTracker(), LazyVariableTracker()]
V0812 13:06:26.783227 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE LOAD_FAST kwargs [LazyVariableTracker(), LazyVariableTracker(), ConstDictVariable()]
V0812 13:06:26.783357 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE DICT_MERGE 1 [LazyVariableTracker(), LazyVariableTracker(), ConstDictVariable(), LazyVariableTracker()]
V0812 13:06:26.783895 139713931614016 torch/_dynamo/symbolic_convert.py:798] [0/0] [__trace_bytecode] TRACE CALL_FUNCTION_EX 1 [LazyVariableTracker(), LazyVariableTracker(), ConstDictVariable()]
V0812 13:06:26.785496 139713931614016 torch/_dynamo/output_graph.py:2033] [0/0] create_graph_input L_args_0_ L['args'][0]
V0812 13:06:26.786081 139713931614016 torch/_dynamo/variables/builder.py:2268] [0/0] wrap_to_fake L['args'][0] (16, 128) StatefulSymbolicContext(dynamic_sizes=[<DimDynamic.STATIC: 2>, <DimDynamic.STATIC: 2>], constraint_sizes=[None, None], view_base_context=None, tensor_source=GetItemSource(base=LocalSource(local_name='args', cell_or_freevar=False), index=0, index_is_slice=False), shape_env_to_source_to_symbol_cache={}) <class 'torch.Tensor'>
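wrap_to_fake is the last step logged before the failure: the (16, 128) bfloat16 input is replaced by a FakeTensor (metadata only, no real storage) so the frame can be traced. A minimal sketch of the same conversion through the public FakeTensorMode API; shape and dtype are copied from the line above, but CPU is used here (the test itself runs on cuda:0) to keep the snippet self-contained:

    import torch
    from torch._subclasses.fake_tensor import FakeTensorMode

    with FakeTensorMode() as mode:
        fake = mode.from_tensor(torch.empty(16, 128, dtype=torch.bfloat16))
    print(type(fake).__name__, fake.shape, fake.dtype)
    # FakeTensor torch.Size([16, 128]) torch.bfloat16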
E
======================================================================
ERROR: test_get_model_size_autoquant_5_cuda (__main__.TestUtils)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/parameterized/parameterized.py", line 620, in standalone_func
return func(*(a + p.args), **p.kwargs, **kw)
File "/home/jerryzh/ao/test/integration/test_integration.py", line 1499, in test_get_model_size_autoquant
mod(example_input)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1603, in _call_impl
result = forward_call(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn
return fn(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 1116, in __call__
return self._torchdynamo_orig_callable(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 948, in __call__
result = self._inner_convert(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 472, in __call__
return _compile(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_utils_internal.py", line 84, in wrapper_function
return StrobelightCompileTimeProfiler.profile_compile_time(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_strobelight/compile_time_profiler.py", line 129, in profile_compile_time
return func(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 817, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 231, in time_wrapper
r = func(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 636, in compile_inner
out_code = transform_code_object(code, transform)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/bytecode_transformation.py", line 1185, in transform_code_object
transformations(instructions, code_options)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 178, in _fn
return fn(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/convert_frame.py", line 582, in transform
tracer.run()
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 2451, in run
super().run()
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 893, in run
while self.step():
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 805, in step
self.dispatch_table[inst.opcode](self, inst)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 499, in wrapper
return inner_fn(self, inst)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 1500, in CALL_FUNCTION_EX
self.call_function(fn, argsvars.items, kwargsvars)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 743, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/variables/lazy.py", line 132, in realize_and_forward
return getattr(self.realize(), name)(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/variables/nn_module.py", line 366, in call_function
tx.call_function(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/symbolic_convert.py", line 743, in call_function
self.push(fn.call_function(self, args, kwargs))
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/variables/nn_module.py", line 409, in call_function
return wrap_fx_proxy(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1713, in wrap_fx_proxy
return wrap_fx_proxy_cls(target_cls=TensorVariable, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/variables/builder.py", line 1798, in wrap_fx_proxy_cls
example_value = get_fake_value(proxy.node, tx, allow_non_graph_fake=True)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1853, in get_fake_value
raise TorchRuntimeError(str(e)).with_traceback(e.__traceback__) from None
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1785, in get_fake_value
ret_val = wrap_fake_exception(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1300, in wrap_fake_exception
return fn()
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1786, in <lambda>
lambda: run_node(tx.output, node, args, kwargs, nnmodule)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1921, in run_node
raise RuntimeError(make_error_message(e)).with_traceback(
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/utils.py", line 1908, in run_node
return nnmodule(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/nn/modules/linear.py", line 117, in forward
return F.linear(input, self.weight, self.bias)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/dtypes/utils.py", line 54, in _dispatch__torch_function__
return cls._ATEN_OP_OR_TORCH_FN_TABLE[func](func, types, args, kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/dtypes/utils.py", line 37, in wrapper
return func(f, types, args, kwargs)
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/dtypes/affine_quantized_tensor.py", line 844, in _
weight_tensor = weight_tensor.dequantize()
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torchao-0.4.0+gitd3b8d43-py3.9-linux-x86_64.egg/torchao/dtypes/affine_quantized_tensor.py", line 155, in dequantize
int_data, scale, zero_point = self.layout_tensor.get_plain()
torch._dynamo.exc.TorchRuntimeError: Failed running call_module fn_1(*(FakeTensor(..., device='cuda:0', size=(16, 128), dtype=torch.bfloat16),), **{}):
'FakeTensor' object has no attribute 'get_plain'
from user code:
File "/home/jerryzh/anaconda3/envs/ao_new/lib/python3.9/site-packages/torch/_dynamo/external_utils.py", line 38, in inner
return fn(*args, **kwargs)
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
----------------------------------------------------------------------
Ran 1 test in 11.173s
FAILED (errors=1)
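Reading the traceback bottom-up: inside the compiled region, F.linear dispatches through torchao's __torch_function__ into the affine_quantized_tensor fallback, which calls weight_tensor.dequantize() and then self.layout_tensor.get_plain(). By that point fake-tensor tracing has replaced layout_tensor with a plain FakeTensor, which has no get_plain method, hence the TorchRuntimeError. One common cause of a wrapper subclass degrading to a plain FakeTensor under Dynamo is a missing (or version-mismatched) traceable-subclass protocol; the sketch below shows that protocol with hypothetical names (MyLayout, inner), not the actual PlainAQTLayout code, and a real subclass would also need __torch_dispatch__:

    import torch

    class MyLayout(torch.Tensor):
        # Hypothetical wrapper subclass standing in for PlainAQTLayout.
        @staticmethod
        def __new__(cls, inner):
            return torch.Tensor._make_wrapper_subclass(cls, inner.shape, dtype=inner.dtype)

        def __init__(self, inner):
            self.inner = inner

        def __tensor_flatten__(self):
            # Name the inner-tensor attributes (plus opaque context) for tracing.
            return ["inner"], None

        @staticmethod
        def __tensor_unflatten__(inner_tensors, ctx, outer_size, outer_stride):
            # Rebuild the subclass, rather than a plain FakeTensor, during tracing.
            return MyLayout(inner_tensors["inner"])

        def get_plain(self):
            return self.inner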
I0812 13:06:26.802663 139713931614016 torch/_dynamo/utils.py:335] TorchDynamo compilation metrics:
I0812 13:06:26.802663 139713931614016 torch/_dynamo/utils.py:335] Function Runtimes (s)
I0812 13:06:26.802663 139713931614016 torch/_dynamo/utils.py:335] ------------------------------- --------------
I0812 13:06:26.802663 139713931614016 torch/_dynamo/utils.py:335] _compile.<locals>.compile_inner 0
V0812 13:06:26.802917 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:26.803025 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats evaluate_expr: CacheInfo(hits=14, misses=1, maxsize=256, currsize=1)
V0812 13:06:26.803108 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:26.803186 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:26.803320 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:26.803404 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:26.803472 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:26.803539 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats simplify: CacheInfo(hits=0, misses=1, maxsize=None, currsize=1)
V0812 13:06:26.803606 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:26.803671 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats replace: CacheInfo(hits=0, misses=1, maxsize=None, currsize=1)
V0812 13:06:26.803736 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=1, maxsize=None, currsize=1)
V0812 13:06:26.803818 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:26.803886 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_axioms: CacheInfo(hits=3, misses=2, maxsize=None, currsize=2)
V0812 13:06:26.803952 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats safe_expand: CacheInfo(hits=1, misses=1, maxsize=256, currsize=1)
V0812 13:06:26.804018 139713931614016 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
I0812 13:06:29.281517 140027842737984 torch/_dynamo/utils.py:335] TorchDynamo compilation metrics:
I0812 13:06:29.281517 140027842737984 torch/_dynamo/utils.py:335] Function Runtimes (s)
I0812 13:06:29.281517 140027842737984 torch/_dynamo/utils.py:335] ---------- --------------
V0812 13:06:29.281978 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats constrain_symbol_range: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282101 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats evaluate_expr: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:29.282184 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _simplify_floor_div: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282260 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_guard_rel: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:29.282350 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _find: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282423 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats has_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:29.282493 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats size_hint: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:29.282562 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats simplify: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282630 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _update_divisible: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282696 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats replace: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282790 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats _maybe_evaluate_static: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282869 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_implications: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.282940 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats get_axioms: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
V0812 13:06:29.283012 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats safe_expand: CacheInfo(hits=0, misses=0, maxsize=256, currsize=0)
V0812 13:06:29.283083 140027842737984 torch/fx/experimental/symbolic_shapes.py:116] lru_cache_stats uninteresting_files: CacheInfo(hits=0, misses=0, maxsize=None, currsize=0)
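The lru_cache_stats lines above are ordinary functools.lru_cache counters that the shape environment prints at interpreter exit, one per memoized helper; each CacheInfo record reads as in this minimal example (evaluate_expr here is just an illustrative function name echoing the log):

    import functools

    @functools.lru_cache(maxsize=256)
    def evaluate_expr(x):
        return x * 2

    evaluate_expr(1)
    evaluate_expr(1)
    print(evaluate_expr.cache_info())
    # CacheInfo(hits=1, misses=1, maxsize=256, currsize=1)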