@jerryzh168
Created August 3, 2021 21:39
[TensorRT] VERBOSE: Removing (Unnamed Layer* 7) [Quantize]_output.dequant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 10) [Convolution]_output.quant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 12) [Quantize]_output.dequant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 16) [Activation]_output.quant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 18) [Quantize]_output.dequant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 22) [Activation]_output.quant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 24) [Quantize]_output.dequant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 27) [Convolution]_output.quant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 29) [Quantize]_output.dequant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 33) [Activation]_output.quant.scale
[TensorRT] VERBOSE: Removing (Unnamed Layer* 35) [Quantize]_output.dequant.scale
[TensorRT] VERBOSE: QDQ graph optimizer forward pass - DQ motions and fusions
[TensorRT] VERBOSE: EltReluFusion: Fusing add_2 with relu_5
[TensorRT] VERBOSE: EltReluFusion: Fusing add_3 with relu_7
[TensorRT] VERBOSE: ConvReluFusion: Fusing conv2d_4 with relu_4
[TensorRT] VERBOSE: ConvReluFusion: Fusing conv2d_6 with relu_6
[TensorRT] VERBOSE: QDQ graph optimizer quantization pass - Generate quantized ops
[TensorRT] VERBOSE: QuantizeDoubleInputNodes: fusing (Unnamed Layer* 16) [Activation]_output.quant into add_2 + relu_5
[TensorRT] VERBOSE: QuantizeDoubleInputNodes: fusing ((Unnamed Layer* 12) [Quantize]_output.dequant and (Unnamed Layer* 1) [Quantize]_output.dequant) into add_2 + relu_5
[TensorRT] VERBOSE: Removing (Unnamed Layer* 16) [Activation]_output.quant
[TensorRT] VERBOSE: Removing (Unnamed Layer* 12) [Quantize]_output.dequant
[TensorRT] VERBOSE: Removing (Unnamed Layer* 1) [Quantize]_output.dequant
[TensorRT] VERBOSE: QuantizeDoubleInputNodes: fusing (Unnamed Layer* 33) [Activation]_output.quant into add_3 + relu_7
[TensorRT] VERBOSE: QuantizeDoubleInputNodes: fusing ((Unnamed Layer* 29) [Quantize]_output.dequant and (Unnamed Layer* 18) [Quantize]_output.dequant) into add_3 + relu_7
[TensorRT] VERBOSE: Removing (Unnamed Layer* 33) [Activation]_output.quant
[TensorRT] VERBOSE: Removing (Unnamed Layer* 29) [Quantize]_output.dequant
[TensorRT] VERBOSE: Removing (Unnamed Layer* 18) [Quantize]_output.dequant
[TensorRT] ERROR: 2: [graphOptimizer.cpp::sameExprValues::587] Error Code 2: Internal Error (Assertion lhs.expr failed.)
/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values.
To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at caffe2/aten/src/ATen/native/BinaryOps.cpp:577.)
return torch.floor_divide(self, other)
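The UserWarning above is about rounding direction: the deprecated `torch.floor_divide` actually truncates toward zero, while true floor division rounds toward negative infinity, and the two disagree exactly when the quotient is negative. A torch-free sketch of that difference (plain Python; the actual fix is `torch.div(a, b, rounding_mode='trunc')` or `rounding_mode='floor'` as the warning says):

```python
# trunc vs floor rounding, illustrated without torch.
# trunc_div mirrors what the deprecated torch.floor_divide does today
# (i.e. torch.div(..., rounding_mode='trunc')); floor_div mirrors
# torch.div(..., rounding_mode='floor').

def trunc_div(a, b):
    # rounds toward zero
    return int(a / b)

def floor_div(a, b):
    # rounds toward negative infinity
    return a // b

print(trunc_div(-7, 2))  # -3 (toward 0)
print(floor_div(-7, 2))  # -4 (toward -inf)
print(trunc_div(7, 2), floor_div(7, 2))  # 3 3 (they agree for positive quotients)
```

For non-negative operands the two rounding modes coincide, which is why code like the indexing helper above can silently depend on either behavior until a negative value shows up.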
/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/deeplearning/trt/fx2trt/converter/acc_ops/acc_op_converter.py:140: DeprecationWarning: Use add_convolution_nd instead.
layer = network.add_convolution(
/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/deeplearning/trt/fx2trt/converter/acc_ops/acc_op_converter.py:149: DeprecationWarning: Use stride_nd instead.
layer.stride = kwargs["stride"]
/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/deeplearning/trt/fx2trt/converter/acc_ops/acc_op_converter.py:150: DeprecationWarning: Use padding_nd instead.
layer.padding = kwargs["padding"]
/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/deeplearning/trt/fx2trt/converter/acc_ops/acc_op_converter.py:151: DeprecationWarning: Use dilation_nd instead.
layer.dilation = kwargs["dilation"]
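The four DeprecationWarnings above all point at the same migration: TensorRT's 2D-only convolution API (`add_convolution`, `stride`, `padding`, `dilation`) is superseded by the N-dimensional variants. A hedged sketch of what the converter in `acc_op_converter.py` might look like after that migration (function name and argument plumbing are hypothetical; the TensorRT calls `add_convolution_nd`, `stride_nd`, `padding_nd`, and `dilation_nd` are the replacements the warnings name):

```python
# Hypothetical sketch of the *_nd migration; assumes a TensorRT build
# new enough to expose the N-dimensional convolution API.
try:
    import tensorrt as trt
    HAVE_TRT = True
except ImportError:  # tensorrt not installed in this environment
    HAVE_TRT = False

def add_conv2d_nd(network, input_tensor, weight, bias, kwargs):
    # add_convolution_nd replaces the deprecated add_convolution
    layer = network.add_convolution_nd(
        input=input_tensor,
        num_output_maps=weight.shape[0],
        kernel_shape=(weight.shape[2], weight.shape[3]),
        kernel=weight,
        bias=bias,
    )
    # the *_nd attributes replace the deprecated 2D-only ones
    layer.stride_nd = kwargs["stride"]
    layer.padding_nd = kwargs["padding"]
    layer.dilation_nd = kwargs["dilation"]
    return layer
```

The `_nd` attributes take `Dims` / tuples of arbitrary rank, so the same converter path can serve 2D and 3D convolutions.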
/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/torch/fx/experimental/fx2trt/fx2trt.py:222: DeprecationWarning: Use build_serialized_network instead.
engine = self.builder.build_engine(self.network, builder_config)
Traceback (most recent call last):
  File "<string>", line 38, in <module>
  File "<string>", line 36, in __run
  File "/usr/local/fbcode/platform009/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/local/fbcode/platform009/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/deeplearning/trt/fx2trt/quantized_resnet_test.py", line 72, in <module>
    int8_trt = build_int8_trt(rn18)
  File "/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/deeplearning/trt/fx2trt/quantized_resnet_test.py", line 44, in build_int8_trt
    engine, input_names, output_names = interp.run(fp16_mode=False, int8_mode=True)
  File "/data/users/jerryzh/fbsource/fbcode/buck-out/opt/gen/aab7ed39/deeplearning/trt/fx2trt/quantized_resnet_test#link-tree/torch/fx/experimental/fx2trt/fx2trt.py", line 224, in run
    assert(engine)
AssertionError
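The `AssertionError` is a symptom, not the cause: `build_engine` returns `None` when the build fails (here, on the `graphOptimizer.cpp` internal error logged above), and `assert(engine)` in `fx2trt.py` then fires with no context. The earlier DeprecationWarning suggests `build_serialized_network`; a hedged sketch of that migration with a clearer failure path (function name is hypothetical; `builder`, `network`, and `builder_config` follow the names in the traceback):

```python
# Hypothetical sketch replacing the deprecated builder.build_engine call;
# assumes a TensorRT version that provides build_serialized_network.
try:
    import tensorrt as trt
    HAVE_TRT = True
except ImportError:  # tensorrt not installed in this environment
    HAVE_TRT = False

def build_engine_nd(builder, network, builder_config, logger):
    # build_serialized_network replaces the deprecated build_engine; like
    # build_engine, it returns None when the build fails.
    plan = builder.build_serialized_network(network, builder_config)
    if plan is None:
        # Raise a descriptive error instead of a bare assert(engine).
        raise RuntimeError(
            "TensorRT engine build failed; check the TensorRT error log above"
        )
    runtime = trt.Runtime(logger)
    return runtime.deserialize_cuda_engine(plan)
```

Raising an explicit `RuntimeError` on a `None` result would make failures like this one point at the TensorRT log instead of a bare `AssertionError`.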