Gist: archana-ramalingam/c01d32972ffe292f96509728d7279220
Created May 8, 2024 23:24
SHARK-TestSuite e2e tests - Non-deterministic behavior across consecutive runs
***************ReduceLogSum first run****************
(e2e) aramalin@aramalin-navi3:~/Documents/Nod/SHARK-TestSuite/e2eshark$ PYTHONPATH="/home/aramalin/Documents/Nod/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir:$PYTHONPATH" python ./run.py -c /home/aramalin/Documents/Nod/torch-mlir/build -i /home/aramalin/Documents/Nod/iree-build --tests onnx/operators/ReduceLogSum --verbose --cachedir /tmp/ --verbose --torchtolinalg
Starting e2eshark tests. Using 4 processes
Cache Directory: /tmp
Test run with arguments: {'backend': 'llvm-cpu', 'todtype': 'default', 'frameworks': ['pytorch'], 'groups': ['operators', 'combinations'], 'ireebuild': '/home/aramalin/Documents/Nod/iree-build', 'jobs': 4, 'torchmlirbuild': '/home/aramalin/Documents/Nod/torch-mlir/build', 'torchtolinalg': True, 'mode': 'onnx', 'norun': False, 'postprocess': False, 'report': False, 'reportformat': 'pipe', 'runfrom': 'model-run', 'runupto': 'inference', 'rundirectory': 'test-run', 'skiptestsfile': None, 'uploadtestsfile': None, 'tests': ['onnx/operators/ReduceLogSum'], 'testsfile': None, 'tolerance': None, 'torchmlirimport': 'fximport', 'verbose': True, 'zerotolerance': False, 'cachedir': '/tmp/', 'cleanup': False}
Torch MLIR build: /home/aramalin/Documents/Nod/torch-mlir/build
IREE build: /home/aramalin/Documents/Nod/iree-build
Test run directory: /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run
Since --tests or --testsfile was specified, --groups ignored
Framework:onnx mode=onnx backend=llvm-cpu runfrom=model-run runupto=inference
Test list: ['onnx/operators/ReduceLogSum']
Following tests will be run: ['onnx/operators/ReduceLogSum']
Running: onnx/operators/ReduceLogSum [ Proc: 1776458 ]
Running classical flow for test onnx/operators/ReduceLogSum
Running classical flow model-run for test onnx/operators/ReduceLogSum
Running torch MLIR generation for onnx/operators/ReduceLogSum
Launching: python runmodel.py --todtype default --mode onnx --outfileprefix ReduceLogSum 1> model-run.log 2>&1 [ Proc: 1776458 ]
Launching: PYTHONPATH=/home/aramalin/Documents/Nod/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir python -m torch_mlir.tools.import_onnx model.onnx -o ReduceLogSum.default.torch-onnx.mlir 1> onnx-import.log 2>&1 [ Proc: 1776458 ]
Launching: /home/aramalin/Documents/Nod/torch-mlir/build/bin/torch-mlir-opt -pass-pipeline='builtin.module(func.func(convert-torch-onnx-to-torch),torch-lower-to-backend-contract,func.func(cse,canonicalize),torch-backend-to-linalg-on-tensors-backend-pipeline)' ReduceLogSum.default.torch-onnx.mlir > ReduceLogSum.default.onnx.linalg.mlir 2>torch-mlir.log [ Proc: 1776458 ]
Running code generation for onnx/operators/ReduceLogSum
Launching: /home/aramalin/Documents/Nod/iree-build/tools/iree-compile --iree-input-demote-i64-to-i32 --iree-hal-target-backends=llvm-cpu --iree-input-type=tm_tensor ReduceLogSum.default.onnx.linalg.mlir > ReduceLogSum.default.vmfb 2>iree-compile.log [ Proc: 1776458 ]
Running inference for onnx/operators/ReduceLogSum
Loaded: /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run/onnx/operators/ReduceLogSum/ReduceLogSum.default.input.pt and /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run/onnx/operators/ReduceLogSum/ReduceLogSum.default.goldoutput.pt
input list length: 1, output list length: 1
Creating: inference_input.0.bin
Created: inference_input.n.bin files
Launching: /home/aramalin/Documents/Nod/iree-build/tools/iree-run-module --module=ReduceLogSum.default.vmfb --input="2x3x4xf32=@inference_input.0.bin" --output=@inference_output.0.bin > inference.log 2>&1 [ Proc: 1776458 ]
Out shape: torch.Size([1, 1, 1]) Dtype: torch.float32 Loading inference_output.0.bin
Test onnx/operators/ReduceLogSum failed [mismatch]
All tasks submitted to process pool completed
Completed run of e2e shark tests
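As a hedged illustration (not the harness's actual code), the `failed [mismatch]` verdict above is consistent with a tolerance-based tensor comparison between the golden PyTorch output and the IREE inference output; the function name and tolerances below are illustrative assumptions, not e2eshark's API:

```python
# Sketch of a tolerance-based output check; the real e2eshark
# comparison may use different tolerances and logic.
import numpy as np

def outputs_match(golden: np.ndarray, actual: np.ndarray,
                  rtol: float = 1e-4, atol: float = 1e-4) -> bool:
    # Even a small drift in a single element beyond the tolerance
    # flips the verdict from "passed" to "failed [mismatch]".
    return np.allclose(golden, actual, rtol=rtol, atol=atol)

golden = np.full((1, 1, 1), 3.1776, dtype=np.float32)  # shape matches the log: 1x1x1 f32
print(outputs_match(golden, golden))                        # identical -> pass
print(outputs_match(golden, golden + np.float32(0.01)))     # drifted   -> mismatch
```

Under this model, runs that fail and then pass on identical inputs point at nondeterminism in the produced output tensor rather than in the comparison itself.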
***************ReduceLogSum second run****************
(e2e) aramalin@aramalin-navi3:~/Documents/Nod/SHARK-TestSuite/e2eshark$ PYTHONPATH="/home/aramalin/Documents/Nod/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir:$PYTHONPATH" python ./run.py -c /home/aramalin/Documents/Nod/torch-mlir/build -i /home/aramalin/Documents/Nod/iree-build --tests onnx/operators/ReduceLogSum --verbose --cachedir /tmp/ --verbose --torchtolinalg
Starting e2eshark tests. Using 4 processes
Cache Directory: /tmp
Test run with arguments: {'backend': 'llvm-cpu', 'todtype': 'default', 'frameworks': ['pytorch'], 'groups': ['operators', 'combinations'], 'ireebuild': '/home/aramalin/Documents/Nod/iree-build', 'jobs': 4, 'torchmlirbuild': '/home/aramalin/Documents/Nod/torch-mlir/build', 'torchtolinalg': True, 'mode': 'onnx', 'norun': False, 'postprocess': False, 'report': False, 'reportformat': 'pipe', 'runfrom': 'model-run', 'runupto': 'inference', 'rundirectory': 'test-run', 'skiptestsfile': None, 'uploadtestsfile': None, 'tests': ['onnx/operators/ReduceLogSum'], 'testsfile': None, 'tolerance': None, 'torchmlirimport': 'fximport', 'verbose': True, 'zerotolerance': False, 'cachedir': '/tmp/', 'cleanup': False}
Torch MLIR build: /home/aramalin/Documents/Nod/torch-mlir/build
IREE build: /home/aramalin/Documents/Nod/iree-build
Test run directory: /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run
Since --tests or --testsfile was specified, --groups ignored
Framework:onnx mode=onnx backend=llvm-cpu runfrom=model-run runupto=inference
Test list: ['onnx/operators/ReduceLogSum']
Following tests will be run: ['onnx/operators/ReduceLogSum']
Running: onnx/operators/ReduceLogSum [ Proc: 1777358 ]
Running classical flow for test onnx/operators/ReduceLogSum
Running classical flow model-run for test onnx/operators/ReduceLogSum
Running torch MLIR generation for onnx/operators/ReduceLogSum
Launching: python runmodel.py --todtype default --mode onnx --outfileprefix ReduceLogSum 1> model-run.log 2>&1 [ Proc: 1777358 ]
Launching: PYTHONPATH=/home/aramalin/Documents/Nod/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir python -m torch_mlir.tools.import_onnx model.onnx -o ReduceLogSum.default.torch-onnx.mlir 1> onnx-import.log 2>&1 [ Proc: 1777358 ]
Launching: /home/aramalin/Documents/Nod/torch-mlir/build/bin/torch-mlir-opt -pass-pipeline='builtin.module(func.func(convert-torch-onnx-to-torch),torch-lower-to-backend-contract,func.func(cse,canonicalize),torch-backend-to-linalg-on-tensors-backend-pipeline)' ReduceLogSum.default.torch-onnx.mlir > ReduceLogSum.default.onnx.linalg.mlir 2>torch-mlir.log [ Proc: 1777358 ]
Running code generation for onnx/operators/ReduceLogSum
Launching: /home/aramalin/Documents/Nod/iree-build/tools/iree-compile --iree-input-demote-i64-to-i32 --iree-hal-target-backends=llvm-cpu --iree-input-type=tm_tensor ReduceLogSum.default.onnx.linalg.mlir > ReduceLogSum.default.vmfb 2>iree-compile.log [ Proc: 1777358 ]
Running inference for onnx/operators/ReduceLogSum
Loaded: /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run/onnx/operators/ReduceLogSum/ReduceLogSum.default.input.pt and /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run/onnx/operators/ReduceLogSum/ReduceLogSum.default.goldoutput.pt
input list length: 1, output list length: 1
Creating: inference_input.0.bin
Created: inference_input.n.bin files
Launching: /home/aramalin/Documents/Nod/iree-build/tools/iree-run-module --module=ReduceLogSum.default.vmfb --input="2x3x4xf32=@inference_input.0.bin" --output=@inference_output.0.bin > inference.log 2>&1 [ Proc: 1777358 ]
Out shape: torch.Size([1, 1, 1]) Dtype: torch.float32 Loading inference_output.0.bin
Test onnx/operators/ReduceLogSum failed [mismatch]
All tasks submitted to process pool completed
Completed run of e2e shark tests
***************ReduceLogSum third run****************
(e2e) aramalin@aramalin-navi3:~/Documents/Nod/SHARK-TestSuite/e2eshark$ PYTHONPATH="/home/aramalin/Documents/Nod/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir:$PYTHONPATH" python ./run.py -c /home/aramalin/Documents/Nod/torch-mlir/build -i /home/aramalin/Documents/Nod/iree-build --tests onnx/operators/ReduceLogSum --verbose --cachedir /tmp/ --verbose --torchtolinalg
Starting e2eshark tests. Using 4 processes
Cache Directory: /tmp
Test run with arguments: {'backend': 'llvm-cpu', 'todtype': 'default', 'frameworks': ['pytorch'], 'groups': ['operators', 'combinations'], 'ireebuild': '/home/aramalin/Documents/Nod/iree-build', 'jobs': 4, 'torchmlirbuild': '/home/aramalin/Documents/Nod/torch-mlir/build', 'torchtolinalg': True, 'mode': 'onnx', 'norun': False, 'postprocess': False, 'report': False, 'reportformat': 'pipe', 'runfrom': 'model-run', 'runupto': 'inference', 'rundirectory': 'test-run', 'skiptestsfile': None, 'uploadtestsfile': None, 'tests': ['onnx/operators/ReduceLogSum'], 'testsfile': None, 'tolerance': None, 'torchmlirimport': 'fximport', 'verbose': True, 'zerotolerance': False, 'cachedir': '/tmp/', 'cleanup': False}
Torch MLIR build: /home/aramalin/Documents/Nod/torch-mlir/build
IREE build: /home/aramalin/Documents/Nod/iree-build
Test run directory: /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run
Since --tests or --testsfile was specified, --groups ignored
Framework:onnx mode=onnx backend=llvm-cpu runfrom=model-run runupto=inference
Test list: ['onnx/operators/ReduceLogSum']
Following tests will be run: ['onnx/operators/ReduceLogSum']
Running: onnx/operators/ReduceLogSum [ Proc: 1777648 ]
Running classical flow for test onnx/operators/ReduceLogSum
Running classical flow model-run for test onnx/operators/ReduceLogSum
Running torch MLIR generation for onnx/operators/ReduceLogSum
Launching: python runmodel.py --todtype default --mode onnx --outfileprefix ReduceLogSum 1> model-run.log 2>&1 [ Proc: 1777648 ]
Launching: PYTHONPATH=/home/aramalin/Documents/Nod/torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir python -m torch_mlir.tools.import_onnx model.onnx -o ReduceLogSum.default.torch-onnx.mlir 1> onnx-import.log 2>&1 [ Proc: 1777648 ]
Launching: /home/aramalin/Documents/Nod/torch-mlir/build/bin/torch-mlir-opt -pass-pipeline='builtin.module(func.func(convert-torch-onnx-to-torch),torch-lower-to-backend-contract,func.func(cse,canonicalize),torch-backend-to-linalg-on-tensors-backend-pipeline)' ReduceLogSum.default.torch-onnx.mlir > ReduceLogSum.default.onnx.linalg.mlir 2>torch-mlir.log [ Proc: 1777648 ]
Running code generation for onnx/operators/ReduceLogSum
Launching: /home/aramalin/Documents/Nod/iree-build/tools/iree-compile --iree-input-demote-i64-to-i32 --iree-hal-target-backends=llvm-cpu --iree-input-type=tm_tensor ReduceLogSum.default.onnx.linalg.mlir > ReduceLogSum.default.vmfb 2>iree-compile.log [ Proc: 1777648 ]
Running inference for onnx/operators/ReduceLogSum
Loaded: /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run/onnx/operators/ReduceLogSum/ReduceLogSum.default.input.pt and /home/aramalin/Documents/Nod/SHARK-TestSuite/e2eshark/test-run/onnx/operators/ReduceLogSum/ReduceLogSum.default.goldoutput.pt
input list length: 1, output list length: 1
Creating: inference_input.0.bin
Created: inference_input.n.bin files
Launching: /home/aramalin/Documents/Nod/iree-build/tools/iree-run-module --module=ReduceLogSum.default.vmfb --input="2x3x4xf32=@inference_input.0.bin" --output=@inference_output.0.bin > inference.log 2>&1 [ Proc: 1777648 ]
Out shape: torch.Size([1, 1, 1]) Dtype: torch.float32 Loading inference_output.0.bin
Test onnx/operators/ReduceLogSum passed
All tasks submitted to process pool completed
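For reference, the shapes in the logs (input `2x3x4xf32`, output `1x1x1`) are consistent with ONNX ReduceLogSum reducing over all axes with keepdims enabled, i.e. `log(sum(x))`. A minimal sketch of that reference semantics, assuming reduction over all axes (the test's actual `axes` attribute is not shown in the logs):

```python
# Reference semantics of ONNX ReduceLogSum over all axes with
# keepdims=1: a rank-3 input reduces to a 1x1x1 tensor.
import numpy as np

def reduce_log_sum(x: np.ndarray) -> np.ndarray:
    # ReduceLogSum(x) = log(sum(x)); keepdims preserves rank.
    return np.log(np.sum(x, axis=(0, 1, 2), keepdims=True))

rng = np.random.default_rng(0)
# Positive inputs keep log() well-defined for this sketch.
x = np.abs(rng.standard_normal((2, 3, 4))).astype(np.float32)
y = reduce_log_sum(x)
print(y.shape)  # (1, 1, 1)
```

A golden output computed this way is fully deterministic for a fixed input, which is what makes the fail/fail/pass sequence above notable: the same cached input and golden tensor produced different verdicts across runs.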
torch-mlir.log: