Created October 21, 2023 06:30
(partial) EasyBuild log for failed build of /dev/shm/eb-8rkx7t__/files_pr18807/p/PyTorch/PyTorch-2.0.1-foss-2022b.eb (PR(s) #18807)
test_index_copy_scalars_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_fill_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_put_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_put_non_accumulate_deterministic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3063: UserWarning: index_reduce() is in beta and the API may change at any time. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1110.) | |
dest.index_reduce_(dim, idx, src, reduce, include_self=include_self) | |
ok | |
test_index_reduce_reduce_amax_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amax_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_amin_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_mean_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_invalid_shapes_grid_sampler_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_is_set_to_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_is_signed_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_complex32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_large_cumprod_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_large_cumsum_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_log_normal_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_log_normal_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_log_normal_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_log_normal_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_logcumsumexp_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_bool_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bfloat16_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bfloat16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bool_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bool_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex128_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex128_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex64_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float16_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float32_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float32_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float64_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int16_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int32_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int32_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int64_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int8_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int8_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_uint8_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_uint8_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_bool_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_large_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_masked_scatter_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_select_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_discontiguous_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_clone_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_consistency_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_cpu_and_cuda_ops_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_memory_format_empty_like_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_factory_like_functions_preserve_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_operators_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_preserved_after_permute_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_propagation_rules_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_to_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_type_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_type_shortcuts_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_module_share_memory_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_cpu_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_deterministic_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_deterministic_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_deterministic_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_device_constrain_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_empty_w_replacement_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_empty_wo_replacement_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_gpu_device_constrain_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'fewer than 2 devices detected' | |
test_multinomial_rng_state_advance_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test' | |
test_narrow_copy_non_contiguous_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_narrow_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AdaptiveAvgPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AdaptiveAvgPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AdaptiveMaxPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AvgPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_CTCLoss_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_EmbeddingBag_max_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_FractionalMaxPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_FractionalMaxPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool1d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU' | |
test_nondeterministic_alert_MaxUnpool1d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool1d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool2d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU' | |
test_nondeterministic_alert_MaxUnpool2d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool2d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool3d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU' | |
test_nondeterministic_alert_MaxUnpool3d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool3d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_NLLLoss_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReflectionPad1d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReflectionPad2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReflectionPad3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReplicationPad1d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReplicationPad2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReplicationPad3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_bincount_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_grid_sample_2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_grid_sample_3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_histc_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_bicubic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_bilinear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_linear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_trilinear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_kthvalue_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_median_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:1696: UserWarning: An output with one or more elements was resized since it had shape [10], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/Resize.cpp:26.) | |
torch.median(a, 0, out=(result, indices)) | |
ok | |
test_nondeterministic_alert_put_accumulate_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_put_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_normal_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_normal_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_normal_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nullary_op_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_pairwise_distance_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_pdist_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_pdist_norm_large_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_pickle_gradscaler_cpu (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/eb-8rkx7t__/tmpqxapmru7/lib/python3.10/site-packages/torch/cuda/amp/grad_scaler.py:120: UserWarning: torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling. | |
warnings.warn("torch.cuda.amp.GradScaler is enabled, but CUDA is not available. Disabling.") | |
ok | |
test_pin_memory_from_constructor_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_put_accumulate_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_repeat_interleave_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scalar_check_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_bool_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_non_unique_index_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_one_dim_deterministic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_to_large_input_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_bool_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_scatter_reduce_non_unique_index_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test_torch.py:3540: UserWarning: The reduce argument of torch.scatter with Tensor src is deprecated and will be removed in a future PyTorch release. Use torch.scatter_reduce instead for more reduction options. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:224.) | |
input.scatter_(0, index, src, reduce=operation) | |
ok | |
test_scatter_reduce_non_unique_index_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_to_large_input_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_zero_size_index_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_serialization_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_set_storage_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_shift_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_skip_xla_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_all_devices_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_qint32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_qint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_quint4x2 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_quint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_strides_propagation_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_sync_warning_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_take_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_set_errors_multigpu_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'fewer than 2 devices detected' | |
test_tensor_shape_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_ternary_op_mem_overlap_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_unfold_all_devices_and_dtypes_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_unfold_scalars_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_untyped_storage_meta_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_warn_always_caught_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_where_scalar_handcrafted_values_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_cuda_vitals_gpu_only_cpu (__main__.TestVitalSignsCudaCPU) ... skipped 'Only runs on cuda' | |
---------------------------------------------------------------------- | |
Ran 840 tests in 11.088s | |
OK (skipped=39) | |
[TORCH_VITAL] Dataloader.enabled True | |
[TORCH_VITAL] Dataloader.basic_unit_test TEST_VALUE_STRING | |
[TORCH_VITAL] CUDA.used False | |
##[endgroup] | |
FINISHED PRINTING LOG FILE of test_torch (/dev/shm/PyTorch/2.0.1/foss-2022b/pytorch-v2.0.1/test/test-reports/test_torch_cjlh5_t7.log) | |
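Note (not part of the log): the UserWarnings in the test_torch output above come from the test suite deliberately exercising deprecated call patterns, so they are expected during this build. A minimal sketch of the non-deprecated equivalents, assuming PyTorch 2.0.x as built here; the tensor names below are made up for illustration:

```python
import torch

src = torch.arange(6, dtype=torch.float32)

# masked_select: pass a torch.bool mask; uint8 masks trigger the deprecation
# warning repeated in the test_torch log above.
mask = src > 2                        # comparison yields a torch.bool tensor
selected = torch.masked_select(src, mask)

# scatter_(..., reduce=...) with a Tensor src is deprecated; scatter_reduce_ is
# the replacement suggested by the warning above.
index = torch.tensor([0, 1, 0, 1, 0, 1])
out = torch.zeros(2)
out.scatter_reduce_(0, index, src, reduce="sum", include_self=False)
```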
test_quantization failed! | |
test_sparse failed! | |
test_unary_ufuncs failed! | |
distributed/_tensor/test_dtensor_ops failed! | |
test_ops_gradients failed! | |
test_sparse_csr failed! | |
== 2023-10-21 08:30:40,759 filetools.py:383 INFO Path /dev/shm/eb-8rkx7t__/tmpqxapmru7 successfully removed. | |
== 2023-10-21 08:30:41,793 pytorch.py:303 WARNING Found 1 individual tests that exited with an error: test_qlinear_with_input_q_dq_qweight_dq_output_fp32 | |
Found 21 individual tests with failed assertions: test_consistency_SparseBSC_sgn_cpu_uint8, test_consistency_SparseBSC_sign_cpu_uint8, test_consistency_SparseBSR_sgn_cpu_uint8, test_consistency_SparseBSR_sign_cpu_uint8, test_consistency_SparseCSC_sgn_cpu_uint8, test_consistency_SparseCSC_sign_cpu_uint8, test_consistency_SparseCSR_sgn_cpu_uint8, test_consistency_SparseCSR_sign_cpu_uint8, test_contig_vs_every_other_sgn_cpu_uint8, test_contig_vs_every_other_sign_cpu_uint8, test_fn_grad_linalg_det_singular_cpu_float64, test_non_contig_sgn_cpu_uint8, test_non_contig_sign_cpu_uint8, test_reference_numerics_normal_sgn_cpu_uint8, test_reference_numerics_normal_sign_cpu_uint8, test_reference_numerics_small_sgn_cpu_uint8, test_reference_numerics_small_sign_cpu_uint8, test_sigmoid, test_sigmoid_non_observed, test_sparse_consistency_sgn_cpu_uint8, test_sparse_consistency_sign_cpu_uint8 | |
== 2023-10-21 08:30:44,280 pytorch.py:417 WARNING 21 test failures, 1 test error (out of 129472): | |
test_quantization (997 total tests, failures=2, errors=1, skipped=75) | |
test_sparse (2618 total tests, failures=2, skipped=329) | |
test_unary_ufuncs (12664 total tests, failures=8, skipped=644, expected failures=14) | |
distributed/_tensor/test_dtensor_ops (637 total tests, skipped=36, expected failures=407, unexpected successes=1) | |
test_sparse_csr (4357 total tests, failures=8, skipped=716) | |
test_ops_gradients (1 failed, 1892 passed, 3055 skipped, 42 xfailed, 1 warning, 2 rerun) | |
The PyTorch test suite is known to include some flaky tests, which may fail depending on the specifics of the system or the context in which they are run. For this PyTorch installation, EasyBuild allows up to 2 tests to fail. We recommend to double check that the failing tests listed above are known to be flaky, or do not affect your intended usage of PyTorch. In case of doubt, reach out to the EasyBuild community (via GitHub, Slack, or mailing list). | |
== 2023-10-21 08:30:44,282 build_log.py:171 ERROR EasyBuild crashed with an error (at easybuild/base/exceptions.py:126 in __init__): Too many failed tests (22), maximum allowed is 2 (at easybuild/easyblocks/p/pytorch.py:421 in test_step) | |
== 2023-10-21 08:30:44,283 build_log.py:267 INFO ... (took 5 hours 25 mins 38 secs) | |
== 2023-10-21 08:30:44,283 filetools.py:2012 INFO Removing lock /apps/Arch/software/.locks/_apps_Arch_software_PyTorch_2.0.1-foss-2022b.lock... | |
== 2023-10-21 08:30:44,284 filetools.py:383 INFO Path /apps/Arch/software/.locks/_apps_Arch_software_PyTorch_2.0.1-foss-2022b.lock successfully removed. | |
== 2023-10-21 08:30:44,284 filetools.py:2016 INFO Lock removed: /apps/Arch/software/.locks/_apps_Arch_software_PyTorch_2.0.1-foss-2022b.lock | |
== 2023-10-21 08:30:44,284 easyblock.py:4277 WARNING build failed (first 300 chars): Too many failed tests (22), maximum allowed is 2 | |
== 2023-10-21 08:30:44,284 easyblock.py:328 INFO Closing log for application name PyTorch version 2.0.1 |
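Note (not part of the log): the threshold "maximum allowed is 2" reported by the pytorch.py easyblock is normally set per easyconfig. A hypothetical sketch of the relevant easyconfig lines follows, assuming the parameter is named max_failed_tests as in recent EasyBuild PyTorch easyconfigs; raise it only for failures confirmed to be flaky or harmless on the target system.

```python
# Hypothetical excerpt of PyTorch-2.0.1-foss-2022b.eb (easyconfigs use Python syntax).
name = 'PyTorch'
version = '2.0.1'
toolchain = {'name': 'foss', 'version': '2022b'}

# Number of test failures the PyTorch easyblock tolerates before aborting with
# "Too many failed tests"; parameter name assumed, value 2 matches the log above.
max_failed_tests = 2
```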