Created: October 21, 2023 06:30
(partial) EasyBuild log for failed build of /dev/shm/eb-8rkx7t__/files_pr18807/p/PyTorch/PyTorch-2.0.1-foss-2022b-CUDA-12.0.0.eb (PR(s) #18807)
test_index_reduce_reduce_prod_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_reduce_reduce_prod_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_index_select_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_invalid_shapes_grid_sampler_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_is_set_to_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_is_signed_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_complex32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_item_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_large_cumprod_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_large_cumsum_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_log_normal_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_log_normal_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_log_normal_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_log_normal_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_logcumsumexp_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_lognormal_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_bool_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bfloat16_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bfloat16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bool_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_bool_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex128_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex128_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex64_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_complex64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float16_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float32_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float32_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float64_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_float64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int16_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int32_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int32_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int64_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int8_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_int8_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_uint8_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_cpu_uint8_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_fill_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_bool_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_scatter_large_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_masked_scatter_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_masked_select_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
test_masked_select_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3707: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1855.) | |
torch.masked_select(src, mask, out=dst3) | |
ok | |
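The repeated UserWarning above is PyTorch's deprecation notice for uint8 masks passed to masked_select. A minimal sketch of the bool-mask usage the warning recommends (illustrative only, not part of the captured log; assumes a PyTorch 2.x API):

    import torch

    src = torch.randn(4)
    mask = torch.tensor([1, 0, 1, 0], dtype=torch.bool)  # use a torch.bool mask instead of torch.uint8
    dst = torch.empty(0)                                  # zero-element out tensor can be resized freely
    torch.masked_select(src, mask, out=dst)               # no deprecation warning with a bool mask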
test_masked_select_discontiguous_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_clone_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_consistency_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_cpu_and_cuda_ops_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_memory_format_empty_like_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_factory_like_functions_preserve_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_operators_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_preserved_after_permute_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_propagation_rules_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_to_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_type_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_memory_format_type_shortcuts_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_module_share_memory_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_cpu_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_deterministic_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_deterministic_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_deterministic_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_device_constrain_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_multinomial_empty_w_replacement_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_empty_wo_replacement_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_multinomial_gpu_device_constrain_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'fewer than 2 devices detected' | |
test_multinomial_rng_state_advance_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test' | |
test_narrow_copy_non_contiguous_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_narrow_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AdaptiveAvgPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AdaptiveAvgPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AdaptiveMaxPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_AvgPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_CTCLoss_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_EmbeddingBag_max_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_FractionalMaxPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_FractionalMaxPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool1d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU' | |
test_nondeterministic_alert_MaxUnpool1d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool1d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool2d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU' | |
test_nondeterministic_alert_MaxUnpool2d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool2d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool3d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU' | |
test_nondeterministic_alert_MaxUnpool3d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_MaxUnpool3d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_NLLLoss_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReflectionPad1d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReflectionPad2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReflectionPad3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReplicationPad1d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReplicationPad2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_ReplicationPad3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_bincount_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_cumsum_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_grid_sample_2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_grid_sample_3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_histc_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_bicubic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_bilinear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_linear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_interpolate_trilinear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_kthvalue_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_median_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:1696: UserWarning: An output with one or more elements was resized since it had shape [10], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/Resize.cpp:26.) | |
torch.median(a, 0, out=(result, indices)) | |
ok | |
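The UserWarning above is raised because the test passes out tensors of shape [10] to a dim-0 reduction that produces scalar outputs. A minimal sketch of the remedy the warning itself suggests (illustrative only, not part of the captured log):

    import torch

    a = torch.randn(10)
    result = torch.empty(10)                      # wrong shape for the scalar result of a dim-0 median
    indices = torch.empty(10, dtype=torch.long)
    result.resize_(0)                             # per the warning: shrink reused out tensors to zero elements
    indices.resize_(0)
    torch.median(a, 0, out=(result, indices))     # zero-element out tensors are resized silently, no warning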
test_nondeterministic_alert_put_accumulate_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nondeterministic_alert_put_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_normal_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_normal_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_normal_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_nullary_op_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_pairwise_distance_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_pdist_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_pdist_norm_large_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_pickle_gradscaler_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_pin_memory_from_constructor_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_put_accumulate_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_accumulate_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_put_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_repeat_interleave_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scalar_check_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_bool_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_non_unique_index_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_one_dim_deterministic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_add_to_large_input_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_bool_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_scatter_reduce_non_unique_index_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test_torch.py:3540: UserWarning: The reduce argument of torch.scatter with Tensor src is deprecated and will be removed in a future PyTorch release. Use torch.scatter_reduce instead for more reduction options. (Triggered internally at /dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:224.) | |
input.scatter_(0, index, src, reduce=operation) | |
ok | |
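The UserWarning above flags the deprecated reduce= argument of Tensor.scatter_ with a Tensor src. A minimal sketch of the torch.scatter_reduce replacement it points to (illustrative only, not part of the captured log; "prod" is assumed here as the scatter_reduce counterpart of the old "multiply" reduction):

    import torch

    input = torch.ones(4)
    index = torch.tensor([0, 1, 1, 3])
    src = torch.tensor([2.0, 3.0, 4.0, 5.0])
    # deprecated form exercised by the test: input.scatter_(0, index, src, reduce="multiply")
    out = input.scatter_reduce(0, index, src, reduce="prod")  # recommended replacement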
test_scatter_reduce_non_unique_index_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_non_unique_index_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_operations_to_large_input_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_reduce_scalar_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_to_large_input_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_scatter_zero_size_index_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_serialization_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_set_storage_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_set_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_shift_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_skip_xla_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_all_devices_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_errors_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_meta_from_tensor_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_qint32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_qint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_quint4x2 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_quint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_storage_setitem_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_strides_propagation_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_sync_warning_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda' | |
test_take_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_take_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_from_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_set_errors_multigpu_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'fewer than 2 devices detected' | |
test_tensor_shape_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_tensor_storage_type_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_ternary_op_mem_overlap_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_typed_storage_meta_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_unfold_all_devices_and_dtypes_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_unfold_scalars_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_uniform_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_untyped_storage_meta_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_warn_always_caught_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_where_scalar_handcrafted_values_cpu (__main__.TestTorchDeviceTypeCPU) ... ok | |
test_cuda_vitals_gpu_only_cpu (__main__.TestVitalSignsCudaCPU) ... skipped 'Only runs on cuda' | |
---------------------------------------------------------------------- | |
Ran 840 tests in 16.018s | |
OK (skipped=40) | |
[TORCH_VITAL] Dataloader.enabled True | |
[TORCH_VITAL] Dataloader.basic_unit_test TEST_VALUE_STRING | |
[TORCH_VITAL] CUDA.used true | |
##[endgroup] | |
FINISHED PRINTING LOG FILE of test_torch (/dev/shm/PyTorch/2.0.1/foss-2022b-CUDA-12.0.0/pytorch-v2.0.1/test/test-reports/test_torch_sri46wz0.log) | |
inductor/test_smoke failed! | |
test_jit failed! | |
test_sparse failed! | |
test_quantization failed! | |
test_unary_ufuncs failed! | |
distributed/_composable/test_replicate failed! | |
distributed/_tensor/test_dtensor_ops failed! | |
distributed/optim/test_zero_redundancy_optimizer failed! | |
distributed/test_c10d_gloo failed! | |
distributed/test_c10d_nccl failed! | |
distributed/test_c10d_pypg failed! | |
nn/test_convolution failed! | |
test_cpp_extensions_aot_ninja failed! | |
test_cpp_extensions_aot_no_ninja failed! | |
test_cuda_primary_ctx failed! | |
test_jit_legacy failed! | |
test_jit_profiling failed! | |
test_nn failed! | |
test_ops_gradients failed! | |
test_sparse_csr failed! | |
== 2023-10-21 02:56:33,449 filetools.py:383 INFO Path /dev/shm/eb-8rkx7t__/tmpbaeiz2xf successfully removed. | |
== 2023-10-21 02:56:34,606 pytorch.py:303 WARNING Found 20 individual tests that exited with an error: test_compile_decorator, test_mlp, test_nccl_warn_not_in_group_debug_detail, test_python_submodule_script, test_python_submodule_script, test_python_submodule_script, test_qlinear_with_input_q_dq_qweight_dq_output_fp32, test_replicate_with_kwargs, test_shared_module, test_shared_module, test_shared_module, test_tensor_sharing, test_tensor_sharing, test_tensor_sharing, test_tensor_sharing_with_forward, test_tensor_sharing_with_forward, test_tensor_sharing_with_forward, test_traced_module, test_traced_module, test_traced_module | |
Found 38 individual tests with failed assertions: test_Conv1d_pad_same_cuda_tf32, test_Conv2d_groups_nobias_v2, test_Conv2d_no_bias_cuda_tf32, test_consistency_SparseBSC_sgn_cpu_uint8, test_consistency_SparseBSC_sign_cpu_uint8, test_consistency_SparseBSR_sgn_cpu_uint8, test_consistency_SparseBSR_sign_cpu_uint8, test_consistency_SparseCSC_sgn_cpu_uint8, test_consistency_SparseCSC_sign_cpu_uint8, test_consistency_SparseCSR_sgn_cpu_uint8, test_consistency_SparseCSR_sign_cpu_uint8, test_contig_vs_every_other_sgn_cpu_uint8, test_contig_vs_every_other_sign_cpu_uint8, test_copy, test_ddp_with_pypg_with_grad_views, test_ddp_with_pypg_with_grad_views, test_fn_grad_linalg_det_singular_cpu_float64, test_gloo_backend_cpu_module, test_non_contig_sgn_cpu_uint8, test_non_contig_sign_cpu_uint8, test_pin_memory, test_reference_numerics_normal_sgn_cpu_uint8, test_reference_numerics_normal_sign_cpu_uint8, test_reference_numerics_small_sgn_cpu_uint8, test_reference_numerics_small_sign_cpu_uint8, test_replicate_multi_module, test_replicate_single_module, test_script_get_device_cuda, test_script_get_device_cuda, test_script_get_device_cuda, test_sigmoid_non_observed, test_sparse_consistency_sgn_cpu_uint8, test_sparse_consistency_sign_cpu_uint8, test_sparse_gradients_grad_is_view, test_str_repr, test_streams_and_events, test_streams_and_events, test_streams_and_events | |
== 2023-10-21 02:56:37,431 build_log.py:171 ERROR EasyBuild crashed with an error (at easybuild/base/exceptions.py:126 in __init__): 34 test failures, 19 test errors (out of 130104): | |
Failed tests (suites/files): | |
* distributed/_composable/test_replicate | |
* distributed/_tensor/test_dtensor_ops | |
* distributed/optim/test_zero_redundancy_optimizer | |
* distributed/test_c10d_gloo | |
* distributed/test_c10d_nccl | |
* distributed/test_c10d_pypg | |
* inductor/test_smoke | |
* nn/test_convolution | |
* test_cpp_extensions_aot_ninja | |
* test_cpp_extensions_aot_no_ninja | |
* test_cuda_primary_ctx | |
* test_jit | |
* test_jit_legacy | |
* test_jit_profiling | |
* test_nn | |
* test_ops_gradients | |
* test_quantization | |
* test_sparse | |
* test_sparse_csr | |
* test_unary_ufuncs | |
inductor/test_smoke (3 total tests, errors=2) | |
test_jit (2676 total tests, failures=2, errors=5, skipped=116, expected failures=10) | |
test_sparse (2618 total tests, failures=2, skipped=325) | |
test_quantization (997 total tests, failures=1, errors=1, skipped=44) | |
test_unary_ufuncs (12664 total tests, failures=8, skipped=644, expected failures=14) | |
distributed/_composable/test_replicate (5 total tests, failures=2, errors=1) | |
distributed/_tensor/test_dtensor_ops (637 total tests, skipped=36, expected failures=407, unexpected successes=1) | |
distributed/optim/test_zero_redundancy_optimizer (42 total tests, failures=1) | |
distributed/test_c10d_pypg (42 total tests, failures=2) | |
nn/test_convolution (584 total tests, failures=1, skipped=188, expected failures=25) | |
test_jit_legacy (2676 total tests, failures=2, errors=5, skipped=114, expected failures=10) | |
test_jit_profiling (2676 total tests, failures=2, errors=5, skipped=116, expected failures=10) | |
test_nn (2601 total tests, failures=2, skipped=106, expected failures=3) | |
test_sparse_csr (4357 total tests, failures=8, skipped=716) | |
test_ops_gradients (1 failed, 1892 passed, 3055 skipped, 42 xfailed, 1 warning, 2 rerun) | |
(at easybuild/easyblocks/p/pytorch.py:403 in test_step) | |
== 2023-10-21 02:56:37,432 build_log.py:267 INFO ... (took 8 hours 44 mins 9 secs) | |
== 2023-10-21 02:56:37,432 filetools.py:2012 INFO Removing lock /apps/Arch/software/.locks/_apps_Arch_software_PyTorch_2.0.1-foss-2022b-CUDA-12.0.0.lock... | |
== 2023-10-21 02:56:37,433 filetools.py:383 INFO Path /apps/Arch/software/.locks/_apps_Arch_software_PyTorch_2.0.1-foss-2022b-CUDA-12.0.0.lock successfully removed. | |
== 2023-10-21 02:56:37,433 filetools.py:2016 INFO Lock removed: /apps/Arch/software/.locks/_apps_Arch_software_PyTorch_2.0.1-foss-2022b-CUDA-12.0.0.lock | |
== 2023-10-21 02:56:37,433 easyblock.py:4277 WARNING build failed (first 300 chars): 34 test failures, 19 test errors (out of 130104): | |
Failed tests (suites/files): | |
* distributed/_composable/test_replicate | |
* distributed/_tensor/test_dtensor_ops | |
* distributed/optim/test_zero_redundancy_optimizer | |
* distributed/test_c10d_gloo | |
* distributed/test_c10d_nccl | |
* distributed/test_c10d_pypg | |
* i | |
== 2023-10-21 02:56:37,434 easyblock.py:328 INFO Closing log for application name PyTorch version 2.0.1 |