@jammm
Last active July 26, 2025 16:08
hipblaslt gfx1200 failures
hipBLASLt version: 100000
hipBLASLt git version: faee7ce8fe
Query device success: there are 1 devices. (Target device ID is 0)
Device ID 0 : AMD Radeon Graphics gfx1200
with 17.1 GB memory, max. SCLK 2740 MHz, max. MCLK 1258 MHz, compute capability 12.0
maxGridDimX 2147483647, sharedMemPerBlock 65.5 KB, maxThreadsPerBlock 1024, warpSize 32
info: parsing of test data may take a couple minutes before any test output appears...
...
[----------] Global test environment tear-down
[==========] 39066 tests from 11 test suites ran. (504500 ms total)
[ PASSED ] 39034 tests.
[ FAILED ] 32 tests, listed below:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_NN_1_1024_512_0_1_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_NN_1_1024_512_0_1_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_NN_1_1024_512_0_1_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_NN_1_1024_512_0_1_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_NN_1024_1_512_0_1024_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_NN_1024_1_512_0_1024_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_NN_1024_1_512_0_1024_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_NN_1024_1_512_0_1024_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_NT_1_1024_512_0_1_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_NT_1_1024_512_0_1_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_NT_1_1024_512_0_1_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_NT_1_1024_512_0_1_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_NT_1024_1_512_0_1024_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_NT_1024_1_512_0_1024_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_NT_1024_1_512_0_1024_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_NT_1024_1_512_0_1024_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_TN_1_1024_512_0_512_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_TN_1_1024_512_0_512_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_TN_1_1024_512_0_512_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_TN_1_1024_512_0_512_512_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_TN_1024_1_512_0_512_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_TN_1024_1_512_0_512_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_TN_1024_1_512_0_512_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_TN_1024_1_512_0_512_512_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_TT_1_1024_512_0_512_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_TT_1_1024_512_0_512_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_TT_1_1024_512_0_512_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_TT_1_1024_512_0_512_1024_0_1_1_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rf8_rf8_rf8_rf32_r_TT_1024_1_512_0_512_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rbf8_rf8_rf8_rf32_r_TT_1024_1_512_0_512_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_f8_rbf8_rf8_rf8_rf32_r_TT_1024_1_512_0_512_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: f8_r, b_type: bf8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
[ FAILED ] _/matmul_test.matmul/quick_matmul_one_real_precisions_1b_gfx12_bf8_rf8_rf8_rf8_rf32_r_TT_1024_1_512_0_512_1_0_1024_1024_1, where GetParam() = { function: "matmul", name: "matmul_one_real_precisions_1b_gfx12", category: "quick", known_bug_platforms: "", alpha: 0, beta: 0, stride_a: 0x615ec0af36f8, stride_b: 0x615ec0af37f8, stride_c: 0x615ec0af38f8, stride_d: 0x615ec0af39f8, stride_e: 0x615ec0af3af8, user_allocated_workspace: 134217728, M: 0x615ec0af3c00, N: 0x615ec0af3d00, K: 0x615ec0af3e00, lda: 0x615ec0af3f00, ldb: 0x615ec0af4000, ldc: 0x615ec0af4100, ldd: 0x615ec0af4200, lde: 0x615ec0af4300, batch_count: 1, iters: 10, cold_iters: 2, algo: 0, solution_index: -1, requested_solution_num: 1, a_type: bf8_r, b_type: f8_r, c_type: f8_r, d_type: f8_r, compute_type: f32_r, compute_input_typeA: non-supported type, compute_input_typeB: non-supported type, scale_type: f32_r, initialization: "rand_int", gpu_arch: "120[0-1]", pad: 4096, grouped_gemm: 0, threads: 0, streams: 0, devices:
32 FAILED TESTS
hipBLASLt version: 100000
hipBLASLt git version: faee7ce8fe
command line: hipblaslt-test
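
Note: all 32 failures are f8_r / bf8_r input combinations from the matmul_one_real_precisions_1b_gfx12 suite. Since hipblaslt-test is a GoogleTest binary (see the [==========] summary above), the failing cases can likely be re-run in isolation with the standard --gtest_filter option; the wildcard below is a suggested pattern, not taken from this log:

    hipblaslt-test --gtest_filter='*matmul_one_real_precisions_1b_gfx12*'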