@antiagainst
Last active November 19, 2021 15:47

Full Benchmark Summary

Similar Benchmarks

| Benchmark Name | Average Latency (ms) | Median Latency (ms) | Latency Standard Deviation (ms) |
|---|---|---|---|
| MobileNetV2 [fp32,imagenet] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 12 (vs. 13, 7.69%↓) | 12 | 0 |
| DeepLabV3 [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 16 (vs. 15, 6.67%↑) | 16 | 0 |
| DeepLabV3 [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 37 (vs. 35, 5.71%↑) | 37 | 2 |
| PoseNet [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 37 (vs. 39, 5.13%↓) | 37 | 3 |
| MobileNetV2 [fp32,imagenet] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 33 (vs. 32, 3.12%↑) | 33 | 0 |
| MobileBertSquad [fp32] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 239 (vs. 237, 0.84%↑) | 239 | 3 |
| MobileBertSquad [fp16] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 138 (vs. 137, 0.73%↑) | 138 | 1 |
| MobileBertSquad [fp16] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 239 (vs. 238, 0.42%↑) | 239 | 1 |
| PoseNet [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 38 (vs. 38, 0.00%) | 38 | 1 |
| PoseNet [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 20 (vs. 20, 0.00%) | 20 | 1 |
| MobileSSD [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 95 (vs. 95, 0.00%) | 96 | 3 |
| MobileSSD [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 94 (vs. 94, 0.00%) | 94 | 3 |
| MobileSSD [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 52 (vs. 52, 0.00%) | 52 | 1 |
| DeepLabV3 [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 35 (vs. 35, 0.00%) | 35 | 1 |
| MobileNetV3Small [fp32,imagenet] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 25 (vs. 25, 0.00%) | 25 | 0 |
| MobileNetV3Small [fp32,imagenet] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 9 (vs. 9, 0.00%) | 9 | 0 |
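Reading the Average Latency column: each entry lists the current mean latency alongside the previous baseline and the relative change, e.g. `12 (vs. 13, 7.69%↓)`. As a rough sketch of that comparison arithmetic (the helper below is illustrative, not the actual benchmark tooling):

```python
def compare_latency(current_ms: float, baseline_ms: float) -> str:
    """Format a latency value against its baseline, e.g. '12 (vs. 13, 7.69%↓)'."""
    if baseline_ms == 0:
        return f"{current_ms:.0f} (no baseline)"
    # Relative change in percent; negative means the new run is faster.
    change = (current_ms - baseline_ms) / baseline_ms * 100
    if round(abs(change), 2) == 0:
        return f"{current_ms:.0f} (vs. {baseline_ms:.0f}, 0.00%)"
    arrow = "↑" if change > 0 else "↓"
    return f"{current_ms:.0f} (vs. {baseline_ms:.0f}, {abs(change):.2f}%{arrow})"

# First row above: MobileNetV2 kernel-execution dropped from 13 ms to 12 ms.
print(compare_latency(12, 13))  # -> "12 (vs. 13, 7.69%↓)"
```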

Full Benchmark Summary

Similar Benchmarks

| Benchmark Name | Average Latency (ms) | Median Latency (ms) | Latency Standard Deviation (ms) |
|---|---|---|---|
| DeepLabV3 [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 16 (vs. 15, 6.67%↑) | 16 | 0 |
| DeepLabV3 [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 36 (vs. 35, 2.86%↑) | 36 | 1 |
| PoseNet [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 39 (vs. 38, 2.63%↑) | 39 | 1 |
| PoseNet [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 38 (vs. 39, 2.56%↓) | 38 | 1 |
| MobileBertSquad [fp16] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 234 (vs. 238, 1.68%↓) | 236 | 5 |
| MobileBertSquad [fp16] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 139 (vs. 137, 1.46%↑) | 139 | 2 |
| MobileSSD [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 93 (vs. 94, 1.06%↓) | 93 | 2 |
| MobileSSD [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 94 (vs. 95, 1.05%↓) | 94 | 3 |
| PoseNet [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 20 (vs. 20, 0.00%) | 21 | 0 |
| MobileSSD [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 52 (vs. 52, 0.00%) | 52 | 1 |
| DeepLabV3 [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 35 (vs. 35, 0.00%) | 35 | 1 |
| MobileBertSquad [fp32] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 237 (vs. 237, 0.00%) | 237 | 3 |
| MobileNetV3Small [fp32,imagenet] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 25 (vs. 25, 0.00%) | 25 | 0 |
| MobileNetV3Small [fp32,imagenet] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 9 (vs. 9, 0.00%) | 9 | 0 |
| MobileNetV2 [fp32,imagenet] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 32 (vs. 32, 0.00%) | 32 | 1 |
| MobileNetV2 [fp32,imagenet] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 13 (vs. 13, 0.00%) | 13 | 0 |

Full Benchmark Summary

Similar Benchmarks

| Benchmark Name | Average Latency (ms) | Median Latency (ms) | Latency Standard Deviation (ms) |
|---|---|---|---|
| MobileNetV2 [fp32,imagenet] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 12 (vs. 13, 7.69%↓) | 12 | 0 |
| PoseNet [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 21 (vs. 20, 5.00%↑) | 21 | 1 |
| MobileBertSquad [fp32] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 228 (vs. 237, 3.80%↓) | 228 | 2 |
| MobileBertSquad [fp16] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 231 (vs. 238, 2.94%↓) | 232 | 1 |
| DeepLabV3 [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 36 (vs. 35, 2.86%↑) | 36 | 1 |
| PoseNet [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 39 (vs. 38, 2.63%↑) | 39 | 1 |
| PoseNet [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 38 (vs. 39, 2.56%↓) | 38 | 1 |
| MobileSSD [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 53 (vs. 52, 1.92%↑) | 52 | 1 |
| MobileSSD [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 95 (vs. 94, 1.06%↑) | 96 | 2 |
| MobileSSD [fp32] (TFLite) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 95 (vs. 95, 0.00%) | 96 | 4 |
| DeepLabV3 [fp32] (TFLite) full-inference,default-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 35 (vs. 35, 0.00%) | 35 | 1 |
| DeepLabV3 [fp32] (TFLite) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 15 (vs. 15, 0.00%) | 15 | 0 |
| MobileNetV3Small [fp32,imagenet] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 25 (vs. 25, 0.00%) | 25 | 0 |
| MobileNetV3Small [fp32,imagenet] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 9 (vs. 9, 0.00%) | 9 | 0 |
| MobileBertSquad [fp16] (TensorFlow) kernel-execution,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 137 (vs. 137, 0.00%) | 137 | 1 |
| MobileNetV2 [fp32,imagenet] (TensorFlow) full-inference,experimental-flags with IREE-Vulkan @ Pixel-6 (GPU-Mali-G78) | 32 (vs. 32, 0.00%) | 32 | 0 |