ONNX Model Zoo to TensorFlow Lite conversion status

Report generated at 2022-08-17T07:09:44Z.

Environment

Package | Version
------- | -------
Platform | macOS-12.5-arm64-arm-64bit
Python | 3.10.6 (main, Aug 11 2022, 13:36:31) [Clang 13.1.6 (clang-1316.0.21.2.5)]
onnx | 1.12.0
onnx-tf | 1.10.0
tensorflow | 2.9.0
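
The fields above can be gathered with a few standard calls; this is only a sketch, not the exact code that produced the table.

```python
import platform
import sys
from importlib.metadata import version  # Python 3.8+

print('Platform  ', platform.platform())   # e.g. macOS-12.5-arm64-arm-64bit
print('Python    ', sys.version)
for pkg in ('onnx', 'onnx-tf', 'tensorflow'):
    print(f'{pkg:<11}', version(pkg))
```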

Summary

Value | Count
----- | -----
Models | 39
Total | 168
✔️ Passed | 123
⚠️ Limitation | 9
❌ Failed | 36
➖ Skipped | 0
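
Each model is run through the stages named in the table columns below: the ONNX checker, ONNX-to-TensorFlow conversion with onnx-tf, and two TensorFlow-to-TFLite conversion attempts — first with built-in TFLite ops only, then again with TensorFlow select ops enabled. Op signatures such as `tf.RealDiv(...)` in the "TF-TFLite Converted" column are the ops the builtins-only pass could not lower; a 🆗 in the "w/ Select Ops" column means the retry with `SELECT_TF_OPS` succeeded. The following is a minimal sketch of that per-model pipeline, assuming hypothetical names `model_path` and `saved_model_dir`; the patched test/test_modelzoo.py at the end of this gist does the same steps with error reporting around each one.

```python
import onnx
import onnx_tf.backend
import tensorflow as tf


def convert_onnx_to_tflite(model_path, saved_model_dir, select_ops=False):
    """Sketch of the per-model pipeline: check, ONNX -> TF, TF -> TFLite."""
    model = onnx.load(model_path)
    onnx.checker.check_model(model)           # "ONNX Checker" column

    tf_rep = onnx_tf.backend.prepare(model)   # "ONNX-TF Converted" column
    tf_rep.export_graph(saved_model_dir)      # writes a TF SavedModel

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
    if select_ops:
        # "w/ Select Ops" column: fall back to TensorFlow kernels for ops
        # with no TFLite builtin (e.g. tf.RealDiv, tf.Erf in the tables below).
        converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)
    return converter.convert()                # bytes of the .tflite flatbuffer
```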

Details

1. bert-squad

text/machine_comprehension/bert-squad

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. bertsquad-10 385M 5 10 🆗 🆗 tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} 🆗
2. bertsquad-12-int8 101M 7 12 🆗 ValueError: 'onnx_tf_prefix_bert/embeddings/MatMul:0_output_quantized_cast' is not a valid root scope name. A root scope name has to match the following pattern: ^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$
✔️ 3. bertsquad-12 384M 7 12 🆗 🆗 tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} 🆗
✔️ 4. bertsquad-8 385M 5 8 🆗 🆗 tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} 🆗

2. bidirectional_attention_flow

text/machine_comprehension/bidirectional_attention_flow

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
⚠️ 1. bidaf-9 37M 4 9 🆗 BackendIsNotSupposedToImplementIt: CategoryMapper is not implemented.

3. gpt-2

text/machine_comprehension/gpt-2

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. gpt2-10 442M 6 10 🆗 🆗 tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} 🆗
✔️ 2. gpt2-lm-head-10 578M 6 10 🆗 🆗 tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} 🆗

4. gpt2-bs

text/machine_comprehension/gpt2-bs

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. gpt2-lm-head-bs-12 634M 7 12 🆗 TypeError: Expected any non-tensor type, but got a tensor instead.

5. roberta

text/machine_comprehension/roberta

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. roberta-base-11 278M 6 11 🆗 🆗 tf.Erf(tensor<?x?x3072xf32>) -> (tensor<?x?x3072xf32>) : {device = ""} 🆗
✔️ 2. roberta-sequence-classification-9 411M 6 9 🆗 🆗 tf.Cast(tensor<1xi64>) -> (tensor<1xf64>) : {Truncate = false, device = ""} 🆗

6. t5

text/machine_comprehension/t5

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. t5-decoder-with-lm-head-12 275M 6 12 🆗 🆗 tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} 🆗
✔️ 2. t5-encoder-12 186M 6 12 🆗 🆗 tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} 🆗

7. age_gender

vision/body_analysis/age_gender

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. age_googlenet 23M 6 11 🆗 🆗 🆗 🆗
✔️ 2. gender_googlenet 23M 6 11 🆗 🆗 🆗 🆗
✔️ 3. vgg_ilsvrc_16_age_chalearn_iccv2015 514M 6 11 🆗 🆗 🆗 🆗
✔️ 4. vgg_ilsvrc_16_age_imdb_wiki 514M 6 11 🆗 🆗 🆗 🆗
✔️ 5. vgg_ilsvrc_16_gender_imdb_wiki 512M 6 11 🆗 🆗 🆗 🆗

8. arcface

vision/body_analysis/arcface

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. arcfaceresnet100-8 226M 3 8 🆗 🆗 🆗 🆗

9. emotion_ferplus

vision/body_analysis/emotion_ferplus

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. emotion-ferplus-2 31M 3 2 🆗 🆗 🆗 🆗
✔️ 2. emotion-ferplus-7 31M 3 7 🆗 🆗 🆗 🆗
✔️ 3. emotion-ferplus-8 31M 3 8 🆗 🆗 🆗 🆗

10. ultraface

vision/body_analysis/ultraface

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. version-RFB-320 1M 4 9 🆗 🆗 🆗 🆗
✔️ 2. version-RFB-640 2M 4 9 🆗 🆗 🆗 🆗

11. alexnet

vision/classification/alexnet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. bvlcalexnet-12-int8 39M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. bvlcalexnet-12 216M 7 12 🆗 🆗 🆗 🆗
✔️ 3. bvlcalexnet-3 219M 3 3 🆗 🆗 🆗 🆗
✔️ 4. bvlcalexnet-6 219M 3 6 🆗 🆗 🆗 🆗
✔️ 5. bvlcalexnet-7 216M 3 7 🆗 🆗 🆗 🆗
✔️ 6. bvlcalexnet-8 216M 3 8 🆗 🆗 🆗 🆗
✔️ 7. bvlcalexnet-9 216M 3 9 🆗 🆗 🆗 🆗

12. caffenet

vision/classification/caffenet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. caffenet-12-int8 39M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. caffenet-12 216M 7 12 🆗 🆗 🆗 🆗
✔️ 3. caffenet-3 219M 3 3 🆗 🆗 🆗 🆗
✔️ 4. caffenet-6 219M 3 6 🆗 🆗 🆗 🆗
✔️ 5. caffenet-7 216M 3 7 🆗 🆗 🆗 🆗
✔️ 6. caffenet-8 216M 3 8 🆗 🆗 🆗 🆗
✔️ 7. caffenet-9 216M 3 9 🆗 🆗 🆗 🆗

13. densenet-121

vision/classification/densenet-121

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
⚠️ 1. densenet-12-int8 6M 7 12 🆗 BackendIsNotSupposedToImplementIt: QLinearMul is not implemented.
✔️ 2. densenet-12 29M 7 12 🆗 🆗 🆗 🆗
✔️ 3. densenet-3 32M 3 3 🆗 🆗 🆗 🆗
✔️ 4. densenet-6 29M 3 9 🆗 🆗 🆗 🆗
✔️ 5. densenet-7 29M 3 7 🆗 🆗 🆗 🆗
✔️ 6. densenet-8 29M 3 8 🆗 🆗 🆗 🆗
✔️ 7. densenet-9 29M 3 9 🆗 🆗 🆗 🆗

14. efficientnet-lite4

vision/classification/efficientnet-lite4

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. efficientnet-lite4-11-int8 12M 6 11 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. efficientnet-lite4-11 46M 6 11 🆗 🆗 🆗 🆗

15. googlenet

vision/classification/inception_and_googlenet/googlenet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. googlenet-12-int8 5M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. googlenet-12 25M 7 12 🆗 🆗 🆗 🆗
✔️ 3. googlenet-3 25M 3 7 🆗 🆗 🆗 🆗
✔️ 4. googlenet-6 25M 3 9 🆗 🆗 🆗 🆗
✔️ 5. googlenet-7 25M 3 7 🆗 🆗 🆗 🆗
✔️ 6. googlenet-8 25M 3 8 🆗 🆗 🆗 🆗
✔️ 7. googlenet-9 25M 3 9 🆗 🆗 🆗 🆗

16. inception_v1

vision/classification/inception_and_googlenet/inception_v1

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. inception-v1-12-int8 9M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
2. inception-v1-12 25M 7 12 🆗 🆗 tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_0"} tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_0"}
3. inception-v1-3 28M 3 3 🆗 🆗 tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_2"} tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_2"}
4. inception-v1-6 28M 3 6 🆗 🆗 tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_4"} tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_4"}
5. inception-v1-7 25M 3 7 🆗 🆗 tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_6"} tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_6"}
6. inception-v1-8 25M 3 8 🆗 🆗 tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_8"} tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_8"}
7. inception-v1-9 25M 3 9 🆗 🆗 tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_10"} tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_10"}

17. inception_v2

vision/classification/inception_and_googlenet/inception_v2

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. inception-v2-3 43M 3 3 🆗 🆗 🆗 🆗
✔️ 2. inception-v2-6 43M 3 6 🆗 🆗 🆗 🆗
✔️ 3. inception-v2-7 40M 3 7 🆗 🆗 🆗 🆗
✔️ 4. inception-v2-8 40M 3 8 🆗 🆗 🆗 🆗
✔️ 5. inception-v2-9 40M 3 9 🆗 🆗 🆗 🆗

18. mnist

vision/classification/mnist

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. mnist-1 26K 3 1 🆗 🆗 🆗 🆗
2. mnist-12-int8 10K 7 12 🆗 ValueError: slice index 1 of dimension 0 out of bounds. for '{{node strided_slice_5}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_4, strided_slice_5/stack, strided_slice_5/stack_1, strided_slice_5/stack_2)' with input shapes: [1,5,5], [1], [1], [1] and with computed input tensors: input[1] = <1>, input[2] = <2>, input[3] = <1>.
✔️ 3. mnist-12 26K 7 12 🆗 🆗 🆗 🆗
✔️ 4. mnist-7 26K 3 7 🆗 🆗 🆗 🆗
✔️ 5. mnist-8 26K 3 8 🆗 🆗 🆗 🆗

19. mobilenet

vision/classification/mobilenet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. mobilenetv2-12-int8 4M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. mobilenetv2-12 13M 7 12 🆗 🆗 🆗 🆗
✔️ 3. mobilenetv2-7 13M 6 10 🆗 🆗 🆗 🆗

20. rcnn_ilsvrc13

vision/classification/rcnn_ilsvrc13

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. rcnn-ilsvrc13-3 207M 3 3 🆗 🆗 🆗 🆗
✔️ 2. rcnn-ilsvrc13-6 207M 3 6 🆗 🆗 🆗 🆗
✔️ 3. rcnn-ilsvrc13-7 205M 3 7 🆗 🆗 🆗 🆗
✔️ 4. rcnn-ilsvrc13-8 205M 3 8 🆗 🆗 🆗 🆗
✔️ 5. rcnn-ilsvrc13-9 205M 3 9 🆗 🆗 🆗 🆗

21. resnet

vision/classification/resnet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. resnet101-v1-7 159M 3 8 🆗 🆗 🆗 🆗
✔️ 2. resnet101-v2-7 158M 3 7 🆗 🆗 🆗 🆗
✔️ 3. resnet152-v1-7 216M 3 8 🆗 🆗 🆗 🆗
✔️ 4. resnet152-v2-7 215M 3 7 🆗 🆗 🆗 🆗
✔️ 5. resnet18-v1-7 42M 3 8 🆗 🆗 🆗 🆗
✔️ 6. resnet18-v2-7 42M 3 7 🆗 🆗 🆗 🆗
✔️ 7. resnet34-v1-7 77M 3 8 🆗 🆗 🆗 🆗
✔️ 8. resnet34-v2-7 77M 3 7 🆗 🆗 🆗 🆗
✔️ 9. resnet50-caffe2-v1-3 92M 3 3 🆗 🆗 🆗 🆗
✔️ 10. resnet50-caffe2-v1-6 94M 3 6 🆗 🆗 🆗 🆗
✔️ 11. resnet50-caffe2-v1-7 91M 3 7 🆗 🆗 🆗 🆗
✔️ 12. resnet50-caffe2-v1-8 91M 3 8 🆗 🆗 🆗 🆗
✔️ 13. resnet50-caffe2-v1-9 91M 3 9 🆗 🆗 🆗 🆗
14. resnet50-v1-12-int8 21M 4 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 15. resnet50-v1-12 92M 4 12 🆗 🆗 🆗 🆗
✔️ 16. resnet50-v1-7 91M 3 8 🆗 🆗 🆗 🆗
✔️ 17. resnet50-v2-7 91M 3 7 🆗 🆗 🆗 🆗

22. shufflenet

vision/classification/shufflenet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. shufflenet-3 7M 3 3 🆗 🆗 tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_14"} tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_14"}
2. shufflenet-6 6M 3 8 🆗 🆗 tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_20"} tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_20"}
3. shufflenet-7 6M 3 7 🆗 🆗 tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_26"} tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_26"}
4. shufflenet-8 8M 3 6 🆗 🆗 tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_32"} tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_32"}
5. shufflenet-9 6M 3 9 🆗 🆗 tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_38"} tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_38"}
✔️ 6. shufflenet-v2-10 8M 6 10 🆗 🆗 🆗 🆗
7. shufflenet-v2-12-int8 2M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 8. shufflenet-v2-12 9M 7 12 🆗 🆗 🆗 🆗

23. squeezenet

vision/classification/squeezenet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. squeezenet1 1M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. squeezenet1 5M 7 12 🆗 🆗 🆗 🆗
✔️ 3. squeezenet1 6M 3 3 🆗 🆗 🆗 🆗
✔️ 4. squeezenet1 6M 3 6 🆗 🆗 🆗 🆗
✔️ 5. squeezenet1 5M 3 7 🆗 🆗 🆗 🆗
✔️ 6. squeezenet1 5M 3 8 🆗 🆗 🆗 🆗
✔️ 7. squeezenet1 5M 3 9 🆗 🆗 🆗 🆗
✔️ 8. squeezenet1 5M 3 7 🆗 🆗 🆗 🆗

24. vgg

vision/classification/vgg

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. vgg16-12-int8 101M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. vgg16-12 488M 7 12 🆗 🆗 🆗 🆗
✔️ 3. vgg16-7 489M 3 8 🆗 🆗 🆗 🆗
✔️ 4. vgg16-bn-7 489M 3 8 🆗 🆗 🆗 🆗
✔️ 5. vgg19-7 507M 3 8 🆗 🆗 🆗 🆗
✔️ 6. vgg19-bn-7 508M 3 8 🆗 🆗 🆗 🆗
✔️ 7. vgg19-caffe2-3 512M 3 3 🆗 🆗 🆗 🆗
✔️ 8. vgg19-caffe2-6 512M 3 6 🆗 🆗 🆗 🆗
✔️ 9. vgg19-caffe2-7 509M 3 7 🆗 🆗 🆗 🆗
✔️ 10. vgg19-caffe2-8 509M 3 8 🆗 🆗 🆗 🆗
✔️ 11. vgg19-caffe2-9 509M 3 9 🆗 🆗 🆗 🆗

25. zfnet-512

vision/classification/zfnet-512

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. zfnet512-12-int8 48M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. zfnet512-12 309M 7 12 🆗 🆗 🆗 🆗
✔️ 3. zfnet512-3 310M 3 3 🆗 🆗 🆗 🆗
✔️ 4. zfnet512-6 312M 3 6 🆗 🆗 🆗 🆗
✔️ 5. zfnet512-7 309M 3 7 🆗 🆗 🆗 🆗
✔️ 6. zfnet512-8 309M 3 8 🆗 🆗 🆗 🆗
✔️ 7. zfnet512-9 309M 3 9 🆗 🆗 🆗 🆗

26. duc

vision/object_detection_segmentation/duc

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. ResNet101-DUC-12-int8 68M 4 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 2. ResNet101-DUC-12 247M 4 12 🆗 🆗 🆗 🆗
✔️ 3. ResNet101-DUC-7 248M 3 7 🆗 🆗 🆗 🆗

27. faster-rcnn

vision/object_detection_segmentation/faster-rcnn

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. FasterRCNN-10 148M 4 10 🆗 🆗 tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} 🆗
2. FasterRCNN-12-int8 36M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
⚠️ 3. FasterRCNN-12 156M 7 12 🆗 OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow.

28. fcn

vision/object_detection_segmentation/fcn

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
⚠️ 1. fcn-resnet101-11 281M 6 11 🆗 OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow.
⚠️ 2. fcn-resnet50-11 214M 6 11 🆗 OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow.
3. fcn-resnet50-12-int8 29M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
⚠️ 4. fcn-resnet50-12 125M 7 12 🆗 OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow.

29. mask-rcnn

vision/object_detection_segmentation/mask-rcnn

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. MaskRCNN-10 157M 4 10 🆗 🆗 tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} 🆗
2. MaskRCNN-12-int8 34M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
⚠️ 3. MaskRCNN-12 157M 7 12 🆗 OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow.

30. retinanet

vision/object_detection_segmentation/retinanet

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. retinanet-9 146M 6 9 🆗 🆗 🆗 🆗

31. ssd

vision/object_detection_segmentation/ssd

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. ssd-10 75M 4 10 🆗 🆗 🆗 🆗
2. ssd-12-int8 30M 7 12 🆗 ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>.
✔️ 3. ssd-12 86M 7 12 🆗 🆗 🆗 🆗

32. ssd-mobilenetv1

vision/object_detection_segmentation/ssd-mobilenetv1

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. ssd_mobilenet_v1_10 24M 5 10 🆗 TypeError: No common supertype of TensorArraySpec(TensorShape([3]), tf.int32, True, True) and TensorArraySpec(TensorShape([3]), tf.int32, None, True).
2. ssd_mobilenet_v1_12-int8 6M 7 12 🆗 TypeError: No common supertype of TensorArraySpec(TensorShape([3, None, None]), tf.float32, True, True) and TensorArraySpec(TensorShape([3, 0, 0]), tf.float32, None, True).
3. ssd_mobilenet_v1_12 24M 7 12 🆗 TypeError: No common supertype of TensorArraySpec(TensorShape([3, None, None]), tf.float32, True, True) and TensorArraySpec(TensorShape([3, 0, 0]), tf.float32, None, True).

33. tiny-yolov2

vision/object_detection_segmentation/tiny-yolov2

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. tinyyolov2-7 58M 5 7 🆗 🆗 🆗 🆗
✔️ 2. tinyyolov2-8 58M 5 8 🆗 🆗 🆗 🆗

34. tiny-yolov3

vision/object_detection_segmentation/tiny-yolov3

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. tiny-yolov3-11 32M 6 11 🆗 TypeError: No common supertype of TensorArraySpec(TensorShape([]), tf.int32, True, True) and TensorArraySpec(TensorShape([]), tf.int32, None, True).

35. yolov2-coco

vision/object_detection_segmentation/yolov2-coco

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. yolov2-coco-9 195M 4 9 🆗 🆗 tf.Transpose(tensor<1x64x13x2x13x2xf32>, tensor<6xi32>) -> (tensor<1x64x13x13x2x2xf32>) : {device = ""} 🆗

36. yolov3

vision/object_detection_segmentation/yolov3

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
1. yolov3-10 222M 5 10 🆗 TypeError: No common supertype of TensorArraySpec(TensorShape([]), tf.int32, True, True) and TensorArraySpec(TensorShape([]), tf.int32, None, True).
2. yolov3-12-int8 46M 5 12 🆗 ValueError: Cannot reshape a tensor with 1024 elements to shape [32,64,1,1] (2048 elements) for '{{node Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](concat_2, Reshape/shape)' with input shapes: [1024,1], [4] and with input tensors computed as partial shapes: input[1] = [32,64,1,1].
⚠️ 3. yolov3-12 222M 5 12 🆗 OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow.

37. yolov4

vision/object_detection_segmentation/yolov4

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
⚠️ 1. yolov4 232M 6 11 🆗 OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow.

38. fast_neural_style

vision/style_transfer/fast_neural_style

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. candy-8 7M 4 8 🆗 🆗 🆗 🆗
✔️ 2. candy-9 7M 4 9 🆗 🆗 🆗 🆗
✔️ 3. mosaic-8 7M 4 8 🆗 🆗 🆗 🆗
✔️ 4. mosaic-9 7M 4 9 🆗 🆗 🆗 🆗
✔️ 5. pointilism-8 7M 4 8 🆗 🆗 🆗 🆗
✔️ 6. pointilism-9 7M 4 9 🆗 🆗 🆗 🆗
✔️ 7. rain-princess-8 7M 4 8 🆗 🆗 🆗 🆗
✔️ 8. rain-princess-9 7M 4 9 🆗 🆗 🆗 🆗
✔️ 9. udnie-8 7M 4 8 🆗 🆗 🆗 🆗
✔️ 10. udnie-9 7M 4 9 🆗 🆗 🆗 🆗

39. sub_pixel_cnn_2016

vision/super_resolution/sub_pixel_cnn_2016

Status Model Size IR Opset ONNX Checker ONNX-TF Converted TF-TFLite Converted TF-TFLite Converted w/ Select Ops
✔️ 1. super-resolution-10 2M 4 10 🆗 🆗 tf.Transpose(tensor<?x1x3x3x224x224xf32>, tensor<6xi32>) -> (tensor<?x1x224x3x224x3xf32>) : {device = ""} 🆗

The report above was generated with onnx-tf's test/test_modelzoo.py modified by the patch below, which replaces the original "ONNX-TF Ran" column with the two TFLite conversion checks.

From 47829edabf1d91f253132a314d08c55db7e0f432 Mon Sep 17 00:00:00 2001
From: Ryan Govostes <[email protected]>
Date: Wed, 17 Aug 2022 10:20:37 -0400
Subject: [PATCH] Test TensorFlow Lite conversion
---
test/test_modelzoo.py | 81 +++++++++++++++++++++++++++++++------------
1 file changed, 58 insertions(+), 23 deletions(-)
diff --git a/test/test_modelzoo.py b/test/test_modelzoo.py
index 0bf526e..3c21908 100644
--- a/test/test_modelzoo.py
+++ b/test/test_modelzoo.py
@@ -120,8 +120,8 @@ def _get_model_and_test_data():
test_data_file = os.path.join(root, file_name)
test_data_dir = os.path.join(root, file_name.split('.')[0])
new_test_data_file = os.path.join(test_data_dir, file_name)
- os.mkdir(test_data_dir)
- os.rename(test_data_file, new_test_data_file)
+ os.makedirs(test_data_dir, exist_ok=True)
+ os.replace(test_data_file, new_test_data_file)
test_data_set.append(test_data_dir)
return onnx_model, test_data_set
@@ -268,6 +268,28 @@ def _report_convert_model(model):
return '{}: {}'.format(type(ex).__name__, stack_trace[0].strip())
+def _report_convert_model_tflite(model, select_ops=False):
+ """Test conversion and returns a report string."""
+ try:
+ converter = tf.lite.TFLiteConverter.from_saved_model(_CFG['output_directory'])
+ converter.target_spec.supported_ops = [
+ tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops
+ ]
+ if select_ops:
+ converter.target_spec.supported_ops.append(tf.lite.OpsSet.SELECT_TF_OPS)
+ tf_lite_model = converter.convert()
+ return ''
+ except Exception as ex:
+ # don't delete, as we are going to try again shortly
+ #_del_location(_CFG['untar_directory'])
+ #_del_location(_CFG['output_directory'])
+ stack_trace = str(ex).strip().split('\n')
+ if len(stack_trace) > 1:
+ err_msg = stack_trace[-1].strip()
+ return err_msg
+ return '{}: {}'.format(type(ex).__name__, stack_trace[0].strip())
+
+
def _get_inputs_outputs_pb(tf_rep, data_dir):
"""Get the input and reference output tensors"""
inputs = {}
@@ -366,44 +388,50 @@ def _report_model(file_path, results=Results(), onnx_model_count=1):
results.skip_count += 1
else:
if _CFG['verbose']:
- print('Testing', file_path)
+ print('Testing', file_path, 'at', datetime.datetime.now())
model = onnx.load(onnx_model)
ir_version = model.ir_version
opset_version = model.opset_import[0].version
check_err = _report_check_model(model)
convert_err = '' if check_err else _report_convert_model(model)
- run_err = '' if convert_err or len(
- test_data_set) == 0 else _report_run_model(model, test_data_set)
+ tflite_err = '' if convert_err else _report_convert_model_tflite(model)
+ tflite2_err = '' if convert_err or not tflite_err else _report_convert_model_tflite(model, True)
+
+ # We are not "running" the model which normally cleans this stuff up, so
+ # do it here.
+ _del_location(_CFG['untar_directory'])
+ _del_location(_CFG['output_directory'])
- if (not check_err and not convert_err and not run_err and
- len(test_data_set) > 0):
+ if not check_err and not convert_err and not tflite_err:
# https://github-emoji-list.herokuapp.com/
# ran successfully
emoji_validated = ':ok:'
emoji_converted = ':ok:'
- emoji_ran = ':ok:'
+ emoji_tflite = ':ok:'
+ emoji_tflite2 = ':ok:'
emoji_overall = ':heavy_check_mark:'
results.pass_count += 1
- elif (not check_err and not convert_err and not run_err and
- len(test_data_set) == 0):
- # validation & conversion passed but no test data available
+ elif not check_err and not convert_err and not tflite2_err:
emoji_validated = ':ok:'
emoji_converted = ':ok:'
- emoji_ran = 'No test data provided in model zoo'
- emoji_overall = ':warning:'
- results.warn_count += 1
+ emoji_tflite = tflite_err
+ emoji_tflite2 = ':ok:'
+ emoji_overall = ':heavy_check_mark:'
+ results.pass_count += 1
elif not check_err and not convert_err:
- # validation & conversion passed but failed to run
+ # conversion to tf passed, but to tflite did not
emoji_validated = ':ok:'
emoji_converted = ':ok:'
- emoji_ran = run_err
+ emoji_tflite = tflite_err
+ emoji_tflite2 = tflite2_err
emoji_overall = ':x:'
results.fail_count += 1
elif not check_err:
# validation pass, but conversion did not
emoji_validated = ':ok:'
emoji_converted = convert_err
- emoji_ran = ':heavy_minus_sign:'
+ emoji_tflite = ':heavy_minus_sign:'
+ emoji_tflite2 = ':heavy_minus_sign:'
if ('BackendIsNotSupposedToImplementIt' in convert_err or
'OpUnsupportedException' in convert_err):
# known limitations
@@ -419,15 +447,22 @@ def _report_model(file_path, results=Results(), onnx_model_count=1):
# validation failed
emoji_validated = check_err
emoji_converted = ':heavy_minus_sign:'
- emoji_ran = ':heavy_minus_sign:'
+ emoji_tflite = ':heavy_minus_sign:'
+ emoji_tflite2 = ':heavy_minus_sign:'
emoji_overall = ':x:'
results.fail_count += 1
-
- results.append_detail('{} | {}. {} | {} | {} | {} | {} | {} | {}'.format(
+
+ print(f'overall -> {emoji_overall}')
+ print(f'validated -> {emoji_validated}')
+ print(f'converted -> {emoji_converted}')
+ print(f'tflite (base ops) -> {emoji_tflite}')
+ print(f'tflite (select ops) -> {emoji_tflite2}')
+
+ results.append_detail('{} | {}. {} | {} | {} | {} | {} | {} | {} | {}'.format(
emoji_overall, onnx_model_count,
file_path[file_path.rindex('/') + 1:file_path.index('.')],
_size_with_units(size_pulled[0]), ir_version, opset_version,
- emoji_validated, emoji_converted, emoji_ran))
+ emoji_validated, emoji_converted, emoji_tflite, emoji_tflite2))
if len(test_data_set) == 0:
_del_location(_CFG['output_directory'])
@@ -539,10 +574,10 @@ def modelzoo_report(models_dir='models',
results.append_detail('')
results.append_detail(
'Status | Model | Size | IR | Opset | ONNX Checker | '
- 'ONNX-TF Converted | ONNX-TF Ran')
+ 'ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops')
results.append_detail(
'------ | ----- | ---- | -- | ----- | ------------ | '
- '----------------- | -----------')
+ '----------------- | ------------------- | -----------')
onnx_model_count = 0
file_path = ''
for item in sorted(files):
--
2.32.1 (Apple Git-133)
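
Models that only converted with select ops need the Flex delegate at inference time; the `tf.lite.Interpreter` shipped in the full tensorflow pip package links it automatically. A minimal smoke test of a converted file (hypothetical path `model.tflite`; models with dynamic input shapes may first need `interpreter.resize_tensor_input`) might look like this:

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    # Feed zeros of the declared shape/dtype just to exercise the graph.
    interpreter.set_tensor(detail['index'],
                           np.zeros(detail['shape'], dtype=detail['dtype']))
interpreter.invoke()

outputs = [interpreter.get_tensor(d['index'])
           for d in interpreter.get_output_details()]
print([o.shape for o in outputs])
```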