Report generated at 2022-08-17T07:09:44Z.

Component | Version |
---|---|
Platform | macOS-12.5-arm64-arm-64bit |
Python | 3.10.6 (main, Aug 11 2022, 13:36:31) [Clang 13.1.6 (clang-1316.0.21.2.5)] |
onnx | 1.12.0 |
onnx-tf | 1.10.0 |
tensorflow | 2.9.0 |

Metric | Count |
---|---|
Model groups | 39 |
Total models | 168 |
✔️ Passed | 123 |
⚠️ Warning | 9 |
❌ Failed | 36 |
➖ Skipped | 0 |
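
The four result columns in each table below correspond to the four stages of the conversion pipeline: ONNX model validation, ONNX→TensorFlow conversion via onnx-tf, TensorFlow→TFLite conversion with builtin ops only, and a retry with TensorFlow Select Ops enabled. Rows whose "TF-TFLite Converted" cell shows an op signature (e.g. `tf.RealDiv(...)`) appear to record the TF op that blocked the builtin-only conversion before the Select Ops retry succeeded. The snippet below is a minimal sketch of that pipeline, not the exact test harness; `model_path` and `saved_model_dir` are placeholders, and the versions are those listed above (onnx 1.12.0, onnx-tf 1.10.0, tensorflow 2.9.0).

```python
import onnx
import tensorflow as tf
from onnx_tf.backend import prepare

model_path = "bertsquad-12.onnx"     # placeholder
saved_model_dir = "bertsquad-12_tf"  # placeholder

# 1. "ONNX Checker": validate the model proto.
model = onnx.load(model_path)
onnx.checker.check_model(model)

# 2. "ONNX-TF Converted": convert to a TensorFlow SavedModel.
tf_rep = prepare(model)
tf_rep.export_graph(saved_model_dir)

# 3. "TF-TFLite Converted": TFLite builtin ops only.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()

# 4. "TF-TFLite Converted w/ Select Ops": fall back to TensorFlow
#    kernels (Flex ops) for anything the builtins cannot express.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model_select = converter.convert()
```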

### text/machine_comprehension/bert-squad

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. bertsquad-10 | 385M | 5 | 10 | 🆗 | 🆗 | tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} | 🆗 |
❌ | 2. bertsquad-12-int8 | 101M | 7 | 12 | 🆗 | ValueError: 'onnx_tf_prefix_bert/embeddings/MatMul:0_output_quantized_cast' is not a valid root scope name. A root scope name has to match the following pattern: ^[A-Za-z0-9.][A-Za-z0-9_.\/>-]*$ | ➖ | ➖ |
✔️ | 3. bertsquad-12 | 384M | 7 | 12 | 🆗 | 🆗 | tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} | 🆗 |
✔️ | 4. bertsquad-8 | 385M | 5 | 8 | 🆗 | 🆗 | tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} | 🆗 |

### text/machine_comprehension/bidirectional_attention_flow

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
⚠️ | 1. bidaf-9 | 37M | 4 | 9 | 🆗 | BackendIsNotSupposedToImplementIt: CategoryMapper is not implemented. | ➖ | ➖ |

### text/machine_comprehension/gpt-2

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. gpt2-10 | 442M | 6 | 10 | 🆗 | 🆗 | tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} | 🆗 |
✔️ | 2. gpt2-lm-head-10 | 578M | 6 | 10 | 🆗 | 🆗 | tf.RealDiv(tensor, tensor) -> (tensor) : {device = ""} | 🆗 |

### text/machine_comprehension/gpt2-bs

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. gpt2-lm-head-bs-12 | 634M | 7 | 12 | 🆗 | TypeError: Expected any non-tensor type, but got a tensor instead. | ➖ | ➖ |

### text/machine_comprehension/roberta

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. roberta-base-11 | 278M | 6 | 11 | 🆗 | 🆗 | tf.Erf(tensor<?x?x3072xf32>) -> (tensor<?x?x3072xf32>) : {device = ""} | 🆗 |
✔️ | 2. roberta-sequence-classification-9 | 411M | 6 | 9 | 🆗 | 🆗 | tf.Cast(tensor<1xi64>) -> (tensor<1xf64>) : {Truncate = false, device = ""} | 🆗 |

### text/machine_comprehension/t5

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. t5-decoder-with-lm-head-12 | 275M | 6 | 12 | 🆗 | 🆗 | tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} | 🆗 |
✔️ | 2. t5-encoder-12 | 186M | 6 | 12 | 🆗 | 🆗 | tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} | 🆗 |

### vision/body_analysis/age_gender

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. age_googlenet | 23M | 6 | 11 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. gender_googlenet | 23M | 6 | 11 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. vgg_ilsvrc_16_age_chalearn_iccv2015 | 514M | 6 | 11 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. vgg_ilsvrc_16_age_imdb_wiki | 514M | 6 | 11 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. vgg_ilsvrc_16_gender_imdb_wiki | 512M | 6 | 11 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/body_analysis/arcface

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. arcfaceresnet100-8 | 226M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/body_analysis/emotion_ferplus

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. emotion-ferplus-2 | 31M | 3 | 2 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. emotion-ferplus-7 | 31M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. emotion-ferplus-8 | 31M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/body_analysis/ultraface

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. version-RFB-320 | 1M | 4 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. version-RFB-640 | 2M | 4 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/alexnet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. bvlcalexnet-12-int8 | 39M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. bvlcalexnet-12 | 216M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. bvlcalexnet-3 | 219M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. bvlcalexnet-6 | 219M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. bvlcalexnet-7 | 216M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. bvlcalexnet-8 | 216M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. bvlcalexnet-9 | 216M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/caffenet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. caffenet-12-int8 | 39M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,11,11], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. caffenet-12 | 216M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. caffenet-3 | 219M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. caffenet-6 | 219M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. caffenet-7 | 216M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. caffenet-8 | 216M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. caffenet-9 | 216M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/densenet-121

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
⚠️ | 1. densenet-12-int8 | 6M | 7 | 12 | 🆗 | BackendIsNotSupposedToImplementIt: QLinearMul is not implemented. | ➖ | ➖ |
✔️ | 2. densenet-12 | 29M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. densenet-3 | 32M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. densenet-6 | 29M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. densenet-7 | 29M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. densenet-8 | 29M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. densenet-9 | 29M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/efficientnet-lite4

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. efficientnet-lite4-11-int8 | 12M | 6 | 11 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. efficientnet-lite4-11 | 46M | 6 | 11 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/inception_and_googlenet/googlenet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. googlenet-12-int8 | 5M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. googlenet-12 | 25M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. googlenet-3 | 25M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. googlenet-6 | 25M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. googlenet-7 | 25M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. googlenet-8 | 25M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. googlenet-9 | 25M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/inception_and_googlenet/inception_v1

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. inception-v1-12-int8 | 9M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
❌ | 2. inception-v1-12 | 25M | 7 | 12 | 🆗 | 🆗 | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_0"} | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_0"} |
❌ | 3. inception-v1-3 | 28M | 3 | 3 | 🆗 | 🆗 | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_2"} | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_2"} |
❌ | 4. inception-v1-6 | 28M | 3 | 6 | 🆗 | 🆗 | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_4"} | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_4"} |
❌ | 5. inception-v1-7 | 25M | 3 | 7 | 🆗 | 🆗 | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_6"} | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_6"} |
❌ | 6. inception-v1-8 | 25M | 3 | 8 | 🆗 | 🆗 | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_8"} | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_8"} |
❌ | 7. inception-v1-9 | 25M | 3 | 9 | 🆗 | 🆗 | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_10"} | tf.PyFunc(tensor<1x1024x6x6xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_10"} |

### vision/classification/inception_and_googlenet/inception_v2

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. inception-v2-3 | 43M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. inception-v2-6 | 43M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. inception-v2-7 | 40M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. inception-v2-8 | 40M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. inception-v2-9 | 40M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/mnist

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. mnist-1 | 26K | 3 | 1 | 🆗 | 🆗 | 🆗 | 🆗 |
❌ | 2. mnist-12-int8 | 10K | 7 | 12 | 🆗 | ValueError: slice index 1 of dimension 0 out of bounds. for '{{node strided_slice_5}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_4, strided_slice_5/stack, strided_slice_5/stack_1, strided_slice_5/stack_2)' with input shapes: [1,5,5], [1], [1], [1] and with computed input tensors: input[1] = <1>, input[2] = <2>, input[3] = <1>. | ➖ | ➖ |
✔️ | 3. mnist-12 | 26K | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. mnist-7 | 26K | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. mnist-8 | 26K | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/mobilenet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. mobilenetv2-12-int8 | 4M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. mobilenetv2-12 | 13M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. mobilenetv2-7 | 13M | 6 | 10 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/rcnn_ilsvrc13

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. rcnn-ilsvrc13-3 | 207M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. rcnn-ilsvrc13-6 | 207M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. rcnn-ilsvrc13-7 | 205M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. rcnn-ilsvrc13-8 | 205M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. rcnn-ilsvrc13-9 | 205M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/resnet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. resnet101-v1-7 | 159M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. resnet101-v2-7 | 158M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. resnet152-v1-7 | 216M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. resnet152-v2-7 | 215M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. resnet18-v1-7 | 42M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. resnet18-v2-7 | 42M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. resnet34-v1-7 | 77M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 8. resnet34-v2-7 | 77M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 9. resnet50-caffe2-v1-3 | 92M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 10. resnet50-caffe2-v1-6 | 94M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 11. resnet50-caffe2-v1-7 | 91M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 12. resnet50-caffe2-v1-8 | 91M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 13. resnet50-caffe2-v1-9 | 91M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
❌ | 14. resnet50-v1-12-int8 | 21M | 4 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 15. resnet50-v1-12 | 92M | 4 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 16. resnet50-v1-7 | 91M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 17. resnet50-v2-7 | 91M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/shufflenet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. shufflenet-3 | 7M | 3 | 3 | 🆗 | 🆗 | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_14"} | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_14"} |
❌ | 2. shufflenet-6 | 6M | 3 | 8 | 🆗 | 🆗 | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_20"} | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_20"} |
❌ | 3. shufflenet-7 | 6M | 3 | 7 | 🆗 | 🆗 | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_26"} | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_26"} |
❌ | 4. shufflenet-8 | 8M | 3 | 6 | 🆗 | 🆗 | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_32"} | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_32"} |
❌ | 5. shufflenet-9 | 6M | 3 | 9 | 🆗 | 🆗 | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_38"} | tf.PyFunc(tensor<1x272x14x14xf32>, tensor<2xi32>, tensor<2xi32>, tensor<2xi32>, tensor<4xi32>, tensor, tensor<!tf_type.string>, tensor) -> (tensor<*xf32>) : {Tin = [f32, i32, i32, i32, i32, i1, !tf_type.string, i1], Tout = [f32], device = "/job:localhost/replica:0/task:0/device:CPU:0", token = "pyfunc_38"} |
✔️ | 6. shufflenet-v2-10 | 8M | 6 | 10 | 🆗 | 🆗 | 🆗 | 🆗 |
❌ | 7. shufflenet-v2-12-int8 | 2M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 8. shufflenet-v2-12 | 9M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/squeezenet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. squeezenet1.0-12-int8 | 1M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. squeezenet1.0-12 | 5M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. squeezenet1.0-3 | 6M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. squeezenet1.0-6 | 6M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. squeezenet1.0-7 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. squeezenet1.0-8 | 5M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. squeezenet1.0-9 | 5M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 8. squeezenet1.1-7 | 5M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/vgg

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. vgg16-12-int8 | 101M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. vgg16-12 | 488M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. vgg16-7 | 489M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. vgg16-bn-7 | 489M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. vgg19-7 | 507M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. vgg19-bn-7 | 508M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. vgg19-caffe2-3 | 512M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 8. vgg19-caffe2-6 | 512M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 9. vgg19-caffe2-7 | 509M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 10. vgg19-caffe2-8 | 509M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 11. vgg19-caffe2-9 | 509M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/classification/zfnet-512

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. zfnet512-12-int8 | 48M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. zfnet512-12 | 309M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. zfnet512-3 | 310M | 3 | 3 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. zfnet512-6 | 312M | 3 | 6 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. zfnet512-7 | 309M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. zfnet512-8 | 309M | 3 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. zfnet512-9 | 309M | 3 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/duc

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. ResNet101-DUC-12-int8 | 68M | 4 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,3,3], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 2. ResNet101-DUC-12 | 247M | 4 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. ResNet101-DUC-7 | 248M | 3 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/faster-rcnn

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. FasterRCNN-10 | 148M | 4 | 10 | 🆗 | 🆗 | tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} | 🆗 |
❌ | 2. FasterRCNN-12-int8 | 36M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
⚠️ | 3. FasterRCNN-12 | 156M | 7 | 12 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow. | ➖ | ➖ |

### vision/object_detection_segmentation/fcn

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
⚠️ | 1. fcn-resnet101-11 | 281M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ | ➖ |
⚠️ | 2. fcn-resnet50-11 | 214M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ | ➖ |
❌ | 3. fcn-resnet50-12-int8 | 29M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
⚠️ | 4. fcn-resnet50-12 | 125M | 7 | 12 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=pytorch_half_pixel is not supported in Tensorflow. | ➖ | ➖ |

### vision/object_detection_segmentation/mask-rcnn

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. MaskRCNN-10 | 157M | 4 | 10 | 🆗 | 🆗 | tf.Range(tensor, tensor, tensor) -> (tensor<?xi64>) : {device = ""} | 🆗 |
❌ | 2. MaskRCNN-12-int8 | 34M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
⚠️ | 3. MaskRCNN-12 | 157M | 7 | 12 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow. | ➖ | ➖ |

### vision/object_detection_segmentation/retinanet

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. retinanet-9 | 146M | 6 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/ssd

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. ssd-10 | 75M | 4 | 10 | 🆗 | 🆗 | 🆗 | 🆗 |
❌ | 2. ssd-12-int8 | 30M | 7 | 12 | 🆗 | ValueError: slice index 3 of dimension 0 out of bounds. for '{{node strided_slice_13}} = StridedSlice[Index=DT_INT32, T=DT_INT8, begin_mask=0, ellipsis_mask=0, end_mask=0, new_axis_mask=0, shrink_axis_mask=1](strided_slice_12, strided_slice_13/stack, strided_slice_13/stack_1, strided_slice_13/stack_2)' with input shapes: [3,7,7], [1], [1], [1] and with computed input tensors: input[1] = <3>, input[2] = <4>, input[3] = <1>. | ➖ | ➖ |
✔️ | 3. ssd-12 | 86M | 7 | 12 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/ssd-mobilenetv1

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. ssd_mobilenet_v1_10 | 24M | 5 | 10 | 🆗 | TypeError: No common supertype of TensorArraySpec(TensorShape([3]), tf.int32, True, True) and TensorArraySpec(TensorShape([3]), tf.int32, None, True). | ➖ | ➖ |
❌ | 2. ssd_mobilenet_v1_12-int8 | 6M | 7 | 12 | 🆗 | TypeError: No common supertype of TensorArraySpec(TensorShape([3, None, None]), tf.float32, True, True) and TensorArraySpec(TensorShape([3, 0, 0]), tf.float32, None, True). | ➖ | ➖ |
❌ | 3. ssd_mobilenet_v1_12 | 24M | 7 | 12 | 🆗 | TypeError: No common supertype of TensorArraySpec(TensorShape([3, None, None]), tf.float32, True, True) and TensorArraySpec(TensorShape([3, 0, 0]), tf.float32, None, True). | ➖ | ➖ |

### vision/object_detection_segmentation/tiny-yolov2

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. tinyyolov2-7 | 58M | 5 | 7 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. tinyyolov2-8 | 58M | 5 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/object_detection_segmentation/tiny-yolov3

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. tiny-yolov3-11 | 32M | 6 | 11 | 🆗 | TypeError: No common supertype of TensorArraySpec(TensorShape([]), tf.int32, True, True) and TensorArraySpec(TensorShape([]), tf.int32, None, True). | ➖ | ➖ |

### vision/object_detection_segmentation/yolov2-coco

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. yolov2-coco-9 | 195M | 4 | 9 | 🆗 | 🆗 | tf.Transpose(tensor<1x64x13x2x13x2xf32>, tensor<6xi32>) -> (tensor<1x64x13x13x2x2xf32>) : {device = ""} | 🆗 |

### vision/object_detection_segmentation/yolov3

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
❌ | 1. yolov3-10 | 222M | 5 | 10 | 🆗 | TypeError: No common supertype of TensorArraySpec(TensorShape([]), tf.int32, True, True) and TensorArraySpec(TensorShape([]), tf.int32, None, True). | ➖ | ➖ |
❌ | 2. yolov3-12-int8 | 46M | 5 | 12 | 🆗 | ValueError: Cannot reshape a tensor with 1024 elements to shape [32,64,1,1] (2048 elements) for '{{node Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](concat_2, Reshape/shape)' with input shapes: [1024,1], [4] and with input tensors computed as partial shapes: input[1] = [32,64,1,1]. | ➖ | ➖ |
⚠️ | 3. yolov3-12 | 222M | 5 | 12 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow. | ➖ | ➖ |

### vision/object_detection_segmentation/yolov4

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
⚠️ | 1. yolov4 | 232M | 6 | 11 | 🆗 | OpUnsupportedException: Resize coordinate_transformation_mode=half_pixel and mode=nearest is not supported in Tensorflow. | ➖ | ➖ |

### vision/style_transfer/fast_neural_style

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. candy-8 | 7M | 4 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 2. candy-9 | 7M | 4 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 3. mosaic-8 | 7M | 4 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 4. mosaic-9 | 7M | 4 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 5. pointilism-8 | 7M | 4 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 6. pointilism-9 | 7M | 4 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 7. rain-princess-8 | 7M | 4 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 8. rain-princess-9 | 7M | 4 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 9. udnie-8 | 7M | 4 | 8 | 🆗 | 🆗 | 🆗 | 🆗 |
✔️ | 10. udnie-9 | 7M | 4 | 9 | 🆗 | 🆗 | 🆗 | 🆗 |

### vision/super_resolution/sub_pixel_cnn_2016

Status | Model | Size | IR | Opset | ONNX Checker | ONNX-TF Converted | TF-TFLite Converted | TF-TFLite Converted w/ Select Ops |
---|---|---|---|---|---|---|---|---|
✔️ | 1. super-resolution-10 | 2M | 4 | 10 | 🆗 | 🆗 | tf.Transpose(tensor<?x1x3x3x224x224xf32>, tensor<6xi32>) -> (tensor<?x1x224x3x224x3xf32>) : {device = ""} | 🆗 |
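
Models that converted only "w/ Select Ops" produce `.tflite` files containing Flex (TensorFlow) ops, so a runtime with the Flex delegate is required to execute them. A minimal smoke test, assuming a placeholder file name; the Python tensorflow package bundles the Flex delegate, so `tf.lite.Interpreter` can run such models directly:

```python
import numpy as np
import tensorflow as tf

# "model_select_ops.tflite" is a placeholder name, not a report artifact.
interpreter = tf.lite.Interpreter(model_path="model_select_ops.tflite")
interpreter.allocate_tensors()

# Feed zeros of each input's declared shape/dtype just to exercise the
# graph; models with dynamic dimensions would need resize_tensor_input.
for detail in interpreter.get_input_details():
    interpreter.set_tensor(
        detail["index"],
        np.zeros(detail["shape"], dtype=detail["dtype"]),
    )

interpreter.invoke()
print([interpreter.get_tensor(d["index"]).shape
       for d in interpreter.get_output_details()])
```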