ML Model Conversions

Example commands and logs for converting a Keras model (OneClassAnomalyDetection-RaspberryPi3) to TensorFlow, ONNX, Caffe, PyTorch, OpenVINO IR, and TensorFlow Lite, mainly with MMdnn, the OpenVINO Model Optimizer, and TOCO.

MMdnn

$ sudo -H pip3 install -U git+https://github.com/Microsoft/MMdnn.git@master
$ sudo -H pip3 install onnx-tf
$ mmconvert -h
usage: mmconvert [-h]
                 [--srcFramework {caffe,caffe2,cntk,mxnet,keras,tensorflow,tf,pytorch}]
                 [--inputWeight INPUTWEIGHT] [--inputNetwork INPUTNETWORK]
                 --dstFramework
                 {caffe,caffe2,cntk,mxnet,keras,tensorflow,coreml,pytorch,onnx}
                 --outputModel OUTPUTMODEL [--dump_tag {SERVING,TRAINING}]

optional arguments:
  -h, --help            show this help message and exit
  --srcFramework {caffe,caffe2,cntk,mxnet,keras,tensorflow,tf,pytorch}, -sf {caffe,caffe2,cntk,mxnet,keras,tensorflow,tf,pytorch}
                        Source toolkit name of the model to be converted.
  --inputWeight INPUTWEIGHT, -iw INPUTWEIGHT
                        Path to the model weights file of the external tool
                        (e.g caffe weights proto binary, keras h5 binary
  --inputNetwork INPUTNETWORK, -in INPUTNETWORK
                        Path to the model network file of the external tool
                        (e.g caffe prototxt, keras json
  --dstFramework {caffe,caffe2,cntk,mxnet,keras,tensorflow,coreml,pytorch,onnx}, -df {caffe,caffe2,cntk,mxnet,keras,tensorflow,coreml,pytorch,onnx}
                        Format of model at srcModelPath (default is to auto-
                        detect).
  --outputModel OUTPUTMODEL, -om OUTPUTMODEL
                        Path to save the destination model
  --dump_tag {SERVING,TRAINING}
                        Tensorflow model dump type
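
Before running mmconvert, it can help to confirm that the weights/JSON pair actually loads in Keras. A minimal Python sketch, assuming the same model paths used in the conversions below:

# Sanity-check the Keras model pair before handing it to mmconvert.
from keras.models import model_from_json

with open("OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json") as f:
    model = model_from_json(f.read())
model.load_weights("OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5")
model.summary()  # layer names printed here (input_1, global_average_pooling2d_1, ...) are reused below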

Keras -> Tensorflow -> OpenVINO

$ python3 keras2tensorflow/keras_to_tensorflow.py \
--input_model="OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5" \
--input_model_json="OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json" \
--output_model="models/tensorflow/weights.pb"
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model models/tensorflow/weights.pb \
--output_dir irmodels/tensorflow/FP16 \
--input input_1 \
--output global_average_pooling2d_1/Mean \
--data_type FP16 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/tensorflow/weights.pb
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	input_1
	- Output layers: 	global_average_pooling2d_1/Mean
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16/weights.xml
[ SUCCESS ] BIN file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP16/weights.bin
[ SUCCESS ] Total execution time: 5.31 seconds. 
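
The --input and --output node names passed to mo_tf.py can be confirmed by listing the nodes in the frozen graph. A short Python sketch using the TensorFlow 1.x API:

# Print node names/ops in the frozen graph to confirm the --input / --output
# arguments (input_1 and global_average_pooling2d_1/Mean).
import tensorflow as tf

graph_def = tf.GraphDef()
with tf.gfile.GFile("models/tensorflow/weights.pb", "rb") as f:
    graph_def.ParseFromString(f.read())
for node in graph_def.node:
    print(node.name, node.op)
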
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo_tf.py \
--input_model models/tensorflow/weights.pb \
--output_dir irmodels/tensorflow/FP32 \
--input input_1 \
--output global_average_pooling2d_1/Mean \
--data_type FP32 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/tensorflow/weights.pb
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP32
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	input_1
	- Output layers: 	global_average_pooling2d_1/Mean
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Offload unsupported operations: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Operations to offload: 	None
	- Patterns to offload: 	None
	- Use the config file: 	None
Model Optimizer version: 	1.5.12.49d067a0
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:34: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters

[ SUCCESS ] Generated IR model.
[ SUCCESS ] XML file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP32/weights.xml
[ SUCCESS ] BIN file: /home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/tensorflow/FP32/weights.bin
[ SUCCESS ] Total execution time: 5.59 seconds. 
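
Once the IR is generated, it can be loaded with the OpenVINO Inference Engine Python API. A minimal inference sketch, assuming the 2018-era API (IENetwork/IEPlugin; newer releases use IECore) and the model's 96x96x3 input reordered to NCHW by the Model Optimizer:

# Load the generated FP32 IR and run a dummy inference.
import numpy as np
from openvino.inference_engine import IENetwork, IEPlugin

net = IENetwork(model="irmodels/tensorflow/FP32/weights.xml",
                weights="irmodels/tensorflow/FP32/weights.bin")
plugin = IEPlugin(device="CPU")          # use the FP16 IR with device="MYRIAD" on an NCS
exec_net = plugin.load(network=net)

input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))
dummy = np.zeros((1, 3, 96, 96), dtype=np.float32)  # NCHW; MO reorders the 96x96x3 NHWC input
result = exec_net.infer(inputs={input_blob: dummy})
print(result[output_blob].shape)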

Keras -> ONNX -> OpenVINO

$ mmconvert \
-sf keras \
-iw OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5 \
-in OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json \
-df onnx \
-om models/onnx/weights.onnx
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py \
--framework onnx \
--input_model models/onnx/weights.onnx \
--output_dir irmodels/onnx/FP16 \
--data_type FP16 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/onnx/weights.onnx
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/onnx/FP16
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
ONNX specific parameters:
Model Optimizer version: 	1.5.12.49d067a0
[ ERROR ]  Cannot infer shapes or values for node "Conv1_relu".
[ ERROR ]  There is no registered "infer" function for node "Conv1_relu" with op = "Clip". Please implement this function in the extensions. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #37. 
[ ERROR ]  
[ ERROR ]  It can happen due to bug in custom shape infer function <UNKNOWN>.
[ ERROR ]  Or because the node inputs have incorrect values/shapes.
[ ERROR ]  Or because input shapes are incorrect (embedded to the model or passed via --input_shape).
[ ERROR ]  Run Model Optimizer with --log_level=DEBUG for more information.
[ ERROR ]  Stopped shape/value propagation at "Conv1_relu" node. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #38. 
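
The failure above is the Clip op (node "Conv1_relu") that this Model Optimizer version has no shape-inference function for. To see which ops the exported graph actually contains, the model can be inspected with the onnx package; a small Python sketch:

# Inspect the exported ONNX graph to see which ops it contains
# (the error above points at a Clip op this MO version cannot handle).
import onnx

model = onnx.load("models/onnx/weights.onnx")
onnx.checker.check_model(model)  # structural validation
for node in model.graph.node:
    print(node.name, node.op_type)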

Keras -> Caffe -> OpenVINO

$ mmconvert \
-sf keras \
-iw OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5 \
-in OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json \
-df caffe \
-om models/caffe/weights
$ sudo python3 /opt/intel/computer_vision_sdk/deployment_tools/model_optimizer/mo.py \
--framework caffe \
--input_model models/caffe/weights.caffemodel \
--input_proto models/caffe/weights.prototxt \
--output_dir irmodels/caffe/FP16 \
--data_type FP16 \
--batch 1
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/caffe/weights.caffemodel
	- Path for generated IR: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/irmodels/caffe/FP16
	- IR output name: 	weights
	- Log level: 	ERROR
	- Batch: 	1
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	Not specified, inherited from the model
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	False
	- Reverse input channels: 	False
Caffe specific parameters:
	- Enable resnet optimization: 	True
	- Path to the Input prototxt: 	/home/xxxx/git/Keras-OneClassAnomalyDetection/models/caffe/weights.prototxt
	- Path to CustomLayersMapping.xml: 	Default
	- Path to a mean file: 	Not specified
	- Offsets for a mean file: 	Not specified
Model Optimizer version: 	1.5.12.49d067a0
[ ERROR ]  Unexpected exception happened during extracting attributes for node block_14_depthwise_BN.
Original exception message: Found custom layer "DummyData2". Model Optimizer does not support this layer. Please, implement extension. 
 For more information please refer to Model Optimizer FAQ (<INSTALL_DIR>/deployment_tools/documentation/docs/MO_FAQ.html), question #45.
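
Here the Model Optimizer stops on the "DummyData2" layer that MMdnn emitted into the prototxt. Listing the layer names and types in the generated prototxt shows where it sits; a sketch assuming pycaffe and its protobuf bindings are available:

# List layer names/types in the MMdnn-generated prototxt to locate the
# layer that Model Optimizer rejects. Requires pycaffe.
from caffe.proto import caffe_pb2
from google.protobuf import text_format

net = caffe_pb2.NetParameter()
with open("models/caffe/weights.prototxt") as f:
    text_format.Merge(f.read(), net)
for layer in net.layer:
    print(layer.name, layer.type)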

Keras -> PyTorch

$ mmconvert \
-sf keras \
-iw OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5 \
-in OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json \
-df pytorch \
-om models/pytorch/weights.pth
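
A quick smoke test of the converted model is to load the .pth and push a dummy tensor through it. A sketch assuming the file holds a full serialized module, as MMdnn typically dumps it (the generated model definition may also need to be importable when loading):

# Smoke-test the converted PyTorch model with a dummy input.
import torch

model = torch.load("models/pytorch/weights.pth")
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 96, 96))  # NCHW; adjust if MMdnn emitted a different layout
print(out.shape)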

Keras -> Tensorflow -> Tensorflow Lite

$ python3 keras2tensorflow/keras_to_tensorflow.py \
--input_model="OneClassAnomalyDetection-RaspberryPi3/DOC/model/weights.h5" \
--input_model_json="OneClassAnomalyDetection-RaspberryPi3/DOC/model/model.json" \
--output_model="models/tensorflow/weights.pb"
$ sudo apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev
$ sudo pip3 install keras_applications==1.0.7 --no-deps
$ sudo pip3 install keras_preprocessing==1.0.9 --no-deps
$ sudo pip3 install h5py==2.9.0
$ sudo apt-get install -y openmpi-bin libopenmpi-dev
$ sudo pip3 uninstall tensorflow
$ wget -O tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl https://github.com/PINTO0309/Tensorflow-bin/raw/master/tensorflow-1.11.0-cp35-cp35m-linux_armv7l_jemalloc_multithread.whl
$ sudo pip3 install tensorflow-1.11.0-cp35-cp35m-linux_armv7l.whl
$ cd ~
$ wget https://github.com/PINTO0309/Bazel_bin/raw/master/0.17.2/Raspbian_armhf/install.sh
$ sudo chmod +x install.sh
$ ./install.sh
$ git clone -b v1.11.0 https://github.com/tensorflow/tensorflow.git
$ cd tensorflow
$ git checkout v1.11.0
$ ./tensorflow/contrib/lite/tools/make/download_dependencies.sh
$ sudo bazel build tensorflow/contrib/lite/toco:toco
$ cd ~/tensorflow
$ mkdir output
$ cp ~/Keras-OneClassAnomalyDetection/models/tensorflow/weights.pb . #<-- "." required
$ sudo bazel-bin/tensorflow/contrib/lite/toco/toco \
--input_file=weights.pb  \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--output_file=output/weights.tflite \
--input_shapes=1,96,96,3 \
--inference_type=FLOAT \
--input_type=FLOAT \
--input_arrays=input_1 \
--output_arrays=global_average_pooling2d_1/Mean \
--post_training_quantize
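
The resulting weights.tflite can be exercised directly with the TensorFlow Lite Python interpreter. A minimal sketch (in TF 1.11 the class lives under tf.contrib.lite, in newer releases under tf.lite):

# Run the converted model with the TF Lite interpreter.
import numpy as np
import tensorflow as tf

interpreter = tf.contrib.lite.Interpreter(model_path="output/weights.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

dummy = np.zeros(input_details[0]["shape"], dtype=np.float32)  # 1x96x96x3, FLOAT input
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)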