On macOS the system Python crashes with weird bugs.
You can get around that by using virtualenv with Python 2.7.
Here's what I did:
virtualenv -p /usr/bin/python2.7 env
source env/bin/activate
pip install tensorflow
pip install keras==1.2.2
pip install h5py
pip install coremltools
Once you've done that, you need to edit the image dimension inputs on lines 4 & 5 of pose_deploy.prototxt (otherwise the generated CoreML model will expect a 1x1 input image).
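For reference, the top of pose_deploy.prototxt should look something like this (the exact input blob name may vary); the last two input_dim values are the height and width to change from 1 to 368:

input: "image"
input_dim: 1    # batch size
input_dim: 3    # channels
input_dim: 368  # height (line 4, was 1)
input_dim: 368  # width  (line 5, was 1)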
Then just run: python convert.py
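(convert.py is presumably a thin wrapper around coremltools' Caffe converter, something like the sketch below; the .caffemodel filename and the 'image' input blob name are my assumptions, so match them to your files.)

import coremltools

# Sketch of a Caffe -> CoreML conversion; filenames and the input blob
# name are assumptions -- match them to your model files.
coreml_model = coremltools.converters.caffe.convert(
    ('pose.caffemodel', 'pose_deploy.prototxt'),
    image_input_names='image',  # expose the input as an image, not an MLMultiArray
)
coreml_model.save('Face.mlmodel')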
You'll end up with a Face.mlmodel that you can import into Xcode.
If you attempt to use this model with the Vision framework, nothing will happen: no errors will be thrown, you will just get empty results. As of this writing, the Vision framework seems to only support classifier models.
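(Here's a minimal sketch of the Vision wiring in question, where cgImage stands in for whatever source image you have; the request completes without error but its results come back empty.)

import Vision

let visionModel = try VNCoreMLModel(for: Face().model)
let request = VNCoreMLRequest(model: visionModel) { request, _ in
    print(request.results ?? [])  // empty for this non-classifier model
}
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])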
You can use CoreML directly to make predictions like so:
// resize(to:) and pixelBuffer() are helper extensions (not UIKit API) that
// scale the image to the model's 368x368 input and wrap it in a CVPixelBuffer
let pixelBuffer = self.sourceImage.resize(to: CGSize(width: 368, height: 368)).pixelBuffer()
let m = Face()
let output = try? m.prediction(image: pixelBuffer!)
It's pretty slow, and I get an output matrix back. That's as far as I've gotten; I haven't investigated the output data too closely.
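If you want to dig into that matrix, here's a hedged sketch (the function name and shape assumptions are mine) that scans a heatmap MLMultiArray shaped [1, 1, channels, height, width] and returns the peak cell per channel:

import CoreML

// Hypothetical helper: walks a heatmap MLMultiArray shaped [1, 1, C, H, W]
// and returns the highest-scoring (x, y) cell for each of the C channels.
func keypointPeaks(in heatmaps: MLMultiArray) -> [(x: Int, y: Int, score: Double)] {
    let channels = heatmaps.shape[2].intValue
    let height   = heatmaps.shape[3].intValue
    let width    = heatmaps.shape[4].intValue
    var peaks: [(x: Int, y: Int, score: Double)] = []
    for c in 0..<channels {
        var best = (x: 0, y: 0, score: -Double.infinity)
        for y in 0..<height {
            for x in 0..<width {
                let idx = [0, 0, c, y, x].map { NSNumber(value: $0) }
                let v = heatmaps[idx].doubleValue
                if v > best.score { best = (x, y, v) }
            }
        }
        peaks.append(best)
    }
    return peaks
}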
Hey...! @melito
I am working on it: I generated the CoreML model, gave it an image for prediction, and I get the output as a 1 x 1 x 22 x 40 x 40 MLMultiArray.
Now I am trying to convert that back to an image. Sometimes I get only a few lines and sometimes nothing, and I don't know how to convert it to a UIImage or cv::Mat.
I followed the link below:
https://gist.github.com/otmb/7b2e1caf3330b97c82dc217af5844ad5
Can you help?