On macOS the system Python crashes with weird bugs.
You can work around that with a virtualenv and Python 2.7.
Here's what I did:
virtualenv -p /usr/bin/python2.7 env
source env/bin/activate
pip install tensorflow
pip install keras==1.2.2
pip install h5py
pip install coremltools
Once you've done that, you need to edit the image dimension inputs on lines 4 & 5 of pose_deploy.prototxt (otherwise the generated CoreML model will expect a 1x1 input image).
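For reference, the top of pose_deploy.prototxt typically looks like the excerpt below (shown here after the edit; the file ships with 1s on lines 4 and 5, so check your copy):

input: "image"
input_dim: 1   # batch size
input_dim: 3   # channels
input_dim: 368 # height (ships as 1; set to your input height)
input_dim: 368 # width (ships as 1; set to your input width)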
Then just run: python convert.py
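In case you're curious what the conversion amounts to, here's a minimal sketch of a convert.py using coremltools' Caffe converter. The .caffemodel filename is a guess; use whatever your weights file is called:

import coremltools

# Pair the weights with the edited prototxt; the weights filename here is an assumption
coreml_model = coremltools.converters.caffe.convert(
    ('pose_iter_116000.caffemodel', 'pose_deploy.prototxt'),
    image_input_names='image'  # expose the 'image' input as an image rather than a multiarray
)
coreml_model.save('Face.mlmodel')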
You'll end up with a Face.mlmodel that you can import into Xcode.
If you attempt to use this model with the Vision framework, nothing will happen: no errors are thrown, and you just get empty results. As of this writing the Vision framework seems to only support classifier models.
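For reference, the Vision path that silently fails looks something like this (a sketch; detectWithVision is a hypothetical helper, and Face is the class Xcode generates from the mlmodel):

import UIKit
import Vision

// Sketch: as described above, this runs without errors but yields nothing
func detectWithVision(on image: UIImage) throws {
    let visionModel = try VNCoreMLModel(for: Face().model)
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // No error is reported; results just come back empty
        print(request.results ?? [])
    }
    let handler = VNImageRequestHandler(cgImage: image.cgImage!, options: [:])
    try handler.perform([request])
}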
You can use CoreML directly to make predictions like so:
import CoreML

// resize(to:) and pixelBuffer() are helper extensions that scale the image and convert it to a CVPixelBuffer
let pixelBuffer = self.sourceImage.resize(to: CGSize(width: 368, height: 368)).pixelBuffer()
let model = Face()
let output = try? model.prediction(image: pixelBuffer!)
It's pretty slow, and I get back an output matrix (an MLMultiArray). That's as far as I've gotten; I haven't investigated the output data too closely.
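If you want to poke at the output yourself, something like the sketch below should work. The feature name "net_output" and the channels x height x width heatmap layout are assumptions; check the generated FaceOutput class and the model description in Xcode for the real values.

if let heatmaps = output?.featureValue(for: "net_output")?.multiArrayValue {
    print(heatmaps.shape)  // e.g. [71, 46, 46]: one heatmap channel per keypoint
    let height = heatmaps.shape[1].intValue
    let width = heatmaps.shape[2].intValue
    // The peak of a channel's heatmap is that keypoint's predicted location
    var best = (x: 0, y: 0, score: -Double.infinity)
    for y in 0..<height {
        for x in 0..<width {
            let score = heatmaps[[0, y, x] as [NSNumber]].doubleValue
            if score > best.score { best = (x: x, y: y, score: score) }
        }
    }
    print("keypoint 0 peaks at (\(best.x), \(best.y)) with confidence \(best.score)")
}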
I never did.
Hopefully I'll find myself in a spot where I can in the near future. How about you, a year on? :)