Created April 6, 2026 12:38
Gist: jacobsapps/c19b740d705ac2dda84b6bc3ec9518a1
Compute Embedding from Apple Photos, Face Tagging, and Embeddings
func computeEmbedding(
    face: VNFaceObservation,
    image: CGImage,
    model: MLModel
) async throws -> [Float]? {
    // 1. Crop the face out of the original image
    guard let crop = cropFace(observation: face, from: image) else { return nil }

    // 2. Resize the face crop to the model's input size
    let pixelBuffer = try makePixelBuffer(from: crop, size: 160)
    let input = try MLDictionaryFeatureProvider(dictionary: [
        "image": MLFeatureValue(pixelBuffer: pixelBuffer)
    ])

    // 3. Run the model
    let prediction = try await model.prediction(from: input)
    guard let array = prediction.featureValue(for: "embedding")?.multiArrayValue else {
        return nil
    }

    // 4. L2-normalize the output so embeddings can be compared directly
    return l2Normalize(array)
}
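The function above calls two helpers that the gist does not show. A minimal sketch of `cropFace` and `makePixelBuffer` might look like the following; the `EmbeddingError` type and the exact drawing setup are assumptions, not part of the original.

```swift
import Vision
import CoreML
import CoreGraphics
import CoreVideo

// Hypothetical error type, not defined in the gist.
enum EmbeddingError: Error {
    case pixelBufferCreationFailed
}

// Vision reports boundingBox in normalized coordinates with a bottom-left
// origin, while CGImage cropping uses a top-left origin, so convert and flip y.
func cropFace(observation: VNFaceObservation, from image: CGImage) -> CGImage? {
    let rect = VNImageRectForNormalizedRect(
        observation.boundingBox, image.width, image.height
    )
    let flipped = CGRect(
        x: rect.origin.x,
        y: CGFloat(image.height) - rect.origin.y - rect.height,
        width: rect.width,
        height: rect.height
    )
    return image.cropping(to: flipped)
}

// Draws the crop into a square 32ARGB pixel buffer of the model's input size.
func makePixelBuffer(from image: CGImage, size: Int) throws -> CVPixelBuffer {
    var buffer: CVPixelBuffer?
    let attrs = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true
    ] as CFDictionary
    let status = CVPixelBufferCreate(
        kCFAllocatorDefault, size, size, kCVPixelFormatType_32ARGB, attrs, &buffer
    )
    guard status == kCVReturnSuccess, let pixelBuffer = buffer else {
        throw EmbeddingError.pixelBufferCreationFailed
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }
    let context = CGContext(
        data: CVPixelBufferGetBaseAddress(pixelBuffer),
        width: size, height: size, bitsPerComponent: 8,
        bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
    )
    // Drawing into a size × size context is what actually resizes the face crop.
    context?.draw(image, in: CGRect(x: 0, y: 0, width: size, height: size))
    return pixelBuffer
}
```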
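`l2Normalize` scales the raw model output to unit length, which makes cosine similarity between two faces a plain dot product. A sketch on `[Float]` (the gist's version takes an `MLMultiArray`, which can be flattened to `[Float]` first and normalized the same way):

```swift
import Foundation

// Scales a vector to unit (L2) length; a zero vector is returned unchanged.
func l2Normalize(_ values: [Float]) -> [Float] {
    let norm = sqrt(values.reduce(0) { $0 + $1 * $1 })
    guard norm > 0 else { return values }
    return values.map { $0 / norm }
}

// For unit-length embeddings, cosine similarity reduces to a dot product.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    zip(a, b).reduce(0) { $0 + $1.0 * $1.1 }
}

let a = l2Normalize([3, 4])   // [0.6, 0.8]
let b = l2Normalize([4, 3])   // [0.8, 0.6]
print(cosineSimilarity(a, a)) // ≈ 1.0 — identical embeddings
print(cosineSimilarity(a, b)) // ≈ 0.96
```

A similarity threshold (commonly somewhere around 0.7–0.9, depending on the model) then decides whether two embeddings belong to the same person.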
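One way to obtain the `VNFaceObservation` values that `computeEmbedding` expects is a `VNDetectFaceRectanglesRequest`. A sketch of tying detection and embedding together (the function names other than the Vision APIs are assumptions):

```swift
import Vision
import CoreML

// Runs Vision face detection on a single image.
func detectFaces(in image: CGImage) throws -> [VNFaceObservation] {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results ?? []
}

// Embeds every detected face in a photo, skipping faces that fail to crop.
func embedFaces(in image: CGImage, model: MLModel) async throws -> [[Float]] {
    var embeddings: [[Float]] = []
    for face in try detectFaces(in: image) {
        if let embedding = try await computeEmbedding(
            face: face, image: image, model: model
        ) {
            embeddings.append(embedding)
        }
    }
    return embeddings
}
```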