So you want to write a partial helper, eh?
Hand-written code adds all kinds of functionality on top of generated GAPIC client libraries!
Let's look at the most common type of hand-written extension:
>> Helper methods added to the service object, each wrapping a call to a single RPC method <<
- Cloud Vision API example description
- tl;dr of how the Python Vision partials are authored today
- Using samplegen to generate a helper function
- Python example (full version)
- Python example (using helper)
- Generated helper function
- Script which demonstrates this hackery
- YAML definition of the helper (as a 'sample')
- Hacks to the .snip to make it go
"for demonstration purposes only"
The Google Cloud Vision API is a great candidate for this because, while the API supports performing various types of image analysis, there is actually only one RPC method available. To request a particular analysis, the RPC request contains an array of enum values, each representing one type of analysis. One or many types may be passed at once, but often folks just want an easy way to invoke a single type of analysis!
Here is an example of a Request to detect all of the Landmarks in a given image (see the How-to Guide):
```
ImageAnnotatorService.BatchAnnotateImages(
  BatchAnnotateImagesRequest {
    requests = [
      AnnotateImageRequest {
        image = Image {
          source = ImageSource { image_uri = "gs://bucket/path/to/an/image.png" }
        }
        features = [
          Feature { type = Feature.Type.LANDMARK_DETECTION }
        ]
      }
    ]
  }
)
```
And here is what the code looks like in Python:

```python
from google.cloud import vision

uri_for_image_stored_in_cloud_storage = "gs://cloud-samples-tests/vision/landmark.jpg"

client = vision.ImageAnnotatorClient()
response = client.batch_annotate_images(
    requests=[
        vision.types.AnnotateImageRequest(
            image=vision.types.Image(
                source=vision.types.ImageSource(
                    gcs_image_uri=uri_for_image_stored_in_cloud_storage
                )
            ),
            features=[
                vision.types.Feature(
                    type=vision.enums.Feature.Type.LANDMARK_DETECTION
                )
            ],
        )
    ]
)
print(response)
```
Skip to the FUN PART below.
Wouldn't it be great if we could simply call client.landmark_detection(myImage)?
Well, you can! It already exists! Check out the current Landmark Detection Python Sample and you'll see it looks like this:
```python
image = vision.types.Image()
image.source.image_uri = uri
response = client.landmark_detection(image=image)
```
If you're interested in how that helper method is added today, you can check out the vision_helpers/ folder in the Python client library for Google Cloud Vision:

- add_single_feature_methods loops over the available Feature enum values
- For each Feature, it dynamically creates a function via _create_single_feature_method and dynamically adds that function to the VisionHelpers class
- Each dynamically created function for each enum value calls the helper method annotate_image, which invokes BatchAnnotateImages with a single request (rather than an array of requests)
- The VisionHelpers class is inherited by the ImageAnnotatorClient class using multiple inheritance, and @add_single_feature_methods is invoked as a decorator:
```python
@add_single_feature_methods
class ImageAnnotatorClient(VisionHelpers, iac.ImageAnnotatorClient):
```
^-- some hand-wavy guessing; I'm not actually familiar with Python's inheritance model or how Python decorators work :P
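For the curious, the dynamic-method pattern described above can be sketched in plain Python. This is a toy stand-in, not the actual vision_helpers code: the FeatureType enum and the returned request dict are simplifications (the real helper builds proto messages and calls annotate_image on a real client).

```python
import enum


class FeatureType(enum.Enum):
    # Hypothetical stand-in for vision.enums.Feature.Type
    LANDMARK_DETECTION = 1
    LABEL_DETECTION = 2


def _create_single_feature_method(feature):
    """Factory: builds one helper method bound to a single feature enum."""
    def inner(self, image):
        # The real helper would call self.annotate_image(...) here;
        # this sketch just returns the single-feature request shape.
        return {"image": image, "features": [{"type": feature.name}]}
    inner.__name__ = feature.name.lower()
    return inner


def add_single_feature_methods(cls):
    """Class decorator: attaches one helper method per FeatureType value."""
    for feature in FeatureType:
        # e.g. adds client.landmark_detection(...) and client.label_detection(...)
        setattr(cls, feature.name.lower(), _create_single_feature_method(feature))
    return cls


@add_single_feature_methods
class ImageAnnotatorClient:  # hypothetical stand-in for the real client
    pass


client = ImageAnnotatorClient()
request = client.landmark_detection("gs://bucket/img.png")
```

Because the factory function closes over each `feature` value separately, every generated method carries its own enum, which is exactly the trick that avoids Python's late-binding-in-loops gotcha.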
But that's not what we're here to do. We're here to generate helper methods across all languages.
So let's do so by misusing the sample generation features...
We all know why we're here.
Let's repurpose our snippets as helper methods, shall we?
I'm not going to actually add the function to VisionHelpers or anything; I'm just going to generate a top-level function, def detect_landmarks, which only accepts a GCS URI and returns the API response. We won't accept any other **kwargs to pass along or anything, though that's something we should look into really doing (the dynamically generated helper represents the request as a dictionary with splatted **kwargs: here).
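Under those constraints, the generated helper might look roughly like this. A hedged sketch, not the actual samplegen output: build_landmark_request is my own name, and I'm relying on the fact that the Python GAPIC layer generally accepts plain dicts in place of proto message objects.

```python
def build_landmark_request(gcs_uri):
    """Build a single AnnotateImageRequest (as a plain dict) for one GCS URI."""
    return {
        "image": {"source": {"image_uri": gcs_uri}},
        "features": [{"type": "LANDMARK_DETECTION"}],
    }


def detect_landmarks(gcs_uri):
    """Hypothetical generated helper: one feature type, one request, no **kwargs."""
    # Imported here so this sketch stays loadable without the client library;
    # actually calling this requires `pip install google-cloud-vision` and credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()
    response = client.batch_annotate_images(
        requests=[build_landmark_request(gcs_uri)]
    )
    return response
```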
I replaced the Python sample above with the following (and my goal was to make it work):

```python
from generated_helper import detect_landmarks

response = detect_landmarks("gs://cloud-samples-tests/vision/landmark.jpg")
print(response)
```
... and so I did.
Below is a hacky script that does all the hacky things necessary to make it go!
- clone a fresh gapic-generator (because I hack the .snip)
- patch gapic-generator with snip.diff to hack the existing standalone snippet template:
  - remove region tags
  - remove print output
  - remove commented-out defaults
  - provide a name for the function in the YAML (abusing the unused title config field)
  - return response
- compile gapic-generator
- clone a fresh googleapis (because I represent the helper method as a sample)
- patch googleapis with detect_landmarks.diff (adds a "sample" representing the helper method to the Vision V1 GAPIC config)
- pull the artman docker image
- generate the Python "sample" (i.e. def detect_landmarks()) and save it in the local directory as generated_helpers.py
Then, you can run it:

- pip install google-cloud-vision
- run the above Python example, which calls detect_landmarks()

It should give you the exact same output as the full-length sample which calls batch_annotate_images() directly.
Very impressive!
This is indeed something we can hack the samplegen tech for, but I do want to come up with a less hacky and easier way to do this. But that's mostly cosmetic. As we've discussed and as you've shown here, the guts are already in place.
(The only non-minor part would be switching to a pseudo-language, but that would likely happen for samples anyway, so it's independent of adapting the sample technology for partials.)