Dynamic Generation of GCP Resource Mapping

Overview

This document explains the logic used to dynamically generate a master list of GCP resources and their corresponding utility scripts. The master list is automatically created by scanning the utility scripts in the project and mapping them to human-readable resource descriptions. This approach ensures that the mapping is always up to date and reduces manual intervention.

Steps Involved

1. Scanning the utils/ Directory

A Python script scans the utils/ directory, where all the utility scripts for managing GCP resources are stored. The script specifically looks for Python files that start with create_, as these are the scripts designed to create or manage GCP resources.
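The scan described above can be sketched as follows. This is a minimal illustration, not the project's actual script; the `utils/` directory name and the `create_*.py` convention come from this document, while the function name is hypothetical.

```python
from pathlib import Path

def find_creation_scripts(utils_dir="utils"):
    """Return the base names (no .py) of all create_*.py scripts in utils/."""
    return sorted(p.stem for p in Path(utils_dir).glob("create_*.py"))
```

For example, a `utils/` directory containing `create_bucket.py` and `create_bigquery_dataset.py` would yield `["create_bigquery_dataset", "create_bucket"]`; non-matching files such as a README are ignored.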

2. Inferring Resource Names

The script infers a human-readable name for each GCP resource by processing the names of the utility scripts. It follows these steps:

  • Convert underscores to spaces: This makes the name more readable.
  • Remove the prefix create_: This focuses on the resource name itself.
  • Title-case the result: This gives a formal, human-readable appearance to the resource name.

For example:

  • A script named create_bigquery_dataset.py would be inferred as "Bigquery Dataset".
  • A script named create_cloud_storage_bucket.py would be inferred as "Cloud Storage Bucket".
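The three transformation steps above can be expressed directly in Python (a sketch; the function name is illustrative, and `str.removeprefix`/`str.removesuffix` require Python 3.9+):

```python
def infer_resource_name(script_name):
    """Derive a human-readable resource name from a create_*.py script name."""
    # Strip the .py extension and the create_ prefix,
    # then convert underscores to spaces and title-case the result.
    base = script_name.removesuffix(".py").removeprefix("create_")
    return base.replace("_", " ").title()
```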

3. Generating the Mapping

The script generates a JSON file (gcp_resource_mapping.json) that maps these inferred human-readable names to their corresponding utility script names. This mapping is stored as key-value pairs in the JSON file:

  • Key: Human-readable GCP resource name (in lowercase).
  • Value: The name of the corresponding utility function, which matches the script's base name (e.g. create_bigquery_dataset).
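Putting the key and value rules together, the generation step might look like this (a sketch assuming the function name matches each script's base name, as described above; the helper name is hypothetical):

```python
import json

def write_mapping(script_files, out_path="gcp_resource_mapping.json"):
    """Build the resource-name -> utility-function mapping and save it as JSON."""
    mapping = {}
    for script in script_files:
        base = script.removesuffix(".py")
        # Key: lowercase human-readable name; value: the script/function base name.
        key = base.removeprefix("create_").replace("_", " ")
        mapping[key] = base
    with open(out_path, "w") as f:
        json.dump(mapping, f, indent=4)
    return mapping
```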

4. Using the Generated Mapping

The master script that manages GCP resources uses this generated JSON file. It dynamically reads the mapping and calls the appropriate utility functions to manage the resources detected in the notebook.
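A minimal sketch of this dispatch step is shown below. The `registry` argument (mapping function names to the callables imported from utils/) and the function name are assumptions for illustration, since the document does not show the master script itself.

```python
import json

def run_utility(resource_name, registry, mapping_file="gcp_resource_mapping.json", **kwargs):
    """Look up the utility function for a detected resource and invoke it."""
    with open(mapping_file) as f:
        mapping = json.load(f)
    func_name = mapping[resource_name.lower()]  # keys are stored in lowercase
    return registry[func_name](**kwargs)       # registry: {function name: callable}
```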

Benefits

  • Automation: No need for manual updates to the resource mapping. The script automatically keeps the mapping up to date by scanning the utils/ directory.
  • Consistency: The mapping is always consistent with the actual utility scripts available in the project.
  • Scalability: As new GCP resources and utility scripts are added, the script can easily accommodate them without requiring additional manual steps.

Example of the JSON Mapping

Here’s an example of what the generated gcp_resource_mapping.json might look like:

{
    "cloud storage bucket": "create_cloud_storage_bucket",
    "bigquery dataset": "create_bigquery_dataset",
    "pubsub topic": "create_pubsub_topic",
    "cloud function": "create_cloud_function"
}