This document explains the logic used to dynamically generate a master list of GCP resources and their corresponding utility scripts. The master list is automatically created by scanning the utility scripts in the project and mapping them to human-readable resource descriptions. This approach ensures that the mapping is always up to date and reduces manual intervention.
A Python script scans the `utils/` directory, where all the utility scripts for managing GCP resources are stored. The script specifically looks for Python files whose names start with `create_`, as these are the scripts designed to create or manage GCP resources.
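A minimal sketch of this scan (the `find_creation_scripts` helper name is hypothetical; only the `utils/` directory and the `create_` prefix come from the project description):

```python
from pathlib import Path

def find_creation_scripts(utils_dir="utils"):
    """Return the stem (filename without .py) of each create_* script."""
    # Only files matching the create_*.py pattern are treated as
    # resource-creation utilities; everything else is ignored.
    return sorted(p.stem for p in Path(utils_dir).glob("create_*.py"))
```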
The script infers a human-readable name for each GCP resource by processing the names of the utility scripts. It follows these steps:
- Convert underscores to spaces: this makes the name more readable.
- Remove the `create_` prefix: this focuses on the resource name itself.
- Title-case the result: this gives the resource name a formal, human-readable appearance.
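The steps above can be sketched as a small helper (the `infer_resource_name` name is hypothetical; the three transformations are the ones listed):

```python
def infer_resource_name(script_stem: str) -> str:
    """Infer a human-readable resource name from a create_* script name."""
    name = script_stem.replace("_", " ")   # 1. underscores -> spaces
    name = name.removeprefix("create ")    # 2. drop the "create" prefix
    return name.title()                    # 3. title-case the result
```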
For example:
- A script named `create_bigquery_dataset.py` would be inferred as "Bigquery Dataset".
- A script named `create_cloud_storage_bucket.py` would be inferred as "Cloud Storage Bucket".
The script generates a JSON file (`gcp_resource_mapping.json`) that maps these inferred human-readable names to their corresponding utility script names. The mapping is stored as key-value pairs in the JSON file:
- Key: the human-readable GCP resource name, in lowercase.
- Value: the name of the corresponding utility function (matching the script's filename without the `.py` extension).
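A hedged sketch of how the JSON file could be produced (the `build_mapping` and `write_mapping` helper names are hypothetical; the lowercase-key convention and output filename are from the description above):

```python
import json
from pathlib import Path

def build_mapping(script_stems):
    """Map lowercase human-readable resource names to their script names."""
    # Key: stem with the "create_" prefix removed and underscores
    # converted to spaces (already lowercase). Value: the script name.
    return {
        stem.removeprefix("create_").replace("_", " "): stem
        for stem in script_stems
    }

def write_mapping(script_stems, out_path="gcp_resource_mapping.json"):
    mapping = build_mapping(script_stems)
    Path(out_path).write_text(json.dumps(mapping, indent=2))
    return mapping
```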
The master script that manages GCP resources uses this generated JSON file. It dynamically reads the mapping and calls the appropriate utility functions to manage the resources detected in the notebook.
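A hedged sketch of that dispatch (the `manage_resource` helper is hypothetical; it assumes each `utils/` script exposes a function named after the script itself, matching the mapping's values):

```python
import importlib
import json

def manage_resource(resource_name, mapping_path="gcp_resource_mapping.json"):
    """Look up and invoke the utility function for a detected resource."""
    with open(mapping_path) as f:
        mapping = json.load(f)
    func_name = mapping[resource_name.lower()]  # keys are stored lowercase
    # Assumption: utils/<func_name>.py defines a function of the same name.
    module = importlib.import_module(f"utils.{func_name}")
    return getattr(module, func_name)()
```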
This approach has several benefits:
- Automation: no need for manual updates to the resource mapping; the script automatically keeps the mapping up to date by scanning the `utils/` directory.
- Consistency: the mapping always matches the utility scripts actually present in the project.
- Scalability: as new GCP resources and utility scripts are added, the script accommodates them without requiring additional manual steps.
Here’s an example of what the generated `gcp_resource_mapping.json` might look like:
```json
{
  "cloud storage bucket": "create_cloud_storage_bucket",
  "bigquery dataset": "create_bigquery_dataset",
  "pubsub topic": "create_pubsub_topic",
  "cloud function": "create_cloud_function"
}
```