GOOGLE CLOUD RUN GUIDE

Foreword

Google Cloud Run is the evolution of Google App Engine. As of 2020, I would suggest starting to replace GAE with GCR. GAE is one of the oldest Google Cloud services and also one of the first truly serverless solutions (it was released in 2008, while AWS Lambda was released in 2014).

With GAE, you deploy code to a serverless infrastructure that only supports a few stacks (NodeJS, Java, Python, Go ...). Unlike a serverless function (i.e., Google Cloud Function or AWS Lambda), the code is hosted inside a server that is fully managed by Google and supports multiple concurrent requests (80 by default). The server is billed hourly.

With GCR, you build a Docker image via Google Cloud Build and then deploy a container to a serverless infrastructure, which means there are no constraints on the supported stacks (it also means the container can embed tools such as git instead of being limited to deploying code only). Like GAE, each container supports concurrent requests (80 by default), with the extra ability to configure that concurrency (min. 1). As for billing, it differs from GAE and is similar to the serverless functions model, which incurs cost based on the following combination:

  • Requests per month
  • CPU time per month (rounded to the nearest 100ms)
  • Memory time per month (rounded to the nearest 100ms)

Full pricing table here

IMHO, there is no compelling reason to keep choosing App Engine, although there is also nothing wrong with continuing to use GAE. That being said, the industry is moving towards containers and portable code. In that regard, I classify App Engine as a Cloud 1.0 technology while GCR is part of the Cloud 2.0 family (i.e., multi-cloud portable solutions). As for Docker, for those readers who feel they have to learn yet another new skill, your fears are misplaced. Though Docker is the killer feature that makes Google Cloud Run work, the only extra step required to deploy to GCR is a Dockerfile in your root folder. You don't even need to install Docker or become a Docker expert. However, as with everything in tech, knowing more about it will eventually be worth its weight in gold.

Getting started

Prerequisites

  1. Billing must be enabled on the target project.
  2. The following services must be enabled on the target project (run gcloud services list):
    • cloudbuild.googleapis.com
    • containerregistry.googleapis.com
    • run.googleapis.com
    • secretmanager.googleapis.com (Optional. Only required if you wish to store secrets and use them in Cloud Build)

The typical initial NodeJS setup

  1. Create a normal NodeJS project.
  2. Add a Dockerfile in the project root (for an example, please refer to the Simple NodeJS Dockerfile and .dockerignore files example in my Docker guide).
  3. Create a new project using GCloud
  4. Set that project as the default:
    gcloud config set project <PROJECT ID>
    
  5. Make sure that:
    • Billing is enabled for this project.
    • The following services have been enabled on this project (run gcloud services list):
      • cloudbuild.googleapis.com (Google Cloud Build)
      • containerregistry.googleapis.com (Google Cloud Container Registry)
      • run.googleapis.com (Google Cloud Run)

    To learn how to manage project services, please refer to this document.

  6. Build your image and push it to the Google Container Registry:
    gcloud builds submit --tag gcr.io/<PROJECT ID>/<IMAGE NAME>
    
    Which is equivalent to gcloud builds submit --tag gcr.io/<PROJECT ID>/<IMAGE NAME>:latest (1). Alternatively, you can also add a Docker tag to help version your images:
    gcloud builds submit --tag gcr.io/<PROJECT ID>/<IMAGE NAME>:<TAG NAME>
    
  7. Deploy your container from that image into a SERVICE:
    gcloud run deploy <SERVICE NAME> --region <REGION> --image gcr.io/<PROJECT ID>/<IMAGE NAME> --platform managed
    

    <SERVICE NAME> and <REGION>(2) are optional. If you don't provide them, you'll be prompted for them in the terminal. In a CI/CD environment, it is recommended to set them explicitly, otherwise the deployment will hang until an answer is provided. By default, gcloud run deploy deploys a service that cannot be publicly accessed. If the service must be publicly accessible, add the --allow-unauthenticated flag. More about this topic in the Authentication section.

There are many additional ways to configure the service (e.g., memory, concurrency, environment variables). To learn more about those configurations, please jump to the Configure your Cloud Run Service section below.

(1) Running this command with no <TAG NAME> (which is equivalent to a tag name set to latest) moves the latest tag to the new image; the previously tagged image then shows up untagged (-) in the registry. (2) For a list of all supported regions, please refer to the annex section Supported regions.
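
For illustration, assuming a hypothetical project ID my-project and an image named my-api, the two deployment commands above could look like this:

# Build the image with Cloud Build and push it to the Container Registry
gcloud builds submit --tag gcr.io/my-project/my-api:0.0.1

# Deploy that image as a publicly accessible Cloud Run service in the Sydney region
gcloud run deploy my-api --region australia-southeast1 --image gcr.io/my-project/my-api:0.0.1 --platform managed --allow-unauthenticated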

Automating the deployments with NPM scripts

Though you can deploy your project manually and automate it with NPM scripts, over time the best practice is to set up a CI/CD pipeline as explained in the CI/CD pipeline section.

Once steps 1 to 3 above are done, the remaining steps (setting the default project, building the image, and deploying the container) can be automated with NPM scripts as follows:

  1. Install standard-version to help version your NodeJS project:
    npm i standard-version --save-dev
    
  2. Create a new .test.env file in your root folder as follows:
    #!/bin/sh
    
    export PROJECT_ID=<YOUR PROJECT ID>
    export IMAGE=<YOUR IMAGE NAME>
    export REGION=<YOUR REGION>
    

    For a list of all the available regions, check the Supported regions section.

  3. Add the following scripts in your package.json:
    "scripts": {
    	"set:project": "gcloud config set project $PROJECT_ID",
    	"deploy:image": "gcloud builds submit --tag gcr.io/$PROJECT_ID/$IMAGE:$npm_package_version",
    	"deploy:container": "gcloud run deploy $IMAGE --region $REGION --image gcr.io/$PROJECT_ID/$IMAGE:$npm_package_version --platform managed",
    	"deploy": "source .test.env && npm run set:project && npm run deploy:image && npm run deploy:container",
    	"rls": "standard-version --release-as",
    	"v": "echo $npm_package_version"
    }
    

When you wish to deploy, simply follow these steps:

  1. Check the current app version:
    npm run v
    
  2. Bump the version:
    npm run rls 0.0.2
    
  3. Deploy:
    npm run deploy
    
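Putting it all together, a typical release cycle (the version numbers are purely illustrative) looks like this:

npm run v             # prints the current version, e.g., 0.0.1
npm run rls 0.0.2     # bumps package.json to 0.0.2 via standard-version
npm run deploy        # builds gcr.io/$PROJECT_ID/$IMAGE:0.0.2 and deploys it to Cloud Run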

Configure your Cloud Run Service

https://cloud.google.com/run/docs/configuring/memory-limits

Memory

WARNING: As per usual with GCloud, make sure the current active config is set with the correct project ID.

By default, the memory allocated to a service is 256MiB. You can configure this setting when the container is deployed to the service:

gcloud run deploy <SERVICE NAME> --image gcr.io/PROJECT-ID/helloworld --platform managed --memory <SIZE>

Or by updating a running service:

gcloud run services update <SERVICE NAME> --memory <SIZE>

The <SIZE> can be:

  • <X>Gi
  • <X>Mi
  • <X>Ki

The max memory for the managed mode (as opposed to deployments to Kubernetes) is 2Gi. To estimate what you need to provision, use this formula: <standard memory> + <memory per request> * <concurrency level>, where the concurrency level is the maximum number of requests that a single container instance can handle before a new instance is spawned. The default is 80.
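
For example (the numbers are purely illustrative), a NodeJS service with a ~200MiB baseline footprint and roughly 2MiB of extra memory per in-flight request at the default concurrency of 80 needs about 200 + 2 * 80 = 360MiB, so provisioning the next tier up leaves comfortable headroom:

gcloud run services update <SERVICE NAME> --memory 512Mi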

Environment variables

WARNING: As per usual with GCloud, make sure the current active config is set with the correct project ID.

You can configure this setting when the container is deployed to the service:

gcloud run deploy <SERVICE NAME> --image gcr.io/PROJECT-ID/helloworld --platform managed --update-env-vars KEY1=VALUE1,KEY2=VALUE2

Or by updating a running service:

gcloud run services update <SERVICE NAME> --update-env-vars KEY1=VALUE1,KEY2=VALUE2
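
Two related flags are worth knowing: --set-env-vars replaces the entire set of variables (anything not listed is removed), while --remove-env-vars deletes individual keys. A quick sketch (the service name is a placeholder):

gcloud run services update <SERVICE NAME> --set-env-vars KEY1=VALUE1,KEY2=VALUE2

gcloud run services update <SERVICE NAME> --remove-env-vars KEY1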

Region

WARNING: As per usual with GCloud, make sure the current active config is set with the correct project ID.

You can configure this setting when the container is deployed to the service:

gcloud run deploy <SERVICE NAME> --region asia-east1 --image gcr.io/PROJECT-ID/helloworld --platform managed

For a list of all supported regions, please refer to the annex section Supported regions.
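
Optionally, if you always deploy to the same region, you can store a default region in the active gcloud config so that --region can be omitted from subsequent commands (the region below is only an example):

gcloud config set run/region australia-southeast1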

Google Cloud Build

Even though it might not be obvious, Google Cloud Build plays a critical part in your Google Cloud Run deployments. When we deployed our first NodeJS project earlier, we used these two commands:

gcloud builds submit --tag gcr.io/<PROJECT ID>/<IMAGE NAME>:<TAG NAME>

Then

gcloud run deploy <SERVICE NAME> --region <REGION> --image gcr.io/<PROJECT ID>/<IMAGE NAME> --platform managed

The first command actually has nothing to do with Google Cloud Run. It asks Google Cloud Build to build a new Docker image containing your project and then store it in Google Cloud Container Registry. The second command is the one that deploys a new container to Google Cloud Run.

As your workflows become more advanced, you may need to add more configuration steps as part of your build (e.g., managing environment variables, pulling private artifacts, ...). To manage those more advanced steps, Google Cloud Build uses a declarative approach via a cloudbuild.yaml file.

When adding a cloudbuild.yaml file inside your project's root folder, the first command above is replaced with:

gcloud builds submit --config cloudbuild.yaml

In reality, when a Dockerfile is used without a cloudbuild.yaml, it is equivalent to having a default cloudbuild.yaml set up as follows:

steps:
- name: 'gcr.io/cloud-builders/docker'
  args: [ 'build', '-t', 'gcr.io/${PROJECT_ID}/<IMAGE NAME>:<TAG NAME>', '.' ]
images:
- 'gcr.io/${PROJECT_ID}/<IMAGE NAME>:<TAG NAME>'

Where ${PROJECT_ID} is automatically injected.

More details in the official example at https://cloud.google.com/cloud-build/docs/quickstart-build.

IMPORTANT: Though the syntax technically supports both ${PROJECT_ID} and $PROJECT_ID, I've noticed that there are situations where $PROJECT_ID fails. Therefore, I suggest sticking with ${PROJECT_ID}.

cloudbuild.yaml

This file allows you to add sequential steps to your build process. Conceptually, Google Cloud Build works as follows:

  • Define steps (executed serially by default) to automate your project (i.e., build, test, deploy, branch, ...).
  • Each step runs in a Docker container. This means that each step uses a predefined image that contains the tools you need to perform your automation (e.g., one containing NPM and NodeJS). A step is usually quite small and is equivalent to executing a CLI tool.
  • By default, each step is executed in a working directory called /workspace (this can be configured via the dir field) that is persisted during the entire build process. This is how the assets generated by each step can be passed to the next one throughout the build process.

Official doc located at https://cloud.google.com/cloud-build/docs/build-config

Let's have a look at an example:

steps:
- name: gcr.io/cloud-builders/gcloud
  entrypoint: 'bash'
  args: [ '-c', "gcloud secrets versions access latest --secret=GITHUB_PERSONAL_ACCESS_TOKEN > GITHUB_PERSONAL_ACCESS_TOKEN.txt" ]
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker build --build-arg GITHUB_PERSONAL_ACCESS_TOKEN="$(cat GITHUB_PERSONAL_ACCESS_TOKEN.txt)" -t gcr.io/${PROJECT_ID}/yourimage .']
images:
- 'gcr.io/${PROJECT_ID}/yourimage'

Step 1

Executes a gcloud command (requires the gcr.io/cloud-builders/gcloud builder) using bash (requires the bash entrypoint). The command is executed as a string (requires the -c option). This step gets a secret and stores it in a file called GITHUB_PERSONAL_ACCESS_TOKEN.txt in the default workspace directory so that the next step can use it.

Step 2

Builds the Docker image (requires the gcr.io/cloud-builders/docker builder), again via bash, so that the secret read from GITHUB_PERSONAL_ACCESS_TOKEN.txt can be injected into the build as the GITHUB_PERSONAL_ACCESS_TOKEN build argument (the Dockerfile is expected to declare that ARG).

The images field

Finally, the images field tells Cloud Build to push the built image to Google Cloud Container Registry once all the steps have succeeded.

Substitution variables

Original doc at https://cloud.google.com/cloud-build/docs/configuring-builds/substitute-variable-values

Standard

Out-of-the-box, the cloudbuild.yaml supports a series of variables that can be used to facilitate reusability and flexibility. Cloud Build provides the following default substitutions for all builds:

  • ${PROJECT_ID}: ID of your Cloud project.
  • ${BUILD_ID}: ID of your build.

If a build is invoked by a trigger, Cloud Build supports those extra variables:

  • ${COMMIT_SHA}: the commit ID associated with your build.
  • ${REVISION_ID}: the commit ID associated with your build.
  • ${SHORT_SHA}: the first seven characters of COMMIT_SHA.
  • ${REPO_NAME}: the name of your repository.
  • ${BRANCH_NAME}: the name of your branch.
  • ${TAG_NAME}: the name of your tag.

Custom

It is possible to define custom substitution variables with the substitutions property:

substitutions:
  _SERVICE_NAME: your-service-name
  _REGION: australia-southeast1
  _SERVICE_IMG: gcr.io/${PROJECT_ID}/${_SERVICE_NAME}:v1
steps:
# Create the Docker image with the project
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'docker build -t $_SERVICE_IMG .']
# Push the container image to Container Registry
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', '$_SERVICE_IMG']
# Deploy container image to Cloud Run
- name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
  entrypoint: gcloud
  args: ['run', 'deploy', '$_SERVICE_NAME', '--image', '$_SERVICE_IMG', '--region', '$_REGION', '--platform', 'managed']
options:
  dynamic_substitutions: true
images:
- '$_SERVICE_IMG'

Notice the dynamic_substitutions: true option. Without it, it is not possible to use nested substitution (e.g., _SERVICE_IMG: gcr.io/${PROJECT_ID}/${_SERVICE_NAME}:v1).

IMPORTANT: You MUST prefix a custom substitution variable with an underscore (_).
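
Custom substitution values can also be overridden at submission time without editing the cloudbuild.yaml, which is handy when the same build file targets multiple environments (the values below are placeholders):

gcloud builds submit --config cloudbuild.yaml --substitutions=_SERVICE_NAME=my-service,_REGION=us-central1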

CI/CD pipeline

Automating deployments

IMHO, a better approach is to combine Pulumi with GitHub Actions to create a safer, more reliable and overall faster CI/CD pipeline. I've documented that setup here.

  1. Make sure that both Cloud Build and Cloud Run are enabled in your project. If not, enable them as follows:
    gcloud services enable cloudbuild.googleapis.com run.googleapis.com
    
  2. Allow Cloud Build to deploy to Cloud Run:
    1. Find the Cloud Build service account project member:
      gcloud projects get-iam-policy <PROJECT ID>		
      
    2. Add the roles/run.admin and roles/iam.serviceAccountUser roles to the Cloud Build service account project member:
      export PROJECT_ID=your-project-id && \
      export MEMBER_ID=serviceAccount:<CLOUD BUILD SERVICE ACCOUNT EMAIL FROM STEP 1> && \
      gcloud projects add-iam-policy-binding $PROJECT_ID --member=$MEMBER_ID --role='roles/run.admin' && \
      gcloud projects add-iam-policy-binding $PROJECT_ID --member=$MEMBER_ID --role='roles/iam.serviceAccountUser'
      
  3. Add a cloudbuild.yaml file in your project's root folder to automate the project's build and deployment:
    substitutions:
      _SERVICE_NAME: your-service-name
      _REGION: australia-southeast1
      _SERVICE_IMG: gcr.io/${PROJECT_ID}/${_SERVICE_NAME}:v1
    steps:
    # Create the Docker image with the project
    - name: 'gcr.io/cloud-builders/docker'
      entrypoint: 'bash'
      args: ['-c', 'docker build -t $_SERVICE_IMG .']
    # Push the container image to Container Registry
    - name: 'gcr.io/cloud-builders/docker'
      args: ['push', '$_SERVICE_IMG']
    # Deploy container image to Cloud Run
    - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
      entrypoint: gcloud
      args: ['run', 'deploy', '$_SERVICE_NAME', '--image', '$_SERVICE_IMG', '--region', '$_REGION', '--platform', 'managed']
    options:
      dynamic_substitutions: true
    images:
    - '$_SERVICE_IMG'
  4. Add a GitHub trigger to Cloud Build. Log in to the Google Cloud Console at https://console.cloud.google.com/, select your project, open Cloud Build and then select Triggers. Once you're there, the setup is quite straightforward.

Using private GitHub packages

Tl;dr, you must set up the ~/.npmrc file with a GitHub personal access token that can access the organization's private packages. This section demonstrates an unsafe approach and a safer approach to set this up.

In this use case, one or many NPM packages are hosted privately on GitHub Packages. Your JS project hosted on Google Cloud Run depends on those packages, and it requires a ~/.npmrc file set up with the right GitHub personal access token in order to install them. This section describes an unsafe approach that demonstrates the concepts. Once the basic concepts are covered, a safer approach is developed.

Unsafe approach

Let's assume that we have a simple NodeJS project with, in its root folder, a .npmrc file that points to our private GitHub Packages registry (more details about that type of project here). The only missing bit is to configure the build process so that the ~/.npmrc file contains our GitHub access token. This can be done by updating the Dockerfile from:

FROM node:12-slim

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install --only=prod

COPY . ./

CMD npm start

to

FROM node:12-slim

WORKDIR /usr/src/app

RUN echo "//npm.pkg.github.com/:_authToken=123456789" >> ~/.npmrc
COPY .npmrc ./

COPY package*.json ./

RUN npm install --only=prod

COPY . ./

CMD npm start

Those extra two lines authorize npm install to access the private GitHub packages:

RUN echo "//npm.pkg.github.com/:_authToken=123456789" >> ~/.npmrc
COPY .npmrc ./

Obviously, our very sensitive token (123456789) is now exposed in our project. Let's fix that in the next section.

Safe approach

The safe approach is quite simple to understand. Instead of hardcoding the Github personal access token in the Dockerfile, we store that secret in Google Secret Manager and inject it via a build step using a new cloudbuild.yaml file.

  1. Enable the Cloud Secret Manager API and allow Cloud Build to access it:
    1. Enable the Secret Manager API:

      gcloud services enable secretmanager.googleapis.com
      
    2. Find the project's Cloud Build member ID (it looks like serviceAccount:<PROJECT NUMBER>@cloudbuild.gserviceaccount.com):

      gcloud projects get-iam-policy <PROJECT ID>
      

      To check the current active project, use gcloud config list

    3. Add the following role on that member:

      gcloud projects add-iam-policy-binding <PROJECT ID> --member='<MEMBER ID>' --role='roles/secretmanager.secretAccessor'
      
  2. Store your GitHub personal access token in Google Secret Manager:
    echo YOUR_GITHUB_PERSONAL_ACCESS_TOKEN | gcloud secrets create GITHUB_PERSONAL_ACCESS_TOKEN --data-file=-
    
  3. Modify your Dockerfile to replace the hardcoded token with an argument passed to the docker build ... command. Replace
    FROM node:12-slim
    WORKDIR /usr/src/app
    RUN echo "//npm.pkg.github.com/:_authToken=123456789" >> ~/.npmrc
    # Rest of the file...
    
    with
    FROM node:12-slim
    ARG GITHUB_PERSONAL_ACCESS_TOKEN
    WORKDIR /usr/src/app
    RUN echo "//npm.pkg.github.com/:_authToken=$GITHUB_PERSONAL_ACCESS_TOKEN" >> ~/.npmrc
    # Rest of the file...
    

    Notice that to use the $GITHUB_PERSONAL_ACCESS_TOKEN variable, you must declare a new ARG GITHUB_PERSONAL_ACCESS_TOKEN immediately after the FROM instruction.

  4. Add a cloudbuild.yaml file in your project's root folder similar to:
    steps:
    - name: gcr.io/cloud-builders/gcloud
      entrypoint: 'bash'
      args: [ '-c', "gcloud secrets versions access latest --secret=GITHUB_PERSONAL_ACCESS_TOKEN > GITHUB_PERSONAL_ACCESS_TOKEN.txt" ]
    - name: 'gcr.io/cloud-builders/docker'
      entrypoint: 'bash'
      args: ['-c', 'docker build --build-arg GITHUB_PERSONAL_ACCESS_TOKEN="$(cat GITHUB_PERSONAL_ACCESS_TOKEN.txt)" -t gcr.io/$PROJECT_ID/yourimage .']
    images:
    - 'gcr.io/$PROJECT_ID/yourimage'
    
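Once those four steps are in place, the build is submitted with the --config flag as shown earlier. The sketch below (reusing the secret name from step 2) is a quick way to verify the setup; note that the first command checks your own access to the secret, not Cloud Build's:

gcloud secrets versions access latest --secret=GITHUB_PERSONAL_ACCESS_TOKEN

gcloud builds submit --config cloudbuild.yaml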

Authentication

Original doc at https://cloud.google.com/run/docs/authenticating/overview

Public access

By default, a service deployed to Cloud Run is private. This means that no public request can reach that service. This can be changed either by redeploying the service as follows:

gcloud run deploy SERVICE_NAME ... --allow-unauthenticated

Or by updating the service as follows:

gcloud run services add-iam-policy-binding SERVICE_NAME --member="allUsers" --role="roles/run.invoker"
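
Conversely, to make a public service private again, the same binding can be removed (a sketch mirroring the command above):

gcloud run services remove-iam-policy-binding SERVICE_NAME --member="allUsers" --role="roles/run.invoker"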

Protected services & service-to-service access

The only way to communicate with a Cloud Run service is via HTTP. When a Cloud Run service is protected (the default setup), an OIDC ID token must be passed in the Authorization header of the HTTP request. Though that token is also prefixed with the Bearer scheme, it is not the same as an Access token. This difference is poorly documented in the Google Cloud documentation and creates a lot of confusion. In a nutshell, when dealing with Google's own web APIs, use the short-lived access token associated with your service account. When your own web APIs are hosted and automatically protected by Google Cloud (e.g., Cloud Function, Cloud Run), use a short-lived ID token (to learn more about the technical difference between those two types of OpenID tokens, please refer to the Open ID access token vs ID token section in the Annex). Both tokens can be acquired with the google-auth-library package.

The following example shows how to retrieve that ID token. This token represents the identity of the service account associated with the client. That service account must have access to the target Cloud Run service (which means it must have the roles/run.invoker role for that Cloud Run service), otherwise, even though the ID token is provided, the request will still fail.

const co = require('co')
const { GoogleAuth } = require('google-auth-library')

const url = 'https://your-cloud-run-service-1234-uc.a.run.app' // In Google Cloud ID token jargon, this URL is called the 'audience'

co(function *() {
	const auth = new GoogleAuth({
		credentials: {
			client_email: process.env.SERVICE_ACCOUNT_CLIENT_EMAIL,
			private_key: process.env.SERVICE_ACCOUNT_PRIVATE_KEY.replace(/\\n/g, '\n')
		}
	})
	const client = yield auth.getIdTokenClient(url)
	const { headers } = yield client.getRequestMetadataAsync()
})

Where headers is similar to:

{
  headers: {
    Authorization: 'Bearer <ID token>'
  }
}

Protected services & end-users

There are three ways to secure your end users' access to a Cloud Run service:

  1. Using Google-signin and grant the user the roles/run.invoker role (more details at https://cloud.google.com/run/docs/authenticating/end-users#google-sign-in).
  2. Make your Cloud Run service public and manually verify each request. Though this is an official recommended suggestion from Google, this sucks at multiple levels:
    1. You need to manage the user authentication yourself.
    2. You are billed for each unauthenticated request.
  3. Use a trick that leverages the ability of service accounts to acquire safe ID tokens. This is an undocumented trick that I came up with while searching for a better solution than #2. This section details this approach. Its advantages are:
    1. You don't have to verify your users in your Cloud Run service. You can simply trust the ID token and its claims (incl. custom claims).
    2. You are not charged for unauthenticated requests.

This third approach is what we're going to explore in the rest of this section, as the first two are quite straightforward and well documented on the internet.

The strategy is similar to service-to-service authentication. A new service account is created just for end-user authentication. Let's call it the user service account. That user service account is granted the roles/run.invoker role so it can request a valid ID token to access the protected Cloud Run service. However, service accounts are not able to acquire Cloud Run ID tokens with custom claims. Instead, they can acquire Cloud Run ID tokens with the following fixed claims:

{
  aud: 'https://your-cloud-run-id.a.run.app',
  azp: 'your-user-service-account-email',
  email: 'your-user-service-account-email',
  email_verified: true,
  exp: 1597709834,
  iat: 1597706234,
  iss: 'https://accounts.google.com',
  sub: '123456789'
}

Luckily, there is a hack. When that ID token is requested, an audience must be passed. That audience is the URL of the protected Cloud Run service, and that URL is returned in the aud claim. However, the audience is not strictly limited to the exact URL of the protected Cloud Run service; it also supports any variation of that URL with a pathname and search params. For example:

const jwt = require('jsonwebtoken')
// 'auth' is the GoogleAuth instance configured as in the previous example; the code below runs inside an async function

const client = await auth.getIdTokenClient('https://your-cloud-run-id.a.run.app/somepath?hello=world')
const { headers } = await client.getRequestMetadataAsync()
console.log(jwt.decode(headers.Authorization.replace('Bearer ', '')))

Outputs something similar to this:

{
  aud: 'https://your-cloud-run-id.a.run.app/somepath?hello=world',
  azp: ...
  ...
}

That ID token is also a valid token that grants access to the protected https://your-cloud-run-id.a.run.app Cloud Run service.

The hack to pass custom claims is to pass them in either the pathname or the search params. My preferred technique is to base64-encode my JSON claims as follows:

const claims = Buffer.from(JSON.stringify({
	hello: 'world',
	userId: 123
})).toString('base64')

const client = await auth.getIdTokenClient(`https://your-cloud-run-id.a.run.app?claims=${claims}`)

FAQ

How to configure a project in the current active GCloud config?

Check what's the current active config:

gcloud config list

If that config does not show the correct project ID, change it as follows:

gcloud config set project <PROJECT ID>

If you don't know the exact ID of the project, list all your projects as follow:

 gcloud projects list

To learn more about GCloud, please refer to my GCLOUD CLI GUIDE.

How to locally connect to a protected Cloud Run service?

If you're a user with enough GCP privileges and you're logged in to the GCloud CLI, then you can generate a short-lived ID token and use it in your HTTP requests to the protected Cloud Run service. One approach is to set it up as an environment variable in your package.json as follows:

	"scripts": {
		"set_dev_id_token": "ID_TOKEN=\"$(gcloud auth print-identity-token)\"",
		"start": "npm run set_dev_id_token node index.js"
	}

When you run npm start, you'll be able to use the ID token via process.env.ID_TOKEN.
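
For a quick one-off test without NodeJS, the same short-lived token can also be passed directly to curl (the service URL is a placeholder):

curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://your-cloud-run-service-1234-uc.a.run.app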

Annex

Supported regions

As of June 2020, Google Cloud Run is not yet available in every GCP region. The current list of supported regions is:

  • asia-east1 (Taiwan)
  • asia-northeast1 (Tokyo)
  • europe-north1 (Finland)
  • europe-west1 (Belgium)
  • europe-west4 (Netherlands)
  • us-central1 (Iowa)
  • us-east1 (South Carolina)
  • us-east4 (Northern Virginia)
  • us-west1 (Oregon)
  • asia-southeast1 (Singapore)
  • australia-southeast1 (Sydney)
  • northamerica-northeast1 (Montreal)

The official list may be more up-to-date: https://cloud.google.com/run/docs/locations

Pricing

GCR pricing is different from GAE's (pay per server-hour) and similar to the serverless functions model, which incurs cost based on the following combination:

  • Requests per month
  • CPU time per month (rounded to the nearest 100ms)
  • Memory time per month (rounded to the nearest 100ms)

Both GCR and GAE offer free quotas. I suspect GCR could be slightly cheaper in certain circumstances as the CPU and memory time is rounded to the nearest 100ms, while App Engine is billed per hour. With GAE, the first request spawns a new instance that stays alive for a certain amount of time even after the request has returned. GAE does that to reduce the performance penalty of cold starts. Though no other requests might hit that server, you are still charged for it. With GCR, you only pay for resources consumed within the request lifetime.

Open ID access token vs ID token

In the OIDC protocol:

  • An ID token is a JWT that contains explicit claims about the agent's identity. It is an optimization strategy to access identity data quickly without having to execute a lookup in a slower persistent storage. ID tokens are used for authentication, not for resource access. They do not define the concept of scopes.
  • An Access token can be a JWT, but this is not a requirement. It is used as a bearer token to validate resource access. That validation is done via scopes. The only piece of identity that an Access token contains is the agent's ID.

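Both token types can be printed from the GCloud CLI, which makes the difference easy to inspect in practice (assuming you're logged in; decoding the second one, e.g., at jwt.io, reveals its identity claims):

# OAuth2 access token, used to call Google's own web APIs
gcloud auth print-access-token

# OIDC ID token, a JWT carrying identity claims, used to call your own services protected by Google Cloud (e.g., Cloud Run)
gcloud auth print-identity-token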