#!/bin/sh
# e.g. CONTAINER_REGISTRY=asia.gcr.io/your-project-name/gcf/asia-northeast1
CONTAINER_REGISTRY='WRITE YOUR REGISTRY NAME'
IMAGE_LIST=`gcloud container images list --repository=$CONTAINER_REGISTRY | awk 'NR!=1'`
for line in $IMAGE_LIST; do
  gcloud container images delete "$line/worker" --quiet & gcloud container images delete "$line/cache" --quiet &
done
wait
It works fine for my production setup:
- Run the script above
- Re-deploy production as part of the normal deployment flow (see the sketch below)
I'm not sure the images stay small afterwards, though. They grow again over time.
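For reference, the re-deploy step typically looks like this (a minimal sketch; it assumes your functions are deployed with the Firebase CLI, otherwise use whatever deployment flow you normally run):

# Re-deploy all Cloud Functions so fresh images are pushed to the registry
firebase deploy --only functions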
Sorry for the noob question, but... how should I run this script? Is it just a simple bash script? Do I copy/paste it into any folder on my PC and execute it with a double click?
@EricBattle
You need to configure gcloud before running it. Look at the first line of the script: it's #!/bin/sh, so you have to run it with sh/bash/zsh.
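Roughly, that means something like this (a minimal sketch; the project ID and file name are placeholders):

# Authenticate and point gcloud at your project
gcloud auth login
gcloud config set project your-project-id
# Save the gist as cleanup.sh and run it with a POSIX shell
sh cleanup.sh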
Thx @contributorpw! So should I execute those lines in Cloud Shell from Google Cloud Platform?
@contributorpw Thank you for answering.
@EricBattle
It doesn't matter whether you use Cloud Shell or a local machine, as long as you can run the gcloud command.
https://cloud.google.com/sdk/docs/how-to
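A quick way to check that gcloud is installed and ready (a minimal sketch):

# Confirm you are authenticated and pointed at the right project
gcloud auth list
gcloud config get-value project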
I could see that many of the images were tagged with the latest tag or with the names of my functions, but there were also a lot of untagged images in the same folders that I couldn't get rid of just by re-deploying with the latest Firebase CLI or by using the script.
So I ended up with this:
#!/bin/sh
# e.g. CONTAINER_REGISTRY=asia.gcr.io/your-project-name/gcf/asia-northeast1
CONTAINER_REGISTRY='WRITE YOUR REGISTRY NAME'
LIMIT='unlimited' # change to LIMIT=1 if you want to test small
# DRY_RUN=1 # uncomment this to only list, but not delete
IMAGE_LIST=`gcloud container images list --repository=$CONTAINER_REGISTRY --limit=$LIMIT --format="get(name)"`
for image in $IMAGE_LIST; do
  echo "Image 1: $image"
  DIGEST_LIST=`gcloud container images list-tags $image --format="get(digest)"`
  for digest in $DIGEST_LIST; do
    echo " -> Digest: $digest"
    if [ -z "$DRY_RUN" ]; then
      gcloud container images delete $image@$digest --force-delete-tags --quiet > /dev/null 2>&1
    fi
  done
  SUB_LIST=`gcloud container images list --repository=$image --format="get(name)"`
  for sub in $SUB_LIST; do
    echo " Image 2: $sub"
    DIGEST_LIST=`gcloud container images list-tags $sub --format="get(digest)"`
    for digest in $DIGEST_LIST; do
      echo " -> Digest: $digest"
      if [ -z "$DRY_RUN" ]; then
        gcloud container images delete $sub@$digest --force-delete-tags --quiet > /dev/null 2>&1
      fi
    done
  done
done
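To try it out safely, set LIMIT=1 and uncomment DRY_RUN=1 at the top first; the script will then only print the images and digests it would delete. Afterwards you can check that the folder is empty (registry path is the same placeholder as above):

# Should print no images once the cleanup has run
gcloud container images list --repository=asia.gcr.io/your-project-name/gcf/asia-northeast1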
Hope it helps. Thanks @kichiemon.
Does anyone know where I can find the CONTAINER_REGISTRY name? :(
@EricBatlle Find Container Registry in the Cloud Console for your project, then navigate down the full path and you can copy-paste it from the breadcrumb navigator :)
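Alternatively, a sketch of finding it from the command line (the project ID is a placeholder; the host may be gcr.io, us.gcr.io, eu.gcr.io or asia.gcr.io depending on where your functions are deployed):

# List the gcf folders in your project's registry; the printed path is your CONTAINER_REGISTRY
gcloud container images list --repository=gcr.io/your-project-id/gcf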
@contributorpw Just to confirm before I break our whole production backend: can I safely remove all of the images/containers to clean up the function mess created by pre-9.14 firebase-cli? Do I NEED to redeploy the functions after this, or are you just saying that the containers are lighter when new functions are deployed?