@djeikyb
Last active May 17, 2022
Shell into an AWS cloud docker container (ecs / fargate) without ssh

Like docker exec or docker compose exec!

You want to think of it a little differently than the docker version, though. With docker, you can exec into any container managed by your own docker engine. With AWS, you can't: you have to start a new, exec-capable container whenever you need to debug a docker image.

When to use this

You published a new docker image, and something goes wrong in containers created from that image. ECS doesn't start containers with the exec capability by default, which is probably good defense-in-depth practice. And remember, containers are ephemeral, so the exec-enabled container will be replaced sooner or later by the normal fargate service-task lifecycle.
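
If you want to check whether a running task already has the capability, `describe-tasks` reports it. A quick sketch, reusing the cluster and task id from the examples in this gist (substitute your own):

```shell
# Prints true if the task was started with exec enabled, false otherwise.
aws ecs describe-tasks \
  --cluster dev \
  --tasks d6c3248b88e547c89f52ea6af605f117 \
  --query 'tasks[0].enableExecuteCommand'
```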

Steps

First, enable the execute-command feature:

aws ecs update-service \
  --cluster dev \
  --service dev-request-key \
  --enable-execute-command \
  --force-new-deployment \
  --desired-count 1
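
One way to find the task id is `list-tasks`, filtered by the same service name. The task id is the last path segment of each ARN:

```shell
# Lists task ARNs for the service; copy the id off the end of the ARN.
aws ecs list-tasks \
  --cluster dev \
  --service-name dev-request-key \
  --query 'taskArns[]' \
  --output text
```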

Once the new container has started, find the task id and run:

aws ecs execute-command \
  --cluster dev \
  --container dev-request-key \
  --interactive \
  --task d6c3248b88e547c89f52ea6af605f117 \
  --command bash
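
If the exec session won't connect, one thing worth checking (an assumption on my part, based on the `describe-tasks` output shape) is whether the ExecuteCommandAgent managed agent is actually RUNNING in the task:

```shell
# Shows the status of the ECS Exec managed agent for each container in the task.
aws ecs describe-tasks \
  --cluster dev \
  --tasks d6c3248b88e547c89f52ea6af605f117 \
  --query 'tasks[0].containers[].managedAgents[?name==`ExecuteCommandAgent`].lastStatus' \
  --output text
```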

The first time you run the execute-command command, it might fail: I had to install the AWS CLI Session Manager plugin.
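
To check whether the plugin is already installed, you can invoke the binary directly; if it's on your PATH it prints a short install-check message:

```shell
# Prints a success message if the Session Manager plugin is installed.
session-manager-plugin
```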

Cribbed from the official docs and countless other search results that lived ephemeral browser tab lives.
