Like docker exec or docker compose exec!
You want to think of it a little differently than the Docker version, though. With Docker, you can exec into any container managed by your own Docker engine. With AWS, you can't: you have to start an exec-capable container whenever you need to debug a Docker image.
Say you published a new Docker image and something goes wrong in containers made from it. ECS doesn't start containers with the exec capability by default, which is probably good defense-in-depth practice. And remember, containers are ephemeral, so the exec-enabled container will be replaced sooner or later according to the normal Fargate service-task lifecycle.
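If you're not sure whether a given task was started with exec enabled, describe-tasks exposes an enableExecuteCommand field. A minimal check, reusing this post's example cluster (the task id is just a placeholder):

# prints true/false per task
aws ecs describe-tasks \
--cluster dev \
--tasks d6c3248b88e547c89f52ea6af605f117 \
--query 'tasks[].enableExecuteCommand'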
First, enable the execute-command feature on the service. The setting only applies to tasks launched after it's set, which is why the command also forces a new deployment:
aws ecs update-service \
--cluster dev \
--service dev-request-key \
--enable-execute-command \
--force-new-deployment \
--desired-count 1
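Once the deployment goes through, the old task gets replaced by a new, exec-capable one. If you don't have its task id handy, list-tasks will show it (cluster and service names match the update-service call above):

# lists the task ARNs for the service; the id is the last path segment
aws ecs list-tasks \
--cluster dev \
--service-name dev-request-key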
With the new container's task id in hand, run:
aws ecs execute-command \
--cluster dev \
--container dev-request-key \
--interactive \
--task d6c3248b88e547c89f52ea6af605f117 \
--command bash
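If execute-command complains that the agent isn't running yet, one thing worth checking is the status of the ExecuteCommandAgent managed agent on the task. A rough sketch using a JMESPath query (my addition, not from the official walkthrough):

# should report RUNNING once the task is ready for exec
aws ecs describe-tasks \
--cluster dev \
--tasks d6c3248b88e547c89f52ea6af605f117 \
--query "tasks[].containers[].managedAgents[?name=='ExecuteCommandAgent'].lastStatus"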
The first time you run execute-command, it might fail; I had to install the AWS CLI Session Manager plugin first.
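On macOS, I believe the plugin is available as a Homebrew cask (assuming you use Homebrew; other platforms have installers linked from the AWS docs), and running the binary with no arguments should confirm the install:

# cask name as of this writing; see the AWS docs for other platforms
brew install --cask session-manager-plugin
session-manager-plugin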
Cribbed from the official docs and countless other search results that lived ephemeral browser tab lives.