
My favourite base64 string prefix to discover

Hint: it's not LS0t, but close.
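If you want to check my working: LS0t is what you see when something beginning with "-----BEGIN" (a PEM key or certificate) has been base64'd, while eyJhb is the start of a base64url'd JSON object, which is to say, the header of a JWT:

$ printf '%s' '-----BEGIN' | base64
LS0tLS1CRUdJTg==
$ printf '%s' '{"alg":"RS256"' | base64
eyJhbGciOiJSUzI1NiI=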

This is an old one from the archives.

TLDR

AWS EKS was logging ServiceAccount tokens in plaintext: the very same tokens used to call AssumeRoleWithWebIdentity or to connect to the Kubernetes API server.

This occurred for us between March 2020 and May 2021.
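If you ran EKS audit logging during a similar window and want to check your own cluster, a CloudWatch Logs Insights query along these lines against the audit log group (the default is /aws/eks/<cluster-name>/cluster) will surface any JWTs sitting in the logs. Treat it as a rough sketch, not a tuned query:

fields @timestamp, @logStream, @message
| filter @message like "eyJhb"
| sort @timestamp desc
| limit 50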

Discovery

In early 2021, I was tasked with migrating all services at my company to use IAM Roles for Service Accounts. I would be taking over this migration from my colleague, who had completed the pre-work.

While rolling this out in staging, we hit an issue where the change broke a service's ability to access a specific cross-account resource.

For some now-forgotten reason, I decided to fire up CloudWatch Logs and take a look at the EKS cluster audit logs, and what do I find? That magical string... eyJhb...

Hm. This thing looks like a JWT?

I grabbed a copy of the token, and threw it into jwt decode, and yeah, it decodes alright.
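For reference, you don't need any special tooling to eyeball the payload. Something like this does the trick, assuming the token is in $TOKEN and you'll forgive the sloppy padding hack:

$ # grab the middle (payload) segment and translate base64url back to base64
$ payload=$(printf '%s' "$TOKEN" | cut -d. -f2 | tr '_-' '/+')
$ # over-pad, ignore the complaint, and pretty-print the claims
$ printf '%s==' "$payload" | base64 -d 2>/dev/null | jq .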

Obviously, I reached out to our internal security team and we decided to see if we could use it to assume the role of a service, just to be sure.

I grabbed a token with the correct audience to AssumeRoleWithWebIdentity into one of our services' IAM roles, and ran through the process manually:

$ aws sts assume-role-with-web-identity --role-arn <our-role-arn> --role-session-name lnattrass --web-identity-token eyJhb..
{
  "Credentials": {
    "SecretAccessKey": "much",
    "SessionToken": "credentials",
    "AccessKeyId": "present"
  },
  "AmongOtherKeys": {}
}
$ 

Mitigations

We added deny entries for the CloudWatch EKS logs to all human roles that did not have a secondary-approval ("dual control") requirement in production, which just meant that I had to get a colleague to pair with me if we needed to comb through these logs for some reason.
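For illustration, a deny statement along these lines covers the EKS audit logs. The ARN assumes the default /aws/eks/<cluster-name>/cluster log group naming, and the action list is a sketch rather than exactly what we shipped:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyEksAuditLogReads",
      "Effect": "Deny",
      "Action": [
        "logs:GetLogEvents",
        "logs:FilterLogEvents",
        "logs:StartQuery"
      ],
      "Resource": "arn:aws:logs:*:*:log-group:/aws/eks/*/cluster:*"
    }
  ]
}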

Response

We raised the issue with AWS, who acknowledged it and explained that a fix was coming, and that it would take a number of weeks to roll out to all clusters. There was a suggestion of having a CVE assigned.

Eventually, on May 30 at around 6pm EST, we logged our last token, and we didn't hear anything further on it...

Why did this happen?

Prior to Kubernetes v1.15, each ServiceAccount generated a corresponding static Secret, which was mounted into containers that specified the ServiceAccount.

In Kubernetes v1.16, the TokenRequest API was integrated into kubelet, and nodes started to generate short-lived tokens for containers to use.
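Concretely, instead of the old static Secret mount, pods get a projected ServiceAccount token volume that the kubelet populates and rotates via TokenRequest. A minimal sketch (the pod and image names are made up; sts.amazonaws.com is the audience IRSA uses):

apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  serviceAccountName: my-service
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sleep", "3600"]
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              audience: sts.amazonaws.com
              expirationSeconds: 3600
              path: token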

It seems that this change fell through the cracks: the audit log configuration had not been updated to suppress the responses for these endpoints, so AWS EKS began logging them, tokens and all, to CloudWatch.
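On the Kubernetes side, the fix is an audit policy rule that drops request and response bodies for the TokenRequest endpoint. A generic sketch, not EKS's actual configuration:

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record only metadata (no request/response bodies) for TokenRequest calls,
  # so the issued token never lands in the audit log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["serviceaccounts/token"]
  # ...the rest of the cluster's audit rules follow...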

--

Thanks to @aidansteele for the review and the push to share this, and @nicbono for the speling, and, grammar.. suggestions.

Back to my github home
