Amazon EKS Managed Kubernetes Client Authentication

Amazon EKS, the managed Kubernetes service, has very tight integration with AWS IAM (Identity and Access Management).

Among other things, this means that you do not need to worry about issuing Kubernetes credentials to your users. They can use their AWS credentials to obtain an authenticated session with the Kubernetes API.

If you use aws eks update-kubeconfig, it will write a configuration section in your ~/.kube/config file that looks more or less like the following:
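
The relevant user entry looks roughly like this (the ARN, region, and cluster name are placeholders, and newer CLI versions write an aws eks get-token exec command instead of aws-iam-authenticator):

```yaml
users:
- name: arn:aws:eks:us-east-1:111122223333:cluster/test
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - test
```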

When kubectl needs a token to communicate with the given EKS cluster, it invokes aws-iam-authenticator to generate a token.

This is all well and good when you are using kubectl, but what if you are writing software to interact with your EKS cluster? Language-specific Kubernetes clients won't know how to fork this process to obtain credentials, and in some environments they may not be able to.

Inside Kubernetes Cluster

If your software is intended to run inside a pod, stop now. You do not need to mess around with credentials to interact with the Kubernetes control plane if your software is running inside your cluster.

Much like AWS Instance Profiles for EC2 instances, Kubernetes provides some magic to present credentials to pods running in your cluster. Just use a service account with a RoleBinding to a Role (or a ClusterRoleBinding to a ClusterRole) and set the serviceAccountName attribute on your pod spec.
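
As a rough sketch (all of the names here are illustrative), that wiring looks something like this:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: my-app-pod-reader
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: my-app
  namespace: default
---
# ...and reference the service account from the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  namespace: default
spec:
  serviceAccountName: my-app
  containers:
  - name: app
    image: my-app:latest
```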

Outside Kubernetes Cluster

Outside the Kubernetes cluster, it is a bit more confusing. aws-iam-authenticator is great, but what is it actually doing? And can pieces of it be re-implemented in other languages with ease?

The answer is yes, but it isn't exactly obvious, and it isn't especially well documented.

I’ll cover what I learned.

EKS Token Format

The first thing to do is look at what the authenticator generates. Here I am asking aws-iam-authenticator to generate a token for the cluster named test:
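
The command and its output look something like this (token truncated and output pretty-printed for readability):

```
$ aws-iam-authenticator token -i test
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMuYW1hem9uYXdzLmNvbS8_QWN0aW9uPUdldENhbGxlcklkZW50aXR5..."
  }
}
```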

But what did it actually do?

It is a lot more obvious if you base64 decode the portion of the token following k8s-aws-v1.

In my case it looks like this:
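
Reconstructed here with placeholder values for the key ID, date, and signature, and wrapped onto multiple lines for readability:

```
https://sts.amazonaws.com/?Action=GetCallerIdentity
  &Version=2011-06-15
  &X-Amz-Algorithm=AWS4-HMAC-SHA256
  &X-Amz-Credential=AKIAIOSFODNN7EXAMPLE%2F20190101%2Fus-east-1%2Fsts%2Faws4_request
  &X-Amz-Date=20190101T000000Z
  &X-Amz-Expires=60
  &X-Amz-SignedHeaders=host%3Bx-k8s-aws-id
  &X-Amz-Signature=<signature>
```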

Note: a request like this expires almost immediately, so even pasting the real URL would only expose my AWS Key ID; no other information would be compromised.

Ahhh, ha!

It used my AWS credentials to pre-sign an AWS STS GetCallerIdentity API request, and it uses that pre-signed API call as the literal token!

kubectl ends up passing this to the EKS Kubernetes control plane, which hands it off to the aws-iam-authenticator service, which in turn invokes the GET request against AWS STS.

If the GetCallerIdentity request succeeds, then EKS knows:

  1. My AWS IAM identity. Since EKS is already plumbed into AWS IAM, it can easily cross-reference this information in the EKS control plane.
  2. That I am who I say I am.

Clever!

Generating Tokens

Ok, so how do we generate these tokens?

Each of the AWS SDKs has request-signing capabilities. Most of the time they are used internally by the SDK and you don't need to invoke them directly. However, the signing APIs are available for use, even if they are not particularly well documented.

Here is what I did for Java. This code constructs the GetCallerIdentity request, and invokes the AWS4Signer.presignRequest() method to generate the appropriate signature.

I then turn that into a URL, Base64-encode it, and prepend k8s-aws-v1. to it.
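
In rough outline it looks like the following (a simplified sketch using the v1 AWS SDK for Java; the class name, region, and expiry are placeholders rather than the exact production code):

```java
import com.amazonaws.DefaultRequest;
import com.amazonaws.auth.AWS4Signer;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.http.HttpMethodName;
import com.amazonaws.util.SdkHttpUtils;

import java.net.URI;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Date;
import java.util.stream.Collectors;

public class EksTokenGenerator {

    /** Builds a k8s-aws-v1. bearer token for the named EKS cluster. */
    public static String generateToken(String clusterName) {
        // 1. Build a plain GetCallerIdentity GET request against the STS endpoint.
        DefaultRequest<Void> request = new DefaultRequest<>("sts");
        request.setHttpMethod(HttpMethodName.GET);
        request.setEndpoint(URI.create("https://sts.amazonaws.com"));
        request.setResourcePath("/");
        request.addParameter("Action", "GetCallerIdentity");
        request.addParameter("Version", "2011-06-15");
        // The cluster name header is part of what gets signed.
        request.addHeader("x-k8s-aws-id", clusterName);

        // 2. Pre-sign it with the caller's AWS credentials.
        AWS4Signer signer = new AWS4Signer();
        signer.setServiceName("sts");
        signer.setRegionName("us-east-1"); // region the global STS endpoint signs against
        Date expiration = new Date(System.currentTimeMillis() + 60_000); // kept short; the token is short-lived anyway
        signer.presignRequest(request,
                DefaultAWSCredentialsProviderChain.getInstance().getCredentials(), expiration);

        // 3. Re-assemble the signed request as a URL.
        String query = request.getParameters().entrySet().stream()
                .map(e -> SdkHttpUtils.urlEncode(e.getKey(), false) + "="
                        + SdkHttpUtils.urlEncode(e.getValue().get(0), false))
                .collect(Collectors.joining("&"));
        String url = "https://sts.amazonaws.com/?" + query;

        // 4. Base64 (URL-safe, unpadded) encode and prepend the k8s-aws-v1. prefix.
        return "k8s-aws-v1." + Base64.getUrlEncoder().withoutPadding()
                .encodeToString(url.getBytes(StandardCharsets.UTF_8));
    }
}
```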

 

Works like a charm. The code is here.

One thing to keep in mind is that the x-k8s-aws-id header is part of the signature, so the resulting pre-signed URL will fail authentication unless you pass the header as well:

curl <pre-signed url> -H "x-k8s-aws-id: <cluster-name>"

Integrating Fabric8 Kubernetes Client

I am currently using the Fabric8 Kubernetes Java client maintained by Red Hat. It is a joy to work with: robust, fluent, and easy to use.

To make it easy to obtain and refresh credentials for use in the client, I wrapped it all up in a utility class that takes care of everything. Without this, the tokens would time out after 15 minutes.

The following initializes a KubernetesClient instance against the given Kubernetes control plane API, for the cluster named mycluster.
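
A condensed sketch of that utility (the EksClientFactory name is a placeholder, it reuses the token helper sketched above, and it assumes the Fabric8 client re-reads the OAuth token from its Config on each request; if your client version does not, rebuild the client on refresh instead):

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class EksClientFactory {

    public static KubernetesClient create(String masterUrl, String clusterName) {
        Config config = new ConfigBuilder()
                .withMasterUrl(masterUrl)
                // In practice you would also set the cluster CA via withCaCertData(...).
                .withOauthToken(EksTokenGenerator.generateToken(clusterName))
                .build();

        KubernetesClient client = new DefaultKubernetesClient(config);

        // Refresh the short-lived STS token well before it times out.
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "eks-token-refresh");
            t.setDaemon(true);
            return t;
        });
        scheduler.scheduleAtFixedRate(
                () -> client.getConfiguration().setOauthToken(EksTokenGenerator.generateToken(clusterName)),
                5, 5, TimeUnit.MINUTES);

        return client;
    }
}
```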

It starts a background ScheduledExecutorService that refreshes the token every 5 minutes.

This allows you to keep the same KubernetesClient open for hours or days.
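
Putting it together looks something like this (the endpoint URL here is made up):

```java
KubernetesClient client = EksClientFactory.create(
        "https://ABCDEF1234567890.gr7.us-east-1.eks.amazonaws.com", "mycluster");
client.pods().inAnyNamespace().list().getItems()
        .forEach(pod -> System.out.println(pod.getMetadata().getName()));
```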

Cool, eh?

Rob Schoening
