This post was written based on the work of Fahmy Khadiri, Technical Sales Account Manager at Akeyless, in his voice.
In this blog post, I’ll be walking you through Kubernetes authentication and secrets injection using native Kubernetes constructs and the Akeyless Secrets Injection Webhook to fetch secrets from Akeyless Vault Platform into your Kubernetes applications.
Here's the agenda — the problems we're solving and how:
- Why you may not want to use Kubernetes secrets
- How we are addressing this at Akeyless
- Using Kubernetes authentication
- A brief demo of the overall solution (video)
Most of the tools used for the demo are freely available and I’ve listed the prerequisites you’ll need so you can follow along.
What are the problems we are trying to solve?
The first question we want to tackle is: why this approach? Kubernetes has its own key store, so why would we want to leverage an external secrets management system instead of the built-in Kubernetes Secrets?
The short answer is that Kubernetes Secrets have limitations that make them a non-starter for many enterprise deployments.
Limitations of Kubernetes Secrets
First of all, Kubernetes Secrets are base64-encoded rather than encrypted by default, so anyone with basic access to the cluster's backend storage can trivially decode the value of the secrets.
Second, secret sprawl is a problem. You can easily end up with dozens of secrets scattered across YAML files and repositories, which creates bottlenecks and poses operational risks: those secrets can be inadvertently leaked or compromised.
Third, when it comes to development, there's a methodology for building software-as-a-service apps called The Twelve-Factor App. It outlines a series of best practices for modern app development, and one of them is that the app should hold no configuration state locally: everything is provided to it through environment variables or files.
The point is, when you embrace environment variables, external files, and external persistent systems, you end up with a more microservice-friendly architecture, and you can run a single codebase through every lifecycle stage, differentiated only by the environment it receives.
How this translates in Kubernetes is that we can reach our end state of having secrets injected into environment variables without altering the application or its code. The developer doesn't know, and doesn't need to know, what those secrets are; only the application knows them, at runtime.
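As a sketch of that end state (all names here are illustrative, not part of the Akeyless schema), the pod spec only declares which environment variables the app expects; the values never appear in the source code or the image:

```yaml
# Illustrative pod fragment: the app reads DB_USER and DB_PASSWORD from its
# environment at startup. The values are supplied at runtime by the secrets
# injection machinery, never baked into the image or the source code.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: my-app
      image: my-app:1.0        # hypothetical image
      env:
        - name: DB_USER
          value: ""            # populated at runtime by the injector
        - name: DB_PASSWORD
          value: ""            # populated at runtime by the injector
```

The application simply reads `DB_PASSWORD` from its environment; it has no idea where the value came from, which is exactly the twelve-factor separation of config from code.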
How are we solving the problem?
Akeyless has a webhook that listens for events and injects an executable into the containers inside a pod; that executable then fetches secrets from the Akeyless Vault Platform, driven by annotations in your pod deployment file.
We have two operation modes for injecting secrets: 'init' and 'sidecar'.
The first operation mode of secrets injection is the 'init' container. In this mode, secrets are pre-populated into a pod before the application starts, as part of the pod lifecycle. The webhook looks for annotations that match a specific schema, then adds an 'init' container that authenticates and fetches the secrets from the Akeyless Vault Platform. The application then reads those secrets through environment variables, right at startup.
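As a sketch, a deployment in 'init' mode might look like the fragment below. The annotation name and the `akeyless:` value convention are my illustrations of the pattern, not the exact schema; consult the Akeyless webhook documentation for the annotations it actually matches on.

```yaml
# Illustrative 'init'-mode deployment fragment. Annotation names and the
# "akeyless:" placeholder convention are hypothetical examples.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: static-secret-app
  namespace: my-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: static-secret-app
  template:
    metadata:
      labels:
        app: static-secret-app
      annotations:
        akeyless/enabled: "true"   # tells the webhook to mutate this pod
    spec:
      containers:
        - name: app
          image: my-app:1.0
          env:
            - name: DB_PASSWORD
              # placeholder resolved by the injected init container at startup
              value: "akeyless:/K8s/my-apps/db-password"
```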
The second operation mode of secrets injection is the sidecar container. In this mode, an additional container runs alongside the application container for the life of the pod. Sidecar mode has a few benefits, one of which is the ability to track changes to the secrets: we can configure how frequently the sidecar checks for changes to the secret and injects the updated value into the pod's file system.
This gives you the flexibility of addressing use cases where:
- The secret could change
- The application is long-running and you want it to re-authenticate on a regular schedule to fetch the secret
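A sidecar-mode deployment might be annotated as sketched below. Again, the annotation names, the refresh interval, and the secret path are hypothetical placeholders for the real webhook schema.

```yaml
# Illustrative sidecar-mode fragment. Annotation names and values are
# hypothetical -- the real schema is defined by the Akeyless webhook.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dynamic-secret-app
  namespace: my-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dynamic-secret-app
  template:
    metadata:
      labels:
        app: dynamic-secret-app
      annotations:
        akeyless/enabled: "true"                    # opt this pod in to injection
        akeyless/sidecar: "true"                    # run the injector as a sidecar
        akeyless/sidecar-refresh-interval: "60s"    # re-check the secret every minute
        akeyless/secret-path: "/K8s/my-apps/mongo-dynamic"  # dynamic secret to fetch
    spec:
      containers:
        - name: app
          image: my-app:1.0
          # The sidecar writes the secret into the pod's file system, so the
          # app re-reads the file to pick up rotated values.
```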
Here’s a sample architecture of the demo environment I’ll be walking you through.
On the far left-hand side, I have a namespace called my-apps with two pods running in it. One pod carries the 'init' annotations to fetch a static secret, and the other carries the sidecar annotations to fetch a dynamic secret from a MongoDB deployment.
To the right of that, I have the K8s injector namespace. This is the dedicated namespace where we install our Kubernetes webhook injection service. As mentioned earlier, this webhook listens for events and injects an executable into a pod, which then fetches secrets from the Akeyless Vault Platform.
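Installing the webhook is typically a Helm install into that dedicated namespace. The repo URL, chart name, and namespace below are my assumptions based on the Akeyless Helm charts; verify them against the current Akeyless documentation before running.

```shell
# Assumed repo URL and chart name -- verify against the Akeyless docs.
helm repo add akeyless https://akeylesslabs.github.io/helm-charts
helm repo update

# Dedicated namespace for the webhook injection service
kubectl create namespace k8s-injector
helm install secrets-injection akeyless/akeyless-secrets-injection \
  --namespace k8s-injector
```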
Gateway token reviewer
Next, I have the default namespace. This is where I've deployed the gateway token reviewer that authenticates our pods with Akeyless, and I've assigned it ClusterRoleBinding permissions so it can listen on all namespaces in the cluster.
In Kubernetes, the API server needs to authenticate every request it receives. We're going to use the JWT (JSON Web Token) authentication mechanism built into Kubernetes itself: every Kubernetes service account has a JWT. We can use this JWT for authentication, but we have to do it in a known and trusted way, using something we created, we control, and we trust, because we're the ones who created it.
And so, the first step here is to create a service account that we know and trust, which will act as our trusted authority. Its job is to validate the JWT of any service account that talks to us and to verify that the service account lives in the expected namespace.
The other thing to consider is that, out of the box, Kubernetes service accounts are scoped to a single namespace. We want the token reviewer to validate JWTs from other namespaces in the cluster, so we need to grant it extra permissions through a ClusterRoleBinding.
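This is the standard Kubernetes pattern for a token reviewer: bind the service account to the built-in `system:auth-delegator` cluster role, which allows it to call the TokenReview API for tokens from any namespace. The resource names below are illustrative; the cluster role name is standard Kubernetes.

```yaml
# Token-reviewer service account plus the ClusterRoleBinding that lets it
# validate JWTs cluster-wide via the TokenReview API.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gateway-token-reviewer    # illustrative name
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gateway-token-reviewer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator     # built-in role for delegated token review
subjects:
  - kind: ServiceAccount
    name: gateway-token-reviewer
    namespace: default
```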
Next, there are additional pieces of information we need to extract, such as the cluster host (the API server address), the cluster issuer, and the CA certificate, which the gateway will use to communicate with the cluster.
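These values can typically be pulled with kubectl; the commands below are a sketch and assume a recent cluster with service account issuer discovery enabled.

```shell
# Cluster host: the API server address from the current kubeconfig context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Cluster issuer: from the service-account OIDC discovery endpoint
# (available on recent Kubernetes versions)
kubectl get --raw /.well-known/openid-configuration | grep issuer

# CA certificate: decode the certificate-authority-data from the kubeconfig
kubectl config view --raw --minify \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode
```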
An important point to mention here is that the cluster itself does not interact with the external SaaS directly. It utilizes the gateway as a trusted host for this cluster.
We’re dealing with sensitive information here, like the CA certificate and the K8s issuer, so we need to ensure none of it is exposed to the Akeyless SaaS. The customer’s Kubernetes cluster doesn’t have to be publicly reachable; it can stay private as long as the gateway can interact with the cluster.
Finally, I have my gateway also installed in its own dedicated namespace.
Watch the full video, including the demo, below: