
Sam Gabrail – Platform Engineer

Secure AWS S3 with Akeyless

Incorporating secrets management in your internal developer platform is essential. In this video, I will show you an end-to-end demo of a platform you can build to create an S3 bucket. Very simple, but effective.

We will use Port as our front-end portal, Akeyless for secrets management, GitHub Actions for our CI/CD pipeline, and Terraform as our infrastructure-as-code tool. I will focus on how GitHub Actions generates just-in-time credentials for AWS using Akeyless.

These credentials will be used by Terraform to provision our S3 bucket. I'll also show you how to solve the secret zero problem by having GitHub authenticate to Akeyless with a JWT token. Ready? Let's dive in and see how to use Akeyless in a workflow for your platform teams.
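As a quick preview of that secret zero fix, the only thing the workflow needs on the GitHub side is permission to request an OIDC token. Here's a minimal sketch of that permissions block (standard GitHub Actions syntax):

```yaml
# Minimal sketch: let the workflow request GitHub's OIDC (JWT) token, which
# Akeyless can then verify. No long-lived Akeyless credential is stored anywhere.
permissions:
  id-token: write   # allows the job to request a signed OIDC token from GitHub
  contents: read    # normal read access to the repository
```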

Here we have Port, our front-end portal. We're in the self-service tab, and I already have a self-service action here called "Create an S3 bucket". Whether it's a developer or one of your own platform engineering colleagues, anyone can use this to create an S3 bucket. We can give it a name.

Let's call it TeKanaid test, and we can choose the region; we'll leave it as us-east-1. Click Execute, and you can see the details for the execution: it's in progress, and we can quickly take a look at the GitHub Actions pipeline that's running. When we clicked to create the bucket, it kicked off a GitHub Actions pipeline with multiple steps, but the key step here is fetching secrets from Akeyless.
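To make that step concrete, here's a rough sketch of what it can look like in the workflow YAML. This is an assumption-heavy sketch: the community action name, version, and input format may differ from what the video's repo uses, and the access ID and secret paths are placeholders.

```yaml
# Sketch of the "fetch secrets from Akeyless" step (action name, inputs, and
# paths are assumptions/placeholders, not taken from the video's repo).
- name: Fetch secrets from Akeyless
  uses: LanceMcCarthy/akeyless-action@v3
  with:
    access-id: p-xxxxxxxxxxxx   # Akeyless auth method access ID (not a secret)
    # static secrets -> environment variable names
    static-secrets: '{"/infra-port/port-client-id":"PORT_CLIENT_ID","/infra-port/port-client-secret":"PORT_CLIENT_SECRET"}'
    # dynamic secret producing a short-lived AWS access key pair
    dynamic-secrets: '{"/clouds/aws-lab-zero-demos":"AWS_CREDS"}'
```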

So we want to build an S3 bucket in AWS, and we need access to AWS.

Not only that, we also want to respond back to Port with some logs so that you can see what's going on within Port. What you can see here is that we're running Akeyless and pulling some static secrets: the Port client ID and the Port client secret.

We're also getting dynamic secrets for AWS from Akeyless, essentially creating credentials so that Terraform can go out and create the S3 bucket for us. And we're creating a log message here that goes back to Port.
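That log message step might look something like the sketch below, using Port's GitHub action. The action name and inputs are assumptions based on Port's documented action, and the shape of the Port payload is an assumption too.

```yaml
# Sketch: push a log line back to the Port run that triggered this pipeline.
# Action name/inputs are assumptions; PORT_CLIENT_ID/SECRET were fetched from Akeyless.
- name: Log to Port
  uses: port-labs/port-github-action@v1
  with:
    clientId: ${{ env.PORT_CLIENT_ID }}
    clientSecret: ${{ env.PORT_CLIENT_SECRET }}
    operation: PATCH_RUN
    runId: ${{ fromJson(inputs.port_payload).context.runId }}   # assumed payload shape
    logMessage: Initiating creation of the S3 bucket...
```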

So Port will understand what's going on. You can see we have one message initiating creation of the S3 bucket and a final message saying the S3 bucket creation has completed, with the name TeKanaid test that we just saw. The rest is simple: configuring the AWS credentials, generating a backend HCL file so Terraform uses an S3 bucket as its backend, then initializing, formatting, and validating Terraform, and finally running a Terraform apply. That creates an AWS S3 bucket called TeKanaid test.
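The Terraform steps themselves are plain CLI calls. Here's a hedged sketch: the state bucket, key, and variable names are placeholders, and the short-lived AWS credentials from Akeyless are assumed to already be in the environment.

```yaml
# Sketch of the backend generation and Terraform steps (placeholder names).
- name: Generate backend.hcl
  run: |
    cat > backend.hcl <<EOF
    bucket = "my-terraform-state-bucket"            # placeholder state bucket
    key    = "s3-bucket-demo/terraform.tfstate"
    region = "${{ inputs.region }}"
    EOF
- name: Terraform init, fmt, validate, apply
  run: |
    terraform init -backend-config=backend.hcl
    terraform fmt -check
    terraform validate
    terraform apply -auto-approve -var="bucket_name=${{ inputs.bucket_name }}"
```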

We have a Port entity resource in Terraform, which allows us to create an entity inside of Port for the S3 bucket we've created. We see the outputs here, our region and our S3 bucket name; two resources were created, plus a final log message back to Port. If we go back to Port, you can see this has completed successfully. And if you go into our catalog under S3 buckets, we see our S3 bucket has been created with its identifier, in the us-east-1 region.
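In the video, that entity is created by the Port Terraform provider. As an equivalent sketch in the same workflow style used above, an upsert through Port's GitHub action would look roughly like this; the blueprint and property names are assumptions.

```yaml
# Sketch: report the new bucket back to Port's catalog as an entity.
# The video uses the Port Terraform provider instead; names here are assumed.
- name: Upsert S3 bucket entity in Port
  uses: port-labs/port-github-action@v1
  with:
    clientId: ${{ env.PORT_CLIENT_ID }}
    clientSecret: ${{ env.PORT_CLIENT_SECRET }}
    operation: UPSERT
    identifier: ${{ inputs.bucket_name }}
    blueprint: s3_bucket                      # assumed blueprint name
    properties: '{"region": "${{ inputs.region }}"}'
```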

And we have an audit log as well. We can also destroy this bucket very easily: if we go back to S3 buckets, you can click the "Delete an S3 bucket" action right from here.

Or, if you go to self-service again, there's a "Delete an S3 bucket" action; select the one we just created and click Delete.

Once again, that gets things initiated, and in a few seconds you'll see our GitHub pipeline show up here.

We click that. Once again, we’ll see the pipeline running. This time, the pipeline is going to run with a destroy action.

And the way this works is that the pipeline is defined with multiple inputs. If I were to create a new workflow run, you'd see that it requires the S3 bucket's name, a region, and whether you want to apply or destroy.

And in this case, the Port action populates this input for the GitHub Actions pipeline with destroy instead of the default apply. We also have Port's payload, which gets used in various places.
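Those inputs live in the workflow's trigger definition. Here's a sketch; the input names are assumptions, chosen to match the earlier snippets.

```yaml
# Sketch of the workflow trigger with the inputs Port populates.
on:
  workflow_dispatch:
    inputs:
      bucket_name:
        description: Name of the S3 bucket
        required: true
      region:
        description: AWS region for the bucket
        required: true
        default: us-east-1
      action:
        description: Whether to apply or destroy   # Port sends "destroy" for deletes
        required: true
        default: apply
      port_payload:
        description: Port's action metadata (includes the run ID used for logging)
        required: true
```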

So once again, the pipeline is running, this time with a Terraform destroy.

As you can see, this has completed successfully.

But the key thing here with Akeyless is that we're able to dynamically grab AWS credentials and not have to worry about any kind of long-lived credentials for AWS.

In this instance, the dynamic secrets live for only one hour. But, of course, you can configure that based on how long Terraform and the whole pipeline need to provision your resources.

Going back to Port, we see a success, and back in our catalog, our S3 bucket has now disappeared. Now let's take a quick look at the configuration on the Akeyless side of things. I'm inside my gateway here.

And if I go into clouds and then into my AWS Lab Zero Demos, this is the dynamic secret that gets created.

You can create one on demand like this, which will give you your access key ID and secret access key, valid for sixty minutes, one hour, as you see here. And that's exactly what our GitHub Actions pipeline does to create those credentials and use them. We can see our target properties: we're using this particular target, and the permissions here include a user policy that grants Amazon S3 full access. We're also part of an Akeyless workshops user group that has other policies as well, and these policies get attached to the IAM user that Akeyless creates.
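If you want to see that same on-demand fetch from the command line, the Akeyless CLI can produce the credentials directly. Shown here as a workflow step for consistency with the earlier snippets; the secret path is a placeholder based on the folder names in the video.

```yaml
# Sketch: fetching the same dynamic AWS credentials via the Akeyless CLI.
# Returns JSON containing the temporary access key ID and secret access key.
- name: Fetch AWS credentials on demand (illustration only)
  run: akeyless get-dynamic-secret-value --name "/clouds/aws-lab-zero-demos"
```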

If we go back to this target, we have connection details that allow Akeyless to communicate with AWS. We also have an authentication method, which is really key here: the GitHub authentication method. It solves the secret zero problem in that the GitHub Actions pipeline is already authenticated into Akeyless without having to hard-code any kind of secret for the pipeline to access Akeyless, which is beautiful.

And the way this works is that we have associated roles here. We have a few of them, and this is the one we want to focus on: the GitHub Port role.

And in this role, we have a sub claim. If you look inside the sub claim, I have a repository sub claim that is tied to the particular repository we're working with. You can see it's samgabrail/akeyless-platform-team-support, and that's exactly the same GitHub repo we're using here. If I go back to the code, you see samgabrail/akeyless-platform-team-support.
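For reference, these are the kinds of claims GitHub bakes into the OIDC token it issues to the workflow (shown as YAML for readability; the repo slug is my reading of the one in the video). The Akeyless role's sub claim check matches against these.

```yaml
# Relevant claims in GitHub's OIDC token; an Akeyless sub claim like
# repository=<owner>/<repo> pins access to exactly this repository.
repository: samgabrail/akeyless-platform-team-support   # assumed exact slug
repository_owner: samgabrail
ref: refs/heads/main    # sub claims can also pin a branch or environment
```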

And what happens is that the GitHub pipeline is already authenticated into Akeyless and is able to retrieve secrets from Akeyless depending on the rules that are part of this role. In the rules, I have the ability to retrieve secrets from the infra-port and clouds paths.

So we saw before that clouds has our AWS dynamic secret.

We also have another folder here that I didn't show you, infra-port, which contains our Port client ID and our Port client secret in the form of static secrets.

So that way the pipeline is able to authenticate into Port and push back the logs that we saw inside of Port.

All of that is managed in our secrets management tool, in this case Akeyless. It's really nice to see how we can securely deliver an internal developer platform through the Port portal, with Akeyless as our secrets manager. Thanks for watching, and I'll see you in another video.