Panel: Secrets Management Modernization
Dr. Alex Shulman, Cloud Security Practice Leader, Managing Director, EY
As EY Americas Cloud Cybersecurity Leader, Alex focuses on engineering, architectural controls and processes, and security solutions for public cloud and container platforms.
She holds a PhD, MSc and BSc in Computer Science from Tel Aviv University, has multiple patents and is cited in over 30 scientific publications.
Maria Schwenger, Partner, Cloud Native Build, IBM Consulting
Vinay Puri, Vice President, Head of Security Architecture, Thomson Reuters
Sean Finnerty, Executive Director – Cloud Technology and Services, Merck
Yoav Nathaniel, Senior Cloud Security Architect, Goldman Sachs
Hello, good afternoon, everyone. Is my mic okay? Wonderful. Thanks, everyone, for joining us today. We’ll start with this wonderful panel of experts in secrets management. We have some topics that we’re super excited to discuss with you, because we’ve been experiencing all of these secrets management challenges every day for many years, and these are the topics we’d like to cover today.
So, to introduce myself: Alex Shulman. I’m leading the Cloud Security Practice of EY Americas, and I’ve been dealing with secrets management for many, many years, almost a decade. And I’ll pass it to my guests to introduce themselves. Yoav.
Yoav Nathaniel: Sure. Thank you, Alex. And thank you guys for hosting us. My name is Yoav Nathaniel, and I work in Cloud Security at Goldman Sachs. I focus on cloud security firmwide. We work with just about any cloud security use case you can imagine, including secrets management, key management, and so forth. So, thanks again, Alex, for having us. And off to Maria.
Maria Schwenger: Thank you. Nice to be here. And of course, a lot of questions, please, please, please. My name is Maria Schwenger. Currently, I’m leading Cloud Native Build for Americas for IBM Consulting. What you need to know about me is that I’ve been building DevSecOps, application security, and data protection programs since 2014 for 4 or 5 big Fortune companies. Trying to be proactive, trying to be advanced and innovative, especially in terms of cloud, is my passion. So, I’m happy to talk to you about cloud migration, cloud native, everything that brings us to the next stage of the cloud.
Dr. Alex Shulman: Thank you, Maria. It is my honor to introduce Vinay.
Vinay Puri: This is Vinay. I work for Thomson Reuters; I lead Security Architecture there. Prior to joining Thomson Reuters, my background was all about cybersecurity: 25 years of it, 16 back in defense, and for 9 years I was leading the Deloitte Cloud Security Practice. Our mission right now at Thomson Reuters is, by the end of 2022, to have secrets management enabled across the company. It’s a pain area, and we have 100-plus products in the company. This is something we would like customers to trust us on: how we develop the products and how we embed the secrets. And we offer these services, security as a service… sorry, software as a service, with security embedded in between. So, thanks. Glad to be here. Thanks, Alex.
Dr. Alex Shulman: Thanks for joining us today, Vinay. Sean.
Sean Finnerty: Hi, everybody. Thanks for having me today. I’m Sean Finnerty. I lead a group called Cloud Technology and Services for Merck and Company, based out of New Jersey. My background is I started in data centers, worked as an infrastructure person for many years, racking servers, pulling cable, virtualization. Then I went into security pretty heavily in the middle part of my career, focused on identity and access management in the early days of global identity and access management systems, PKI, incident response, vulnerability and threat management. And then I drank the cloud Kool-Aid, if you will, in 2012, and got heavy into AWS, security architecture, and enterprise builds, all at Merck. I left Merck for a while, went into startups, went to EY, spent a couple of years in consulting. And now I’m back at Merck leading a very large enterprise transformation, pushing many, many traditionally data-center-hosted enterprise workloads into the cloud, but more importantly, transforming them to adopt modern computing practices on the way to the cloud. So, it’s a really exciting time to be a part of the company. Thanks for having me.
Dr. Alex Shulman: Thanks, Sean, and thanks for joining us today. Let’s start with a brief history of secrets management and the evolution of secrets. Because if we look back decades, we were mostly discussing passwords. Today, we’re discussing API keys and connection strings. And actually, many of the breaches that became public recently are due to exposure of API keys: API keys to manage cloud, API keys to connect to other services, or keys for the consumption of third-party tools. So, I wanted to get your feedback, if you can share: how do you see this evolution? And how are our approaches to secrets management evolving over time?
Yoav Nathaniel: Sure. Maybe we can go down the line. What we see is that some of the biggest challenges around secrets management today are specifically around API keys that are generated by third-party vendors, where we don’t have good means of integrating with them to generate a new API key. They don’t have any built-in rotation. They don’t have good restrictions on API key usage, like geofencing. I hope that clarifies.
Dr. Alex Shulman: Essentially, these are API keys that clients and organizations are receiving from the vendors, and they do not control the rotation. And what are your thoughts, Vinay?
Vinay Puri: I would like to add: if you ask about the number of variants, that hasn’t changed much. It’s still certificates, SSH keys, encryption keys, and, again, API secrets and passwords, including passwords in connection strings. Is the mic working? Sorry, sorry. So, all of that is still around. But the major thing I see these days is that AWS KMS is used, but just to deliver the secret; it’s not used for lifecycle management. So, the end-to-end lifecycle is missing: when we look at a key, right from generation until revocation and expiration, the lifecycle is missing. So, that is one thing.
And the major challenge we see: people say they are using tokens, but the tokens sit in configuration files, which are openly available. System-to-system calls are made, but the secrets are openly embedded. If I take care of those, I can take care of the whole infrastructure. So, these are the pain areas.
Dr. Alex Shulman: Thank you. Yes, excellent examples.
Vinay Puri: So, I think it’s very decentralized. We want to move to a centralized model where we can control these secrets and make lifecycle management available for them.
Dr. Alex Shulman: But controlling the lifecycle of these new entities, new types of secrets.
Vinay Puri: Yeah.
Maria Schwenger: Hopefully, I’m not the oldest one here on the stage, but I’m going to go with the history, because you started your question with history. What we had originally was some type of credentials, usually in the code or in a configuration file on a disk. And one of the first problems I experienced was a traversal: basically, a hacker who did a directory traversal and picked up all of the credentials out of the file system. Right? It was not even a sophisticated attack. I also had developers publishing code with credentials on public GitHub, and people using those credentials. So, there is a lot of hard stuff. And then what we told the developers was, “No, no, no, stop. You are not bringing any more credentials into the code. Keep them in the Git repo, keep them in a file, but don’t put them in the code itself. And if you put them somewhere on disk, then you have to encrypt them.” And then they said, “Oh, okay. But then what do I do with this encryption key? Where do I put it? I need to put it somewhere.” Right?
So, the path we started on was very cumbersome and not clear. There was no direction, no sense of where we were going. There was, as Vinay mentioned, no centralized management. And everything was originally created manually. I think the latest statistic I read is that 80 to 90% of the passwords created by humans are insecure, because they’re manually created, not automatically generated. There is no rotation. There is no proper disposal. Another thing is how fast we can react to an incident: revoking certain credentials, or changing all the passwords in case of an incident. These are all items we didn’t historically look at. But, I don’t know, did we really need them back then? Probably not. Now we do.
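The shift Maria describes, away from credentials baked into code or config files on disk, can be sketched minimally as reading the secret from the runtime environment instead. The `DB_PASSWORD` variable name and the injection mechanism below are illustrative assumptions, not any specific platform’s convention:

```python
import os

def get_db_password():
    """Fetch the database password from the runtime environment.

    A secrets manager or CI/CD pipeline injects the value at deploy
    time, so it never lives in source code or a committed config file.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Fail loudly rather than fall back to a hard-coded default.
        raise RuntimeError("DB_PASSWORD is not set in the environment")
    return password

# For demonstration only: a real platform would set this variable
# outside the application's own code.
os.environ["DB_PASSWORD"] = "injected-at-deploy-time"
```

Centralizing the value behind one accessor also gives a single place to later swap in a real secrets manager client and add rotation.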
Dr. Alex Shulman: Because our systems are more complicated. We’re interacting more with third parties. We’re interacting over the cloud. And we have complicated flows.
Maria Schwenger: Yeah, and that’s true.
Dr. Alex Shulman: So, I assume this is the reason for the evolution.
Maria Schwenger: Because if you take a look right now, we’re talking about keys for containers, secrets management for containers, secrets management for Kubernetes. There’s a lot more to it: technologies that we didn’t have before.
Dr. Alex Shulman: Exactly. And then new identities, because, speaking about containers, how will a container authenticate to get the secret? So, in a way, we’re shifting the problem to identity management for these new entities.
Maria Schwenger: Correct. And actually, to put the discussion back into the context of this morning’s sessions, with the new requirements of Zero Trust, we now have applications trusting applications. We have machines trusting machines. And it’s constant verification. Right? So, that’s the difference.
Dr. Alex Shulman: Yeah, exactly. Sean?
Sean Finnerty: Yeah, just to add, the technology landscape is getting more and more complex. When we look at our ecosystem, it’s an incredibly diverse mix of 30-year-old technology and cutting-edge technology. And we struggle a lot with trying to find the magic answer to everything. Right? So, the approach we are forced to take is a middle road. What has served us well, and I think serves a lot of companies well in their journey to the cloud, especially when you look at security challenges, is to find simple, I call them, 80/20 solutions to problems. Right? Solve for the 80, and then plan to deal with the 20 that are more complex. It also starts to manifest itself in the talent that you have available inside your organization. I can probably count on 2 hands the number of people in my organization who actually understand how to do proper API security, all the nuances: token exchange, encryption, key management. It’s a very small number of people who are really, really good at it, and a huge number of people who know how to spell it and maybe know how to Google what to do.
We have to plan for that and account for the fact that that expertise is not in the organization. We either need to train it, or we need to build systems that meet people where they are in terms of their knowledge and expertise, to make sure they’re at least clearing the bar we expect from a security and compliance perspective. And that’s found its way into all aspects of our program as we think through, and touch, thousands of applications moving into the cloud.
Dr. Alex Shulman: Thank you. But simplifying for the 80%, making it easy for developers to consume, creating these common patterns.
Sean Finnerty: Very simple.
Dr. Alex Shulman: That would be as usable to as many people as possible.
Vinay Puri: Creating the patterns is very important, otherwise, you can’t solve this problem.
Dr. Alex Shulman: Easy-to-use patterns covering 80% of the use cases, and then handling the 20% of outliers separately. These are excellent insights. Now, let’s take it to the next step: application migration. Because even after these patterns are created, let’s assume new cloud-native applications integrate with some centralized secrets management. Still, when applications are migrated to the cloud, we’re bringing legacy into the cloud. And in a way, we’re combining applications that were developed for perimeter security on-prem with a new concept of Zero Trust in the cloud. But Zero Trust is not implemented inside applications that we’re lifting and shifting. So, I wanted to get everyone’s opinion: how can we manage this? What are the rules of the road that we should be establishing for everyone?
Sean Finnerty: I live this every day, so I’m going to start with this one. This is literally what we’re doing every single day: trying to figure out how to take applications that struggle with SAML and turn them into modern cloud-native applications. The fact of the matter is, it’s not going to work for every application. You just have to be realistic. We hear the term lift and shift a lot. Now we’re saying lift, tweak, and shift. Then it’s the 6 Rs, and all of these different ways to talk about moving things to the cloud. But the reality is, you have to look at each and every architecture, and you have to pick your battles. There will absolutely be applications that we’re going to move and put in an old-school segmented network with other compensating controls in place, because it’s not worth the money to modernize that application. And quite frankly, we can get the risk mitigated to a point where we’re comfortable cordoning it off, sticking it somewhere, monitoring it, and then just moving on with our lives.
In other cases, it makes sense to actually look at the architecture and invest some time and money to modernize that application. Generally speaking, that investment comes in 2 areas. One is Zero Trust identity, and understanding how authentication, logging, and authorization are happening. And then many other times, it’s a software and vulnerability management conversation, which I could easily spend 2 hours talking about, so I won’t go there. But picking the battles, keeping it simple, identifying those patterns, and then moving applications, as you’re looking at the architectures, towards the patterns that are desirable, what we like to call paved roads versus off-road. So, getting as many apps onto the paved road as possible, with solutions that are standardized and blessed, has really been working well for us to be able to make those decisions quickly. “Hey, this app is really, really old. It’s got a bunch of embedded vulnerabilities. The code’s not great. But hey, we can put it in a CI/CD pipeline, mitigate 80% of the risk, and deal with the rest later.” We’ll take that every chance we get. Right? So, it’s just doing that over and over and over again as we move through.
And then most importantly, in my opinion, catching all the new demand as it comes in and making sure it’s engineered the new way we want: everything in CI/CD, everything with modern, good development practices, modern key management, and identity and access management thinking from day 1, setting ourselves up so we don’t have to do this again 15 years from now.
Maria Schwenger: One of the biggest problems I’ve had throughout the years, especially when lifting and shifting applications to the cloud, has always been key management. In the beginning, with a lack of experience, or less experience, let’s put it this way, key management, or, like Alex mentioned, what we called password management at the time, was actually always an afterthought. Always after: “Oh, we are now ready to test in the new cloud environment. Oh, what about the keys?” Nobody thought about that. Right? Situations which, hopefully, we don’t get into anymore. But that kind of opened a different path in my career specifically. I started to build a program around key management. It can start very simple and small, and then expand. Especially when you think holistically about your application and data layers: how they interact together, how you need validation or transitions in between, let’s say, tokens. All of this requires a holistic approach, a holistic view, before you start your big project. So, never, never, please, leave it as an afterthought. It’s very painful, believe me.
Dr. Alex Shulman: Thank you. Vinay?
Vinay Puri: Sure. I would like to share a live scenario, along the same pattern my friend mentioned. We have a stack of applications. Right? Thousands, not hundreds. And we compartmentalize some of those applications that run on AIX or Unix platforms, where even upgrading the patches on those operating systems requires a lot of development work on the app. Right? Let’s park those things in a colocation somewhere, and we’ll touch them later. We’ll put overarching compensating controls and perimeter controls on them. But where is the investment required? Where the future is. Right? So, we have put together cloud guiding security principles, and every principle has a score attached to it.
You cannot pass through the CI/CD pipelines, and you won’t be able to migrate, if you don’t score 100% there. That’s the benchmark. And these are just the basic principles. I’m not talking about the NIST 800 controls, the good old frameworks, and all the FedRAMP controls. I’m not even talking about that. Basic cloud security principles: you need to have the right kind of agents embedded into the golden AMI. Right? That should be there. And the whole environment should be pristine; it should be scanned. All those basic principles. That has worked very well, and we are seeing good results out of it as well. So, that’s one approach: get there, and then start bringing in the principles of Zero Trust as baby steps.
You’re not going to achieve Zero Trust in 1 year or 2 years. It’s a journey. And it’s a set of principles we need to achieve. It cuts across network, cuts across identity. It spans all those areas, so it won’t happen unless there is a coordinated effort. But, it’s a journey.
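As a rough illustration of the scoring gate Vinay describes, an all-or-nothing check is only a few lines. The principle names below are hypothetical placeholders, not Thomson Reuters’ actual checklist:

```python
def may_migrate(scores):
    """The benchmark described on the panel: 100% on every basic
    cloud security principle, or the CI/CD pipeline blocks migration."""
    return all(scores.values())

# Hypothetical principle scores for one application.
app_scores = {
    "agents_in_golden_ami": True,
    "environment_scanned": True,
    "no_plaintext_secrets": False,  # one failing principle blocks the move
}
```

With `no_plaintext_secrets` failing, `may_migrate(app_scores)` is false and the pipeline would stop the migration.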
Dr. Alex Shulman: So, yes, thank you. Yoav.
Yoav Nathaniel: I think we took a slightly different approach, but it might not be suitable for every use case. We really started with micro-segmentation from the start, and that also includes keys. So, if any new application is being built on the cloud, or even if we’re lifting and shifting an existing application, it has such a granular level of isolation, for whatever networks, identities, and keys it can access, that we pretty much embedded this thinking into the architecture from day 1. And so, teams are able to access their secrets in a way that’s pretty much compatible with a lot of the tooling they were using on-premise, like a Java application pulling its secrets the same way it pulls all its other configs, and so forth. I think we’ve been very blessed with that. It saved us a lot of headaches.
Dr. Alex Shulman: So, let me repeat. In a way, you’re implementing Zero Trust. And because you have these other compensating controls and you’re isolating, you’re reducing the risk of exposed secrets, even if you’re not centralizing the secrets management solution. Is this correct?
Yoav Nathaniel: Exactly. Like Zero Trust from the beginning.
Dr. Alex Shulman: Exactly. And once again, not to resolve but to reduce the risks of exposed secrets, assuming that there will always be some exposed secrets.
Yoav Nathaniel: Yeah. So, we have ways to verify that secrets are not exposed. But exactly, it’s to minimize the blast radius for whatever application this might be.
Dr. Alex Shulman: Yes, and using this as a metric. If the blast radius is low, our risk is low as well. And this is our new metric and new protection mechanism for Zero Trust. Thank you. These are useful insights.
So, let me move to another question. From all of your answers, it is obvious that we need secrets management programs: no solution will resolve the problem until all developers are engaged and understand the criticality of secrets management, protecting secrets and reducing the amount of exposure. Can you please share your lessons learned from such programs? Anything that our audience can find useful? Because I assume every organization is building such programs today. What would you share?
Maria Schwenger: Okay, I’ll tell you my secret about secrets management. It’s not a big secret. I actually presented on Zero Trust and DevSecOps at the DevOps World conference about a month ago. My very first slide had Zero Trust on the left side, DevSecOps on the right side, and in the middle portion I drew a frame. And my son was behind me, saying, “Well, why do you have 3? You talk about 2 things.” And I said, “Well, this middle one is the most important,” because I set the software delivery lifecycle, the development lifecycle, apart. And actually, I went one step further and put in the SSDLC, the secure software development lifecycle.
So, for me, Zero Trust, DevSecOps, key management, they all come within the software development lifecycle. We also have the lifecycle of the secrets themselves, with its different stages. All of these are important, but, again, it’s this holistic approach: how the different constituents, the different stakeholders around the company, collaborate within the development lifecycle, from design all the way to production operation and, let’s say, remediation if needed. That’s really my secret.
Yoav Nathaniel: Yeah, we found that scanning for secrets in the pipelines is not the easiest thing, especially when they come in all shapes and sizes. But what we did find valuable is making the ability to interact with secrets, especially when they have to go through the hands of a human, more accessible, and making sure people are rightly entitled. And if you can let them turn their operations into a script, all the better, because you can potentially one day move from a human-inputted secret to something that’s completely automated, and you already have the right tooling in place for that.
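A toy version of the pipeline secret scanning Yoav mentions might look like the sketch below. The two rules are illustrative assumptions only; production scanners ship far larger pattern sets plus entropy checks:

```python
import re

# Illustrative patterns only -- real scanners have hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_password": re.compile(
        r"password\s*=\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE
    ),
}

def scan_text(text):
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

clean = "db_host = 'db.internal.example.com'"
leaky = "aws_key = 'AKIAIOSFODNN7EXAMPLE'\npassword = 'hunter2hunter2'"
```

Run against every changed file in the pipeline, a scanner like this can block a merge before a credential ever lands in the repo.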
Sean Finnerty: I’d like to add to the SDLC comments. I completely agree. And it’s as much a cultural aspiration as a technology aspiration to build these programs that account for things like secrets management alongside other SDLC requirements, like good documentation, compliance, and software versions. As we implement CI/CD pipelines and look at the SDLC at Merck, we have redefined it as the digital SDLC, which is simply our way of saying an SDLC that’s API-enabled, that you can interact with programmatically from pipelines, rather than having to create PDFs and route things for signatures. That has effected a major change in how we think about navigating these legacy processes, which, taken individually, were very challenging for teams. We would oftentimes find teams working through deployments bottlenecked at a variety of different things along the way. Security would always be a bottleneck. And typically, it was due to secrets management, or to logging requirements that were poorly defined, or where it wasn’t clear how to implement the technology.
Thinking of it as an ecosystem and creating that clear bar that teams have to get over, making sure the documentation is good, the tools are standardized, the SDLC supports these constructs, and, more importantly, that there are code samples showing teams how to do this so they don’t have to reinvent the wheel every single time, has enabled us to advance the entire conversation to a point where we can actually spend time talking about, “What are you doing with your keys? How are you securing them? How are you rotating them?” Because we’re not spending all of our time trying to figure out how to get a document built in less than 2 weeks, and spending all that whitespace on things that are relatively low value in terms of actually advancing the ball on security.
So, frankly, I’ve really spent most of my time in this job pulling all those pieces together and building that single ecosystem of capabilities, much akin to what you described as the secure SDLC. That’s exactly how I think of it. And it has made a really, really big improvement. It’s given us a lot of headspace to start working through actual security challenges.
Dr. Alex Shulman: Vinay, your lessons learned?
Vinay Puri: I can share a value-add example here, because everybody has pretty much covered the rest. Embedding threat modeling with the DevOps teams has really reaped benefits. That’s the secret sauce. And then also embed yourselves right when the user stories are written by the teams of developers and engineers, and start writing the security user stories. If we write the security user stories in parallel with the DevOps stories and the developer stories, all these use cases automatically come out before we go live. Nobody’s pointing a finger: “Now you’re telling me to embed these keys, use these keys, and enable this logging and monitoring.” Everything is called out well ahead of time. If they need to hard-code and configure some of the accounts that need to be locked down from a system standpoint, it’s already there as a security user story. That’s part of the build lifecycle. And over and above that, give the accountability for it to the product teams. Don’t own it. Right?
Sean Finnerty: Yes.
Vinay Puri: And embed yourself with them. Be a champion there. Support them. But let them own it. If they are not passing UAT on those security user stories, they should be telling themselves, “We have not passed this UAT. We don’t want to go live.” Once that language starts coming, you will see things changing.
Dr. Alex Shulman: Thank you. And this is very interesting, because if we’re failing deployments because of secrets that are not properly managed, then my next question is: should we treat secrets the same way we treat vulnerabilities? And I mean exposed secrets. If exposed secrets are detected, should we manage them in exactly the same way we manage vulnerabilities today: having an inventory of such issues, people who are accountable for resolving them within a certain period of time, and tracking the resolution? What are your thoughts?
Yoav Nathaniel: 100%. Depending on the environment, vulnerability might be an understatement. It’s probably an incident, like something that’s worth waking someone up for. And I’ve seen several cases of it. And many times with a secret, if it may have been exposed, then even after you shut down or restrict access to it, you obviously have to replace it. You have to assume that someone potentially took it. So, 100%.
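The revoke-and-replace response described above can be sketched as follows, using an in-memory dictionary as a hypothetical stand-in for a real secrets manager. A real rotation would also have to re-point every consumer of the secret:

```python
import secrets

# Hypothetical in-memory store standing in for a real secrets manager.
secret_store = {"payments-api-key": "OLD-LEAKED-VALUE"}
revoked = set()

def rotate_on_exposure(name):
    """Treat an exposed secret like an incident: revoke the old value
    immediately and issue a fresh, randomly generated replacement."""
    revoked.add(secret_store[name])                  # block the leaked credential
    secret_store[name] = secrets.token_urlsafe(32)   # 32 bytes, ~256 bits of randomness
    return secret_store[name]

new_key = rotate_on_exposure("payments-api-key")
```

The key point is the ordering: the old value goes on the revocation list first, on the assumption that it was taken, and only then is the replacement issued.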
Maria Schwenger: 200% here. So, yes, the software development lifecycle is one part. But also keep in mind, the way we think about secrets today is different. Again, there is no perimeter. We heard in the morning session that the perimeter is something different now. It’s not me connecting through my desktop in the office to the company network. And of course, the perpetrators are inside, the insider threat is a huge issue for us, and how many credentials are stolen? Right? So, our identifier, the username, and our secret, the password in this case, need to be looked at in a different way. We also need awareness of how a password is created. Are we creating quality passwords? Or is it 8 characters, “1, 2, 3, 4, 5”? How easy is this password to guess? It should probably be automatically generated, not manually. How do we store them? How do we retrieve them? Because in many cases, you may also have performance issues. How are the keys kept? How are they rotated? How do we treat system accounts versus user accounts versus automated application accounts? Right? These are all the different aspects of the functionality around secrets. Before, we didn’t think about them this way.
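Maria’s point about password quality, generating secrets with a cryptographic random source rather than by hand, might be sketched like this. The weak-password check is a deliberately toy gate, not a real password policy:

```python
import secrets
import string

def generate_password(length=24):
    """Generate a password from a CSPRNG instead of asking a human."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def is_obviously_weak(password):
    """A toy quality gate: reject short or trivially guessable values."""
    common = {"12345678", "password", "qwerty123"}
    return len(password) < 12 or password.lower() in common
```

A machine-generated 24-character value passes the gate; the trivial sequential password a human might pick does not.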
Dr. Alex Shulman: 300%?
Vinay Puri: 300%. That’s what I wanted to say.
Sean Finnerty: I don’t know if I can go 400%. That might be too far.
Dr. Alex Shulman: Thank you. So, all of us are in agreement on the criticality.
Sean Finnerty: Yeah, I’m trying to think of what to add on top. I agree that an incident is probably closer to how we would think about it than a vulnerability. But I think the tracking and accountability matter. And the theme you’re going to hear from me constantly is education of our ecosystem of professionals, who have a wide range of skills, many of whom don’t understand the importance of a lot of the nuanced modern use cases. So, use these as teaching and learning experiences. As we find them, shine light on them. Instead of smacking people and saying, “What did you do?” use it as a learning experience: “This is why this is important. This is why we reacted this way. And, by the way, we’re updating our security standard to account for this use case. And now we have some patterns in the code repository that allow you to deploy more consistently to address this.” That closed loop has not existed until recently. And I think it is incredibly important; the human aspect of this can’t be overstated. We can’t hire fast enough. We can’t find talent fast enough. So, we have to teach our folks why this is important and why we’re all sitting in a room today talking about it. It’s really, really important.
Maria Schwenger: And go with more automation, probably, and wonderful tools like Akeyless.
Dr. Alex Shulman: No vendors. This is a vendor agnostic panel.
Vinay Puri: It’s a shared responsibility, everybody, right?
Dr. Alex Shulman: Yes.
Sean Finnerty: Yeah.
Vinay Puri: It’s no more like identity tower, right?
Dr. Alex Shulman: Yes.
Vinay Puri: It’s a shared responsibility tower.
Dr. Alex Shulman: So, let me make it more complicated for everyone. If we’re now learning from vulnerability management programs, and we look back at their history, they’re failing. They’re not as efficient and effective as we would like them to be. So, what are the lessons learned from vulnerability management programs that we can reapply to make secrets management programs more effective? Because obviously, we cannot do this tracking manually. We cannot keep chasing each and every developer for each and every secret. So, what can we learn from these, maybe CISO-driven, vulnerability management programs that we can apply now to secrets management?
Sean Finnerty: Maybe I could start with this one. And I’d love to hear from the rest of the panel, because I certainly don’t have all the answers. But I see it as the intersection of shared accountability and responsibility between the product teams that are building and the centralized services that are governing. That intersection is incredibly important. Historically, it’s been a lot of what I call “throw it over the fence” security, where the CISO’s office is responsible for this part and the application teams are responsible for their part. And those 2 teams don’t talk to each other on a frequent enough basis. It ends up being doing things to teams, rather than doing things with them. I think the building-together part, making hybrid squads with talent brought together to achieve the architecture and the security capability that you want for your builds, is incredibly, incredibly important.
I also think release velocity is an incredibly important variable in this equation, and it's changing, thankfully. I don't even know what our average release velocity is, but if I had to guess, it's probably twice a year, maybe once a quarter, on most of our applications. That, by definition, makes it very hard to change anything, such as patching software. If you shrink that release cycle even by 50% and get down to once a week or once every 2 weeks, the additional velocity, and the additional confidence that comes along with your ability to release more frequently, allows you to make changes in your architecture more often and with a higher confidence level. That lets you address challenges like out-of-date software versions or a bad secrets management architecture much, much more frequently.
So, my lived experience is that the combination of those 2 things, building with shared accountability and increased deployment velocity, allows you to tackle historical challenges like vulnerability management, and emerging challenges like an increasingly complex technology landscape, including key management and secrets management. I'm very optimistic that those 2 things are going to make a meaningful difference.
Maria Schwenger: So, talking about vulnerabilities, I'll go back to the Zero Trust idea: constantly validate. That validation stage never gets old. Do we have the proper implementation? If we use patterns, if we take the paved road rather than the off-road, the right way will probably be easier; it will be natural, faster, streamlined. In my program, for example, I have scripts, or use a vendor, to scan for our secrets. Do we still have something in the code? Do we have something in the repo that's not secure? What is our posture, at least? So, this validation should be done properly.
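The kind of repo scanning Maria describes can be sketched in a few lines. This is a minimal, illustrative scanner; the patterns and rule names below are hypothetical examples, not any specific vendor's rule set, which would be far larger and more precise.

```python
import re
from pathlib import Path

# Hypothetical detection rules for illustration only; real scanners ship
# hundreds of patterns plus entropy checks to cut false positives.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_password": re.compile(r"password\s*=\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
}

def scan_repo(root: str) -> list[tuple[str, int, str]]:
    """Return (file path, line number, rule name) for every suspected secret."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            lines = path.read_text(errors="ignore").splitlines()
        except OSError:
            continue  # unreadable file; skip rather than fail the scan
        for lineno, line in enumerate(lines, start=1):
            for rule, pattern in SECRET_PATTERNS.items():
                if pattern.search(line):
                    findings.append((str(path), lineno, rule))
    return findings
```

A scan like this would typically run in CI on every commit, so the "validation stage" Maria mentions happens continuously rather than during an incident.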
Another thing that is also important, on my side, is policy. Right? The company-established policy. You guys mentioned that you're targeting education, targeting new development and digitalization. Well, having the right policies, the right standards to implement the policies, and the right guidelines to show and help developers and architects implement the policy is also very important. And again, key management or secrets management should never be an afterthought. It should be right there in our guiding principles.
Yoav Nathaniel: Honestly, not too much to add on top of that. It should 100% be validated automatically. And I think that sometimes, what you see is the architecture team, which typically sits between the application developers and the CISO, a sort of hybrid team, ends up creating patterns that can mitigate these things in advance, especially if they put a roadblock on the unpaved road, because sometimes they're able to do that and just force everybody onto the paved road. We've seen tremendous success whenever we've implemented anything like that. And I don't think secrets are very unique in this aspect. I think a lot of vulnerabilities, a lot of misconfigurations, should be treated similarly.
Dr. Alex Shulman: So, let me summarize the analogy. If we're learning from vulnerability management, the lessons learned are: first, create these images that will be redeployed and manage them centrally; don't wait for everyone to patch independently, because then the load will double, triple, and grow exponentially. That's one thing. Create the paved road with tools, so that it will be easy for everyone, and then automate the management, correct? And increase the velocity, so that we improve all the vulnerability management programs, because we will be automating more than we were before. Because definitely, we want to do more. We want to be better. We want to do this faster.
And then the last lesson learned that I captured is: engage everyone. It's not just the CISO's problem. It's not just an architecture problem. It's not a problem of product engineering. It is an enterprise-wide problem that everyone should be part of, because all the engineering teams have secrets. Did I capture everything?
Sean Finnerty: Absolutely.
Dr. Alex Shulman: Thank you. So, then maybe let’s get back to the last type of secrets that we started with, these third-party keys that we cannot rotate. What should we be doing with them?
Vinay Puri: APIs are certainly painful. It depends on the scenario. Some organizations may not have a third party involved; some may. So, let me take scenario number 1 here. Right? APIs only work well, with security embedded at the right stage, when you develop the APIs for internal teams. Right? Generally, the mistake is that you do not recognize who's going to be using these APIs from the outside. Right? So, building the patterns and scenarios is super important here. First of all, we do that, and then the basic principles when we embed API security in this scenario are authorization, access management, and monitoring for anomalous behavior, if there is anything going on. Those, I think, are the key factors I would think about, along with using standard protocols like OAuth for the authorization.
And if a third party plays a role where you are coming from, I think that's where we need to start thinking about opening only the interfaces which are required, and start using API gateways, start using throttling. Right? And embed the API gateway in a way that you only pass information through the interfaces which are strictly required by them, rather than having them come in with lateral movement all available, because then we are defeating the purpose. Right? So, fine-grained access management and authorization, along with granularity in what you offer from those API interfaces to the third party, is the key here.
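The gateway controls Vinay lists, validated tokens, per-partner interface allow-lists, and throttling, can be sketched as a single check. This is a toy in-memory sketch under stated assumptions: the partner names, paths, and limits are hypothetical, and real gateways would do OAuth token introspection and distributed rate limiting rather than the boolean and local counters used here.

```python
import time
from collections import defaultdict

# Hypothetical allow-list: which API paths each third party may call.
PARTNER_ALLOWED_PATHS = {
    "vendor-a": {"/v1/orders/status"},
    "vendor-b": {"/v1/invoices", "/v1/invoices/export"},
}

RATE_LIMIT = 100       # max requests per partner...
WINDOW_SECONDS = 60    # ...per rolling window
_request_log = defaultdict(list)

def gateway_check(partner_id: str, path: str, token_valid: bool) -> tuple[bool, str]:
    """Return (allowed, reason). Token validation (e.g. OAuth introspection)
    is assumed to happen upstream and arrives here as a boolean."""
    if not token_valid:
        return False, "invalid or expired token"
    if path not in PARTNER_ALLOWED_PATHS.get(partner_id, set()):
        return False, "interface not exposed to this partner"
    now = time.time()
    recent = [t for t in _request_log[partner_id] if now - t < WINDOW_SECONDS]
    if len(recent) >= RATE_LIMIT:
        return False, "throttled"
    recent.append(now)
    _request_log[partner_id] = recent
    return True, "ok"
```

The point of the allow-list check is exactly Vinay's: a compromised partner credential can only reach the interfaces explicitly exposed to that partner, not move laterally across the API surface.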
Dr. Alex Shulman: Thank you. At all levels. We want this inside the organization, as a service that we're providing, as well as with our partners and the third-party tools that we're consuming, in a way driving everyone to adopt this behavior.
Vinay Puri: Absolutely. Maybe I've covered only 2 use cases, where the product is used only internally, and where it's offered as software as a service to the outside. Right? Those 2 use cases cover a lot. Right? There may be additional use cases. I'm not saying this is the end of everything. Right? But others may have an opportunity to add to it.
Dr. Alex Shulman: Thank you, Vinay. Great use cases.
Yoav Nathaniel: Yeah. What we’ve also seen is that, for a single vendor, there might be 5 API keys being generated for it, because there’s 5 downstream systems that are going to be leveraging it and calling its APIs. And so, even though they are potentially the same exact API key in 5 different places, which is duplication, or if it’s different API keys with different levels of control for the same vendor, you still need a good way to map that. We can’t have it just random keys just floating around. And what do you do if that vendor is compromised? How do you trace back to those keys? Where do they sit in your system? So, being able to understand in a moment what keys for which vendors sit where is super, super important, especially because they can’t be rotated? So, some of the practices that can sometimes be done is that, for rotating, every 9 days, you’ll have an engineer go and rotate it manually. Let’s say it comes down to that. How does that engineer find those keys? Right? If you don’t have the proper inventory, what do you do?
Maria Schwenger: Yeah, I have 2 comments, actually. The first is on the inventory. Right? Keeping that inventory is hard, not to say impossible. Again, we don't have thousands of people working on this. So, this usually turns out to be a problem, especially if you have some type of incident. That's why centralized management is very, very important.
The second thing, like Yoav mentioned, is that I would go back to architecture, proper architecture, and thinking about keys as part of that architecture. I'll give you one example that's a little bit interesting, let's put it this way. I have a third-party, vendor-managed system, and they have encryption keys, because they encrypt the data on their side. Then I have to pull this data through some ETL process, and it ends up in my customer data store. In my customer data store, I have completely different encryption. So, I need to decrypt, encrypt, decrypt, encrypt. Then I have other processes across 45, 46 applications dealing with other keys and exchanging data with certain tokens. And if you have to look at this architecture all the way through and follow the data throughout your enterprise from a key perspective, it's a nightmare. Right?
So, this is what happens when you build incrementally. But at a certain point, we need to keep up with this architecture and maybe go and rearchitect or optimize, whatever we need to do. Because, at a certain point, it's not optimal.
Sean Finnerty: I think, to Yoav's point, it is partially a governance problem. And as we're moving to the product model, we've been thinking a lot about what products we offer internally to add value from a governance perspective. One product we've been talking a lot about is exactly that: inventorying third-party connections, third-party keys, third-party data exchanges, third-party information sources. It's emerging that that's a pretty important product for us to have internally, to keep track of all of this. And what's been driving it is the point Yoav made: if there's a breach and we need an inventory of everything that had potential exposure to it, and we don't have that inventory, we would literally be turning over couch cushions and looking under the carpet to try and find where all this stuff was, because we have thousands of teams building things.
So, we're doubling down on that from a governance perspective, as we think about inventorying what connections we have in and out from an ingress-egress perspective, but also, as teams are building and bringing in third-party services, inventorying those keys and those connections, and making sure there's a recertification, either annually or biannually, some sort of interval where some human looks at this and says, "Hey, is it still fresh? Do we still do business with this company? Do we still use the tools?" Just asking the questions, doing the due diligence; we've found that's been worth the effort. Of course, the question is how we can automate some of that and take some of the people out of the process. But in the meantime, we've got some people-based processes sitting on top of it, and we at least sleep a little better at night knowing somebody is keeping track of it.
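The recertification interval Sean describes is easy to automate at least partially: flag every third-party connection whose last human review is older than the policy interval. A minimal sketch, with a hypothetical annual interval and made-up vendor names:

```python
from datetime import date, timedelta

# Hypothetical policy: annual recertification; adjust to your own standard.
RECERT_INTERVAL = timedelta(days=365)

def recert_overdue(connections: dict[str, date], today: date) -> list[str]:
    """Return the third-party connections whose last human review
    is older than the recertification interval."""
    return [name for name, last_reviewed in connections.items()
            if today - last_reviewed > RECERT_INTERVAL]
```

A job like this doesn't replace the human review Sean wants; it just makes sure the "Do we still do business with this company?" question gets asked on schedule.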
Dr. Alex Shulman: Thank you for sharing. And let me summarize, because I think what we just surfaced here is very insightful, because it goes back to history, and this is how we started. Originally, we had secrets management tools owned by identity and access management, managing privileged passwords and rotating them. Now, we're actually talking about an inventory of API keys and connection strings that we do not own. We view these resources, or assets, as our risks now, and we're discussing having an inventory, or risk registry, for us to manage the associated risks. At least for myself, we were not discussing this even 5 years ago. So, this is a very interesting evolution of our industry to follow. Thank you for sharing this. I know that we're almost done with the panel. Are there any additional lessons or insights that you'd like to share with the audience?
Vinay Puri: I think we would like to have some questions addressed to us so that we can answer rather than sharing.
Dr. Alex Shulman: Yes. Wonderful. Yes. Thank you. Questions from the audience. Would really love to hear your thoughts and pain points that you’re experiencing. Yes.
Sean Finnerty: Great question.
Maria Schwenger: Let’s repeat the question. Maybe not everybody heard the question. The question was, if we have to inventory the third-party keys, does this become an area of vulnerability?
Sean Finnerty: Yeah, for me, at least, it's more about knowledge of the relationship and the fact that a key exists, more than, say, managing or escrowing the key or doing something tangible with it. So, it's more of an audit, an oversight, sort of old-school governance approach that we're taking right now. I am interested in a deeper discussion about whether we could actually offer a service that does something active in that relationship, rather than just being basically a CMDB for connections and third-party relationships. That's where we are today. I'm really interested to see how it grows, though. It's a really good question.
Maria Schwenger: This also goes a little bit into incident response. I would probably be interested to hear if there are too many keys being created, too many keys being requested. A secret has been accessed, let's say, 1,000 times today, when usually I'd expect no more than 100. So, there's a lot on that other side, the anomaly side, that expands on your question.
Dr. Alex Shulman: So, compared to a CMDB and other inventory management tools, we would actually like to see more about the usage of the key. If we're bringing in third-party keys, how are they being used? Who has access? How frequently? And…?
Vinay Puri: That’s an interesting topic as well, because that brings the topic of bring your own key as well. Right? And then…
Sean Finnerty: Yeah.
Dr. Alex Shulman: Yes. I wish we had time. I think 'bring your own key' needs a dedicated panel, because that's a very heavy topic. Yes, Yoav.
Yoav Nathaniel: Well, on top of that, it's not just a security thought, it's also almost a productivity and resiliency thought. If a certain vendor does rate throttling, regardless of how many API keys you generate, being able to see how many times each of the 5 vendor keys you've generated is actually being used allows you to narrow it down: "Wow, this vendor's malfunctioning because of this specific application that's calling it." Many times, when you work with vendor products, you don't have that level of visibility from the vendor side.
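The usage signals Maria and Yoav are describing, a secret accessed 1,000 times when 100 is normal, or one of five vendor keys dominating the call volume, both reduce to counting accesses per key and comparing against a baseline. A minimal sketch, with a hypothetical anomaly factor and made-up key names:

```python
from collections import Counter

def flag_anomalous_keys(access_log: list[str],
                        baselines: dict[str, int],
                        factor: int = 5) -> dict[str, int]:
    """Given today's key-access events and an expected daily count per key,
    return keys accessed more than `factor` times their baseline."""
    counts = Counter(access_log)
    return {key: n for key, n in counts.items()
            if n > factor * baselines.get(key, 1)}
```

The same per-key counts also answer Yoav's resiliency question: when a throttled vendor starts failing, they show which key, and therefore which calling application, is burning the quota.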
Maria Schwenger: I wanted to add something that I think is important: building the culture around key management and secrets management. Building a culture of collaboration and partnership always takes time, and you have a lot of different stakeholders in the mix now. I think that, at certain points, you will realize that certain teams at your organization are going to be more engaged, becoming a driving force and pushing forward, while others are going to be followers, a little slower. Sometimes that creates a problem. But we should probably use it from the positive side: give more wings to the teams that are pushing forward, and try to support the teams that are coming along a little behind. That's my experience in building this type of culture and these types of programs. That's what helped me, so I wanted to share it.
Dr. Alex Shulman: Yes, thank you. Thank you for sharing wonderful insights. I think there were so many lessons learned. And I'm sure you have more, and we can continue discussing. But thank you for sharing with everyone today. Thank you for being with us. Thank you for the wonderful questions. Looking forward to continuing these discussions.
Sean Finnerty: Thanks, everyone.
Yoav Nathaniel: Thank you.
Vinay Puri: Thanks, everyone. Thanks for having us.