May 14, 2026
Posted by Joyce Ling
In 2024, attackers breached the US Treasury Department not by exploiting a zero-day vulnerability or breaking encryption, but by using a leaked API key for BeyondTrust’s authentication platform. One exposed static credential bypassed millions of dollars in security investments and gave attackers direct access to Treasury systems. The breach is not an anomaly. GitGuardian’s 2026 State of Secrets Sprawl report found that nearly 29 million new secrets were exposed on public GitHub in 2025 alone, a 34% year-over-year increase and the largest single-year jump ever recorded.
API keys are the most pervasive form of machine credential, and they are structurally insecure: static, long-lived, bearer-based, and too often stored in places they should never be. This guide covers what API key management actually requires, why the problem is accelerating with AI agent adoption, and what modern alternatives look like when the goal is to eliminate the risk rather than just manage it.
Key Takeaways

- API keys are static bearer credentials: whoever holds the key gets access. They have no native expiry, no user binding, and no cryptographic proof of identity.
- 28,649,024 new secrets were exposed on public GitHub in 2025 (GitGuardian 2026 State of Secrets Sprawl), a 34% YoY increase and the largest jump ever recorded. 70% of leaked secrets from 2022 are still active.
- AI agents amplify API key risk dramatically: they make more API calls, pass keys through more systems, and expose them in logs, prompts, and MCP config files.
- Best practices: least privilege, rotation every 30–90 days (or event-driven), vault-backed storage, and never storing keys in source code or production environment variables.
- For service-to-service authentication, dynamic secrets, OAuth 2.0 client credentials, and workload identity (SPIFFE/SPIRE) eliminate the static-key problem at the architecture level.
- Akeyless replaces static API keys with dynamic secrets and zero-knowledge encryption: keys are generated on demand, expire automatically, and are never stored in plaintext.
What Are API Keys?
An API key is a static string credential, typically 20–50 alphanumeric characters, that a client sends with each request to identify and authenticate itself to an API. When a request arrives, the API looks up the key in its registry, verifies it is valid and active, and then grants access based on whatever permissions that key was issued with.
API keys are transmitted in one of three ways: as an HTTP header (x-api-key: your-key), as a query parameter (?api_key=your-key), or in the request body. The header approach is the safest of the three: query parameters appear in server logs, browser history, and referrer strings, making accidental exposure significantly more likely.
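To make the transport difference concrete, here is a minimal sketch using Python’s standard library; the endpoint and key value are hypothetical, chosen only for illustration:

```python
import urllib.request

API_KEY = "demo-key-123"  # hypothetical key, illustration only

# Safer: the key travels in a request header, which stays out of
# URL-based server logs, browser history, and referrer strings.
req = urllib.request.Request(
    "https://api.example.com/v1/data",
    headers={"x-api-key": API_KEY},
)

# Riskier: the same key in a query string is written to access logs
# and can leak via the Referer header on any outbound link.
unsafe_url = f"https://api.example.com/v1/data?api_key={API_KEY}"
```

Both requests authenticate identically at the API; the difference is purely in how many places the key is copied along the way.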
Common use cases include authenticating services to third-party APIs (payment processors, mapping services, AI APIs, analytics platforms), controlling access to internal microservices, and tracking usage for billing and rate limiting. They are popular because they are simple: one string, zero infrastructure. That simplicity is also their greatest liability.
Why API Keys Are Inherently Insecure
API keys have five structural properties that make them a weak credential type regardless of how carefully they are managed:
- Static and long-lived: Unlike session tokens or JWTs, API keys do not expire by default. An issued key stays valid until explicitly revoked, which, in practice, often means it stays valid indefinitely. GitGuardian found that 70% of secrets leaked in 2022 were still active in 2025, three years after exposure. Each one is an open door.
- Bearer credentials with no identity binding: API keys authenticate the key, not the entity holding it. There is no cryptographic proof that the caller is who it claims to be. If the key is stolen, the attacker is indistinguishable from the legitimate caller.
- No built-in scope enforcement: Many API keys are issued with broader permissions than the requesting service needs. A key created for read access to one endpoint often grants more if the issuing team did not deliberately restrict it.
- Plaintext storage risk: API keys must be stored somewhere readable (environment variables, config files, CI/CD pipeline secrets, or code), and each location is a potential exposure vector. Google’s own documentation recommends migrating away from API keys to IAM policies and short-lived service account credentials for this reason.
- No native revocation signalling: When a key is compromised, the only remediation is revocation, but there is no universal standard for notifying dependent services. Revocation breaks things. The operational cost of revoking a widely used key creates pressure not to rotate, which extends exposure.
| 29 million new secrets exposed on public GitHub in 2025, a 34% YoY increase, the largest jump ever recorded. (GitGuardian State of Secrets Sprawl 2026) |
Real-world breaches confirm this is not theoretical: the 2024 US Treasury breach via a leaked BeyondTrust API key, the exposure of New York Times source code, Sisense’s credential leak. In each case, a single static key was the entry point. The OWASP API Security Top 10 consistently flags broken authentication and excessive data exposure as the leading API risks, and static API keys without proper lifecycle management contribute to both.
The Growing Risk: API Keys in AI Agent Workflows
The arrival of AI agents at scale has created a new and considerably worse version of the API key problem. Traditional applications make predictable API calls from known, controlled locations. AI agents are different: they operate autonomously, integrate with dozens of external services, generate their own API calls based on runtime context, and run inside environments with reduced visibility.
The specific attack surfaces AI agents introduce:
- Prompt injection and credential exposure: AI agents receive instructions via prompts and tool outputs. A manipulated prompt (embedded in a web page the agent summarises, a Slack message it reads, or a document it processes) can instruct the agent to reveal credentials from its context window or environment. OX Security has documented prompt injection attacks that direct agents to reveal API keys stored in logs.
- Log exposure at scale: AI agents make significantly more API calls than traditional applications. More calls mean more log entries containing request headers and parameters, and more opportunities for API keys to appear in plaintext in log streams that are often broadly accessible inside organisations.
- MCP configuration file sprawl: Model Context Protocol, the emerging standard for connecting LLMs to external tools, introduced a new class of credential exposure. GitGuardian found 24,008 unique secrets in MCP-related configuration files on public GitHub in 2025, with 2,117 confirmed valid. AI integration tooling is spreading the credential problem into new categories of artifact.
- Supply chain risk from AI tool dependencies: AI agent stacks often include multiple open-source libraries, vector database clients, and plugins. Each dependency that can read environment variables is a potential supply chain exfiltration vector, demonstrated by malicious PyPI packages in 2024 that silently exfiltrated environment variables during import.
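One mitigation for the log-exposure problem above is to scrub key-like strings before records are written. A minimal sketch using Python’s logging machinery; the regex is a hypothetical pattern (key formats vary by provider, and a real deployment would tune it per credential type):

```python
import logging
import re

# Hypothetical pattern: many provider keys are long alphanumeric tokens,
# often with a recognisable prefix such as "sk-". Tune per provider.
KEY_PATTERN = re.compile(r"(sk-|api[_-]?key=)[A-Za-z0-9_\-]{16,}")

class RedactSecretsFilter(logging.Filter):
    """Scrub key-like substrings from records before they are emitted."""

    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = KEY_PATTERN.sub(r"\1[REDACTED]", str(record.msg))
        return True  # keep the record, now redacted
```

Attaching a filter like this to handlers shared by agent frameworks means request headers echoed into logs never carry a usable key, even when call volume explodes.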
| AI service credential leaks increased 81% year-over-year in 2025. Secrets tied to MCP config files: 24,008 exposed, 2,117 confirmed valid. (GitGuardian State of Secrets Sprawl 2026) |
The answer for AI agents is not better API key hygiene; it is credential elimination. AI agents should use short-lived tokens derived from workload identity, not static API keys injected at startup. When an agent needs to call an external API, the request for credentials should go through a secrets proxy or vault that issues a time-limited token, logs the access event, and never exposes the underlying secret to the agent’s runtime context.
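The issuance pattern can be sketched as follows. This is an in-process stand-in, not a real proxy: in production the agent would make a network call to a vault, and all names here (function names, the agent ID) are illustrative:

```python
import secrets
import time

def issue_scoped_token(agent_id: str, ttl_seconds: int = 300) -> dict:
    """Stand-in for a secrets proxy: mint a short-lived token per request.

    The agent never sees the upstream static credential; it receives
    only this token, and the issuance event is recorded for audit.
    """
    return {
        "token": secrets.token_urlsafe(32),        # unique per request
        "expires_at": time.time() + ttl_seconds,   # enforced expiry
        "audit": f"issued to {agent_id}",          # access event is logged
    }

def token_is_valid(tok: dict) -> bool:
    """Expiry is checked on every use, so a stolen token ages out fast."""
    return time.time() < tok["expires_at"]
```

The key property is that interception yields a credential that is already near the end of its life, not a permanent key.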
How API Keys Work in Authentication and Authorization
API keys are the simplest form of API authentication, and understanding their role in the broader authentication landscape is important context for evaluating when they are appropriate, and when they are not.
Authentication via API key answers “is this a known caller?” The API looks up the key in its store and confirms it exists and is active. It does not verify who is making the request or whether the caller’s environment has been compromised.
Authorization via API key typically relies on per-key permission sets assigned at issuance time. Unlike OAuth 2.0 scopes, which can be dynamically requested and granted for specific operations, API key permissions are generally coarse and static.
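The two steps above reduce to a registry lookup followed by a coarse scope check. A minimal sketch, assuming hashed key storage; the key value and scope names are hypothetical:

```python
import hashlib

# Store only hashes of issued keys, each with the static permission
# set assigned at issuance time.
REGISTRY = {
    hashlib.sha256(b"demo-key-123").hexdigest(): {
        "active": True,
        "scopes": {"read:metrics"},
    },
}

def authenticate(api_key: str):
    """Answers 'is this a known, active key?', not 'who is calling?'."""
    entry = REGISTRY.get(hashlib.sha256(api_key.encode()).hexdigest())
    return entry if entry and entry["active"] else None

def authorize(entry: dict, required_scope: str) -> bool:
    """Permissions are fixed at issuance; nothing is granted dynamically."""
    return required_scope in entry["scopes"]
```

Note what is absent: no expiry check, no caller identity, no dynamic scoping. That absence is the structural weakness the rest of this guide addresses.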
How API keys compare to stronger authentication patterns:
| Credential Type | Expiry | Identity Binding | Revocability | Best For |
|---|---|---|---|---|
| API Key | None by default | Key only (bearer) | Manual revocation | Simple public APIs, low-sensitivity internal tools |
| JWT (JSON Web Token) | Built-in (exp claim) | Signed identity claims | Token expiry + revocation lists | Stateless service-to-service auth where expiry is enforced |
| OAuth 2.0 | Short-lived access tokens | User or client identity | Token revocation + refresh management | User-delegated access, third-party integrations |
| mTLS (Mutual TLS) | Certificate validity period | Cryptographic mutual auth | Certificate revocation (CRL/OCSP) | High-assurance service-to-service, zero-trust environments |
| Workload Identity / SPIFFE | Short-lived SVID | Cryptographic workload identity | Automatic expiry + rotation | Cloud-native, Kubernetes, AI agent environments |
For most modern service-to-service authentication scenarios, especially in cloud-native and AI agent environments, API keys are the weakest option. The migration path runs from API keys → short-lived JWTs → OAuth 2.0 client credentials → workload identity, depending on the sophistication of the environment and the sensitivity of the data being protected.
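To make the first step of that migration concrete, here is a standard-library sketch of an HS256-style signed token with an enforced exp claim. A real deployment would use a maintained JWT library and keep the signing key in a vault; the key value here is illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"shared-signing-key"  # illustration only; store real keys in a vault

def _b64(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def issue_token(subject: str, ttl: int = 300) -> str:
    """Mint header.payload.signature with a built-in expiry claim."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({"sub": subject, "exp": int(time.time()) + ttl}).encode())
    sig = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64(sig)}"

def verify(token: str):
    """Unlike an API key, expiry is enforced on every verification."""
    header, payload, sig = token.split(".")
    expected = hmac.new(SECRET, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _unb64(sig)):
        return None  # tampered
    claims = json.loads(_unb64(payload))
    return claims if claims["exp"] > time.time() else None  # expired -> None
```

The contrast with a static key is the two failure modes in verify: a stolen token stops working when exp passes, and any modification invalidates the signature.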
The Challenges of API Key Management at Scale
The operational reality of managing API keys across a growing technology estate is significantly harder than issuing them:
- Secrets sprawl: As applications and services multiply, API keys proliferate across environment variables, config files, CI/CD pipelines, Slack messages, and developer laptops. GitGuardian’s 2025 research found that 28% of credential exposure incidents in organisations originated entirely outside source code, in collaboration tools like Slack, Jira, and Confluence. If you are only scanning your code, you are missing more than a quarter of your exposure.
- Rotation difficulty: Every active API key needs to be rotated regularly. Each rotation requires updating every service or pipeline that uses the key, coordinating the cutover to avoid service interruption, and confirming the old key is revoked. At scale (hundreds of keys across dozens of services), this becomes an operational burden that leads teams to delay rotation, often indefinitely.
- Lack of fine-grained control: Many API keys carry broader permissions than required, a direct violation of least privilege. The consequence is that a single compromised key potentially exposes far more than the attacker needs, enabling lateral movement across services.
- Audit gaps: Native API key usage logs, where they exist, are often siloed in the issuing platform and not correlated with the organisation’s SIEM or central audit infrastructure. This makes it difficult to detect unusual access patterns, identify a compromised key in use, or produce a complete access trail for compliance.
- Non-human identity sprawl: Non-human identities (API keys, service accounts, automation tokens) now vastly outnumber human users in most organisations, yet they typically lack the lifecycle management discipline applied to human accounts. No automatic expiry, no regular review, no offboarding process when a service is decommissioned.
API Key Management Best Practices
These practices address the structural weaknesses of API keys for organisations that must use them. Each should be treated as a baseline, not a menu:
- ✅ Apply least privilege: Issue each API key only the permissions it needs for its specific function. Restrict to specific endpoints, operations, or data scopes. A key used to read analytics data should not have write access to your user database.
- ✅ Set expiry and rotate regularly: Set an expiry at issuance: 30 days for high-security environments, 90 days for standard. Automate rotation so that generating a new key, distributing it to dependent services, and revoking the old one happens without manual intervention. Event-driven rotation (on deployment, on configuration change, on anomaly detection) is more secure than calendar-based rotation alone.
- ✅ Store in a centralised secrets manager: Never store API keys in source code, config files committed to version control, or plaintext environment variables. Use a secrets management platform (Akeyless, HashiCorp Vault, AWS Secrets Manager, or equivalent) that provides encryption at rest, access control, audit logging, and automated rotation.
- ✅ Use separate keys per environment and service: One key per service per environment (development, staging, production). This limits blast radius if a key is compromised, allows targeted revocation without disrupting other services, and makes usage attribution unambiguous in audit logs.
- ✅ Monitor usage and alert on anomalies: Track request volume, error rates, geographic patterns, and usage timing for each key. One credential stuffing attack in 2024 was stopped within seven minutes because monitoring detected an 812% spike in requests from unfamiliar regions; without that monitoring, it would have run unchecked. Set automated alerts for unusual patterns and route them to on-call, not a shared mailbox.
- ✅ Scan continuously for exposure: Run secrets scanning on every commit (using tools like GitGuardian, truffleHog, or native GitHub Secret Protection) and extend scanning to Docker image layers, CI/CD logs, and collaboration tools. The 2026 GitGuardian report found that 18% of scanned Docker images contained secrets, 15% of which were still valid.
- ✅ Never use query parameters to pass API keys: Keys in URL query strings appear in server logs, browser history, and referrer headers. Use the x-api-key HTTP header or a request body parameter instead.
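The rotation practice above hinges on a coordinated cutover in which old and new keys briefly overlap. A minimal sketch of the sequence (issue, distribute, revoke); the class and method names are illustrative, and distribution is shown only as a comment:

```python
import secrets

class KeyRotator:
    """Sketch of zero-downtime rotation against an in-memory key store."""

    def __init__(self):
        self.active: set = set()

    def issue(self) -> str:
        key = secrets.token_urlsafe(24)
        self.active.add(key)
        return key

    def rotate(self, old_key: str) -> str:
        new_key = self.issue()          # 1. generate the replacement
        # 2. distribute new_key to every dependent service here
        #    (vault push, redeploy, config reload) while both keys work
        self.active.discard(old_key)    # 3. revoke the old key after cutover
        return new_key

    def is_valid(self, key: str) -> bool:
        return key in self.active
```

The overlap window in step 2 is what makes rotation safe to automate: no dependent service ever holds only a revoked key.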
API Key Management Tools
The tools for managing API keys fall into three categories: vault-based secrets managers, cloud-native services, and scanning and detection tools. Most production environments need at least one from each category.
| Tool | Category | Key Strengths | Limitations |
|---|---|---|---|
| Akeyless | Vault / Secrets Manager | SaaS, zero-knowledge (DFC), dynamic secrets, auto-rotation, audit trails, no infrastructure to operate | SaaS model; requires network access to Akeyless GW for private deployments |
| HashiCorp Vault | Vault / Secrets Manager | Open-source, highly flexible, strong community, dynamic secrets, policy engine | Requires infrastructure to operate and maintain; operational complexity at scale |
| AWS Secrets Manager | Cloud-Native | Native AWS integration, automated rotation, tight IAM binding | AWS-only; per-secret pricing increases with scale; not provider-agnostic |
| GCP Secret Manager | Cloud-Native | Native GCP integration, IAM-based access control, versioning | GCP-only; lacks dynamic secret generation |
| GitGuardian | Scanning / Detection | Industry-leading secrets detection, GitHub integration, 400+ credential detectors | Detection only; does not replace a secrets manager; reactive rather than preventive |
| truffleHog | Scanning / Detection | Open-source, pre-commit hooks, CI/CD integration | Detection only; tuning required to reduce false positives |
For most organisations, the right architecture combines a centralised vault (Akeyless or equivalent) for storage and issuance, plus a scanning tool layered into the CI/CD pipeline and pre-commit hooks for continuous detection of accidental exposure. Cloud-native services are reasonable for single-cloud environments but introduce lock-in and per-secret cost structures that scale poorly.
Moving Beyond API Keys: Secrets Management Alternatives
For service-to-service authentication in cloud-native and AI-agent environments, the goal should not be to manage API keys better; it should be to eliminate them. Three patterns achieve this:
- Dynamic secrets: Instead of a static, long-lived API key, the secrets manager generates a short-lived, unique credential on demand, valid for the duration of a single session or operation, then automatically expired. An attacker who intercepts a dynamic secret gets a credential that is already invalid by the time they try to use it. Akeyless supports dynamic secrets for databases, cloud providers, and custom endpoints via its cryptographic key management infrastructure.
- OAuth 2.0 Client Credentials Flow: Designed for machine-to-machine authentication, the client credentials flow issues short-lived access tokens scoped to specific resources. The client authenticates with a client ID and secret (which can itself be short-lived and stored in a vault), receives a time-limited token, and uses it for the duration of the session. No static API key ever touches the application environment at runtime.
- Workload Identity (SPIFFE/SPIRE): The most secure pattern for cloud-native and AI agent environments. SPIFFE (Secure Production Identity Framework For Everyone) assigns cryptographic identities to workloads (pods, containers, functions), and SPIRE manages the issuance and rotation of short-lived X.509 certificates proving that identity. Applications authenticate using the certificate, not a stored secret. There is no API key to steal because there is no API key.
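The client credentials pattern above can be sketched end to end in a few lines. This is an in-process simulation of both sides of the flow; in production the token endpoint is a real authorization server, and the client ID, secret, and scope names here are illustrative:

```python
import secrets
import time

CLIENTS = {"svc-reporting": "s3cret"}  # registered machine clients (illustrative)
ISSUED: dict = {}                      # token -> claims, held by the auth server

def token_endpoint(client_id: str, client_secret: str, scope: str):
    """grant_type=client_credentials: authenticate the client, mint a token."""
    if CLIENTS.get(client_id) != client_secret:
        return None  # unknown or mis-authenticated client
    token = secrets.token_urlsafe(32)
    ISSUED[token] = {"scope": scope, "exp": time.time() + 600}
    return {"access_token": token, "token_type": "Bearer", "expires_in": 600}

def resource_accepts(token: str, required_scope: str) -> bool:
    """The resource server checks expiry and scope on every call."""
    claims = ISSUED.get(token)
    return bool(claims and claims["exp"] > time.time() and claims["scope"] == required_scope)
```

Note what the application holds at runtime: a token that dies in ten minutes and works for one scope, rather than a permanent key with whatever permissions it was issued with.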
Google Cloud’s own guidance is instructive: “plan to migrate to more secure alternatives such as IAM policies and short-lived service account credentials, following least-privilege security practices.” Even Google recommends moving away from API keys in production environments.
How Akeyless Replaces and Secures API Keys
Akeyless is built on the premise that the best API key is one that does not exist: it is replaced by a dynamic, short-lived credential generated on demand, used once, and automatically expired. Where static keys are unavoidable, Akeyless stores, rotates, and audits them with zero-knowledge encryption so that even Akeyless cannot read the values it manages.
- Dynamic secrets: Akeyless generates on-demand, time-limited credentials for databases, cloud providers, and services. The credential is valid for the duration of the session and then automatically revoked. Applications request a credential from Akeyless at runtime and never hold a static key.
- Zero-knowledge architecture (DFC): Secrets stored in Akeyless are protected using Distributed Fragments Cryptography. Encryption keys are split into fragments held in different locations and never assembled in any single location, including inside Akeyless. A breach of any single fragment yields nothing usable.
- Automated rotation: Akeyless supports automated rotation policies for static secrets that cannot yet be replaced with dynamic alternatives. Rotation is triggered on schedule or by event (deployment, anomaly detection, compliance requirement), without manual intervention.
- Unified audit trail: Every credential issuance, access event, and rotation generates an immutable audit log. For organisations under PCI DSS, SOC 2, or ISO 27001 scope, this is the compliance paper trail that ad-hoc API key management cannot produce.
- AI agent and non-human identity support: Akeyless’s workload identity integration means AI agents authenticate via platform identity, AWS IAM, GCP workload identity, Kubernetes service accounts, rather than holding static API keys. The agent never has a credential to expose.
Conclusion
API keys are a legacy authentication pattern that the industry has not yet fully outgrown. They are simple, which is why they proliferated; they are static bearer credentials with no expiry, no identity binding, and no native rotation mechanism, which is why they are consistently the entry point for breaches that should have been preventable.
The trajectory is clear: effective API key management requires moving from passive storage to active lifecycle control, least privilege, automated rotation, centralised auditing, and continuous scanning. And for every new service or AI agent being built today, the more durable question is whether API keys are needed at all, or whether dynamic credentials and workload identity can eliminate the problem at the root.
Akeyless offers both paths: better management for the static keys that exist today, and a migration route to dynamic secrets for the systems being built tomorrow.
| Replace static API keys with Akeyless dynamic secrets. Akeyless generates short-lived, on-demand credentials for every service: no static keys to rotate, lose, or get breached. Zero-knowledge encryption means even Akeyless cannot read what it stores. Book a custom demo or start free today. |
Frequently Asked Questions
What is the safest way to store API keys?
Never store API keys in source code, config files committed to version control, or plaintext environment variables. Use a centralised secrets manager (Akeyless, HashiCorp Vault, or a cloud-native equivalent) that provides encryption at rest, access-controlled retrieval, and audit logging. At runtime, applications should request credentials from the vault rather than reading them from a static location. If you must use environment variables as an injection mechanism, ensure they are sourced from a vault at deploy time, not committed anywhere.
How often should API keys be rotated?
Rotate every 30 days for high-security environments, every 90 days for standard. Those are calendar-based minimums. The more durable rule is event-driven rotation: rotate immediately on any suspected compromise, on each deployment, on configuration changes affecting the key’s scope, or when anomaly monitoring surfaces unusual usage. Automated rotation that coordinates the cutover between old and new keys across all dependent services is significantly safer than manual rotation, which gets skipped under operational pressure.
Why are API keys especially dangerous for AI agents?
AI agents make more API calls across more services than traditional applications, passing keys through more systems and generating more log entries where keys can appear in plaintext. They are also vulnerable to prompt injection: a manipulated input can instruct an agent to reveal credentials from its context window or environment. The deeper problem is architectural: AI agents with embedded static API keys are a large, mobile attack surface. The solution is not better key hygiene but credential elimination: workload identity, OAuth 2.0 client credentials, or dynamic secrets that agents request at runtime and that expire immediately after use.
What are the best API key management tools available?
The answer depends on your use case. For centralised storage, rotation, and dynamic secrets, Akeyless (SaaS, zero-knowledge) and HashiCorp Vault (self-hosted, open-source) are the leading options. AWS Secrets Manager and GCP Secret Manager are strong choices for single-cloud environments. For scanning and detection (finding keys that have already leaked), GitGuardian and truffleHog are the most widely adopted tools. Most production environments need both: a vault for active management and a scanner for continuous detection.
What should I use instead of API keys for service-to-service authentication?
Prefer dynamic credentials and workload identity wherever possible. For short-term migration, OAuth 2.0 client credentials flow provides short-lived access tokens with defined scopes. For cloud-native environments, managed workload identity (AWS IAM roles for service accounts, GCP Workload Identity, Azure Managed Identity) eliminates static credentials entirely; the platform handles authentication. For the most rigorous zero-trust environments, SPIFFE/SPIRE assigns cryptographic identities to every workload, with certificates rotated automatically and secrets never stored anywhere.