
API gateway authentication is the practice of verifying client identity at a centralized entry point before requests reach backend services. By enforcing authentication at the gateway layer, organizations eliminate redundant auth logic across services, reduce attack surface, and gain a single enforcement point for access policies.

What is API Gateway Authentication#

In a distributed architecture, every service that exposes an endpoint must answer a fundamental question: who is making this request? Without a gateway, each service independently implements its own authentication stack. This leads to inconsistent enforcement, duplicated code, and a broader attack surface.

An API gateway centralizes this concern. It intercepts every inbound request, validates credentials against a configured identity provider or local store, and either forwards the authenticated request downstream or rejects it immediately. Broken authentication consistently ranks among the top API vulnerability categories, making centralized enforcement critical.

Centralizing authentication at the gateway layer provides three key advantages. First, it significantly reduces per-service authentication code by consolidating auth logic into a single component. Second, it creates a single audit log for every authentication event. Third, it enables credential rotation and policy changes without redeploying individual services.

Authentication Methods#

Key Auth#

Key authentication is the simplest method. The client includes a static API key in a header or query parameter. The gateway validates the key against a stored registry and maps it to a consumer identity.

Key Auth works well for server-to-server communication where transport security (TLS) is guaranteed and the client population is small. API keys remain common for machine-to-machine authentication, though their share is declining as organizations move toward token-based methods.

Apache APISIX supports Key Auth natively through its key-auth plugin. Configuration requires only defining a consumer and attaching the plugin to a route.
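Conceptually, the gateway-side check is a registry lookup. The sketch below is plain Python with an illustrative key registry, not APISIX internals; `apikey` is the header name the key-auth plugin reads by default:

```python
# Minimal sketch of gateway-side key authentication: map a static API key
# from the request headers to a named consumer, or reject the request.
# The registry contents here are illustrative.

API_KEY_REGISTRY = {
    "k-2f9a81c0": "billing-service",
    "k-77d0e4b2": "partner-acme",
}

def authenticate_key(headers: dict) -> str:
    key = headers.get("apikey")  # default header name in APISIX's key-auth
    if key is None:
        raise PermissionError("missing API key")
    consumer = API_KEY_REGISTRY.get(key)
    if consumer is None:
        raise PermissionError("unknown API key")
    return consumer  # downstream policies see the mapped consumer identity
```

The important property is the mapping step: the key is never passed through as-is, it resolves to a consumer identity that rate limiting and logging can reference.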

JWT (JSON Web Tokens)#

JWT authentication uses digitally signed tokens that carry claims about the client. The gateway validates the token signature, checks expiration, and optionally verifies audience and issuer claims. Because JWTs are self-contained, the gateway does not need to call an external service on every request.

JWTs dominate modern API authentication. The compact format and stateless verification make JWTs particularly well-suited for high-throughput gateways where microsecond-level latency matters.

APISIX implements JWT validation through its jwt-auth plugin, supporting both HS256 and RS256 algorithms with configurable claim validation.
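A minimal, stdlib-only sketch of the validation step a gateway performs for an HS256 token — signature check plus expiry — assuming a shared secret. Production systems should use a maintained JWT library rather than hand-rolled parsing:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url_decode(part: str) -> bytes:
    # JWT segments are base64url without padding; restore it before decoding.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def verify_hs256(token: str, secret: bytes) -> dict:
    """Verify an HS256 JWT's signature and expiry, then return its claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    signing_input = f"{header_b64}.{payload_b64}".encode()
    expected = hmac.new(secret, signing_input, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise PermissionError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if claims.get("exp", 0) < time.time():
        raise PermissionError("token expired")
    return claims
```

Because everything needed for verification is in the token and the shared secret, no network call is made — this is the stateless property that keeps per-request latency low.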

OAuth 2.0#

OAuth 2.0 is an authorization framework that enables third-party applications to obtain limited access to an API on behalf of a resource owner. The gateway validates bearer tokens issued by an authorization server, typically by introspecting the token or verifying a JWT access token locally.

OAuth 2.0 is widely adopted across enterprises for API integrations. The framework's delegation model makes it essential for any API exposed to external developers or partner ecosystems.
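For opaque access tokens, the gateway typically calls the authorization server's token introspection endpoint (RFC 7662). A sketch of building that call with the standard library — the endpoint URL and client credentials below are placeholders:

```python
import base64
import urllib.parse
import urllib.request

def build_introspection_request(introspect_url: str, token: str,
                                client_id: str, client_secret: str):
    """Build an RFC 7662 token introspection request.

    The gateway POSTs the opaque bearer token as a form field and
    authenticates itself to the authorization server with HTTP Basic
    credentials. A live gateway would send this request and accept the
    original API call only if the JSON response reports "active": true.
    """
    body = urllib.parse.urlencode({"token": token}).encode()
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    req = urllib.request.Request(introspect_url, data=body, method="POST")
    req.add_header("Authorization", f"Basic {creds}")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    return req
```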

OpenID Connect (OIDC)#

OpenID Connect extends OAuth 2.0 with a standardized identity layer. It adds an ID token (a JWT) that carries user identity claims alongside the OAuth 2.0 access token. The gateway can validate the ID token to confirm user identity and use the access token for authorization decisions.

OIDC is the de facto standard for single sign-on in API ecosystems. Major identity providers including Okta, Auth0, Azure AD, and Google Identity all implement OIDC. APISIX provides native OIDC support through its openid-connect plugin, which handles the full authorization code flow, token introspection, and token refresh.

mTLS (Mutual TLS)#

Mutual TLS requires both the client and server to present certificates during the TLS handshake. The gateway validates the client certificate against a trusted certificate authority, establishing strong machine identity without application-layer tokens.

mTLS adoption has surged alongside zero-trust architecture initiatives. In Kubernetes environments, mTLS between services has become increasingly common. At the gateway level, mTLS is particularly valuable for B2B integrations and internal service-to-service communication where certificate management infrastructure already exists.

HMAC Authentication#

HMAC authentication requires the client to compute a hash-based message authentication code over the request content using a shared secret. The gateway independently computes the same HMAC and compares the results. This method provides request integrity verification in addition to authentication.

HMAC is common in financial APIs and webhook verification scenarios where request tampering must be detected. AWS Signature Version 4, used across all AWS API calls, is an HMAC-based scheme processing billions of requests daily.
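The verification flow can be sketched with Python's standard library. The canonical string below is a simplified stand-in for real schemes such as AWS SigV4, which also cover headers and a timestamp to prevent replay:

```python
import hashlib
import hmac

def sign_request(secret: bytes, method: str, path: str, body: bytes) -> str:
    # Canonical string: method, path, and a hash of the body, newline-joined.
    # Any tampering with these inputs changes the resulting signature.
    canonical = "\n".join([method, path, hashlib.sha256(body).hexdigest()])
    return hmac.new(secret, canonical.encode(), hashlib.sha256).hexdigest()

def verify_request(secret: bytes, method: str, path: str,
                   body: bytes, signature: str) -> bool:
    # The gateway recomputes the HMAC from the received request and
    # compares in constant time to avoid timing side channels.
    expected = sign_request(secret, method, path, body)
    return hmac.compare_digest(expected, signature)
```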

Comparison Table#

| Method | Complexity | Statefulness | Best For | Token Expiry |
| --- | --- | --- | --- | --- |
| Key Auth | Low | Stateless (lookup) | Internal services, simple integrations | Manual rotation |
| JWT | Medium | Stateless | High-throughput APIs, mobile clients | Built-in (exp claim) |
| OAuth 2.0 | High | Stateful (auth server) | Third-party access, delegated auth | Access token TTL |
| OIDC | High | Stateful (identity provider) | SSO, user-facing APIs | ID + access token TTL |
| mTLS | High | Stateless (cert validation) | Zero-trust, B2B, service mesh | Certificate validity period |
| HMAC | Medium | Stateless | Financial APIs, webhook verification | Per-key rotation policy |

Best Practices#

Layer your authentication. Use mTLS at the transport layer for service identity and JWT or OAuth 2.0 at the application layer for user identity. Defense in depth reduces the impact of any single credential compromise.

Enforce short-lived tokens. Set JWT and OAuth 2.0 access token lifetimes to 15 minutes or less for user-facing flows. Use refresh tokens to obtain new access tokens without re-authentication. Short token lifetimes limit the window of exploitation if a token is leaked.

Centralize consumer management. Define consumers at the gateway level with consistent identity attributes. Map every API key, JWT subject, and OAuth 2.0 client ID to a named consumer entity. This enables unified rate limiting, logging, and access control across authentication methods.

Validate all claims. Do not trust a JWT solely because its signature is valid. Verify the issuer (iss), audience (aud), expiration (exp), and not-before (nbf) claims. Reject tokens with unexpected or missing claims.
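A sketch of those claim checks, assuming signature verification has already happened. The issuer and audience values are deployment-specific:

```python
import time

def validate_claims(claims: dict, expected_iss: str, expected_aud: str) -> None:
    """Reject a JWT whose standard claims are missing or unexpected.

    Run this after signature verification; a valid signature alone only
    proves who minted the token, not that it was minted for this API.
    """
    now = time.time()
    if claims.get("iss") != expected_iss:
        raise PermissionError("unexpected issuer")
    aud = claims.get("aud")
    # "aud" may be a single string or a list of audiences.
    audiences = aud if isinstance(aud, list) else [aud]
    if expected_aud not in audiences:
        raise PermissionError("token not intended for this audience")
    if "exp" not in claims or claims["exp"] < now:
        raise PermissionError("token expired or missing exp")
    if claims.get("nbf", 0) > now:
        raise PermissionError("token not yet valid")
```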

Log authentication events comprehensively. Record every authentication success and failure with client identity, timestamp, source IP, and the route accessed. These logs are essential for incident response and compliance audits. NIST SP 800-92 provides guidance on security log management and retention, and many compliance frameworks require keeping authentication logs for 90 days or more.

How Apache APISIX Handles Authentication#

Apache APISIX provides a plugin-based authentication architecture that supports all six methods described above. Each authentication plugin runs in the gateway's request processing pipeline before the request reaches any upstream service.

APISIX's consumer abstraction ties authentication credentials to named entities. A single consumer can have multiple authentication methods attached, enabling gradual migration between methods. For example, an organization migrating from Key Auth to JWT can configure both plugins on the same consumer during the transition period.

Key plugins include:

  • key-auth: Static API key validation with header or query parameter extraction.
  • jwt-auth: JWT signature verification with configurable algorithms and claim validation.
  • openid-connect: Full OIDC flow support including authorization code, token introspection, and PKCE.

APISIX also supports chaining authentication plugins with authorization plugins such as consumer-restriction and OPA (Open Policy Agent), enabling fine-grained access control decisions after identity is established.

Performance benchmarks show APISIX processing authenticated requests with sub-millisecond overhead for Key Auth and JWT validation, and under 5ms for OIDC token introspection with a local identity provider. These numbers hold at sustained loads exceeding 10,000 requests per second on modest hardware.

FAQ#

Should I use JWT or OAuth 2.0 for my API?#

JWT and OAuth 2.0 are not mutually exclusive. OAuth 2.0 is an authorization framework that often uses JWTs as its access token format. If your API serves first-party clients only, standalone JWT authentication may suffice. If third-party developers need delegated access, implement the full OAuth 2.0 framework with JWT access tokens.

Is API key authentication secure enough for production?#

API key authentication is secure for server-to-server communication over TLS when keys are rotated regularly and scoped to specific consumers. It is not recommended for client-side applications (browsers, mobile apps) because keys cannot be kept secret on end-user devices. For any client-facing API, prefer OAuth 2.0 or OIDC.

How does mTLS differ from standard TLS at the gateway?#

Standard TLS authenticates only the server to the client. The client verifies the server's certificate, but the server accepts any client connection. mTLS adds a second handshake step where the client also presents a certificate that the server validates against a trusted CA. This provides strong machine identity for both parties and is a foundational component of zero-trust network architectures.

Can I combine multiple authentication methods on a single route?#

Yes. Apache APISIX supports attaching multiple authentication plugins to a single route; with its multi-auth plugin, the gateway attempts each configured method in order and accepts the request as soon as one succeeds. This is useful during migration periods or when a route serves clients with different authentication capabilities.
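The try-each-method-in-order behavior can be sketched as a chain of authenticators. The two toy methods below are illustrative stand-ins for gateway plugins, not their actual implementations:

```python
def key_auth(request: dict) -> str:
    # Toy stand-in for a key-auth check.
    if request.get("headers", {}).get("apikey") == "k-123":
        return "consumer-legacy"
    raise PermissionError("key auth failed")

def jwt_auth(request: dict) -> str:
    # Toy stand-in for a jwt-auth check; a real one verifies the token.
    auth = request.get("headers", {}).get("authorization", "")
    if auth.startswith("Bearer "):
        return "consumer-migrated"
    raise PermissionError("jwt auth failed")

def authenticate(request: dict, methods) -> str:
    # Try each configured method in order; accept on the first success.
    for method in methods:
        try:
            return method(request)
        except PermissionError:
            continue
    raise PermissionError("no configured method accepted the request")
```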


API gateway security is the practice of protecting your API infrastructure at the edge by enforcing authentication, authorization, rate limiting, and traffic filtering before requests reach backend services. A properly secured gateway reduces attack surface, prevents data breaches, and ensures compliance across every API endpoint in your organization.

Why API Gateway Security Matters#

APIs have become the primary attack vector for modern applications. According to the OWASP API Security Top 10 (2023 edition), broken object-level authorization and broken authentication remain the two most critical API vulnerabilities, affecting organizations across every industry. The explosive growth of API-first architectures has created an equally explosive growth in API-targeted attacks.

The cost of getting API security wrong is substantial, as breaches involving API vulnerabilities tend to take longer to identify and contain and carry significant financial impact. The API gateway sits at a unique vantage point: it processes every inbound request, making it the single most effective location to enforce security policies consistently.

Common API Threats#

Understanding the threat landscape is essential for building an effective defense. The following categories represent the most frequent and damaging attack patterns targeting APIs today.

Broken Object-Level Authorization (BOLA)#

BOLA attacks exploit weak authorization checks to access resources belonging to other users. An attacker modifies object identifiers in API requests (for example, changing /users/123/orders to /users/456/orders) to retrieve unauthorized data. BOLA remains one of the most exploited API vulnerability classes, particularly in organizations where API management and authorization enforcement have not kept pace with API proliferation.

Injection Attacks#

SQL injection, NoSQL injection, and command injection remain persistent threats. Attackers embed malicious payloads in query parameters, headers, or request bodies. Despite being a well-known vulnerability class, injection attacks continue to appear frequently in web application security assessments.

Broken Authentication#

Weak or improperly implemented authentication mechanisms allow attackers to assume legitimate user identities. Common failures include missing token validation, weak password policies, credential stuffing vulnerabilities, and improper session management. Credential stuffing attacks account for billions of login attempts monthly across the internet.

Excessive Data Exposure#

APIs frequently return more data than the client needs, relying on the frontend to filter sensitive fields. Attackers bypass the frontend and consume raw API responses directly, gaining access to data never intended for display. This over-exposure is especially dangerous in mobile applications where API traffic is easily intercepted.

Rate Limit Bypass#

Without proper rate limiting, attackers can launch brute-force attacks, denial-of-service campaigns, and credential enumeration at scale. Automated bot traffic constitutes a significant portion of all internet traffic, and much of it targets API endpoints specifically.

Security Layers at the Gateway#

A defense-in-depth approach applies multiple security controls at the gateway layer, each addressing a distinct category of risk.

Authentication#

The gateway should verify identity before any request reaches a backend service. Common mechanisms include JWT validation, OAuth 2.0 token introspection, API key verification, and mutual TLS (mTLS) for service-to-service communication. Centralizing authentication at the gateway eliminates the risk of inconsistent enforcement across individual services.

Authorization#

Beyond verifying identity, the gateway must enforce access control. Role-based access control (RBAC), attribute-based access control (ABAC), and scope-based authorization ensure that authenticated users can only access resources and operations they are permitted to use. Fine-grained authorization at the gateway prevents BOLA vulnerabilities at scale.

Rate Limiting and Throttling#

Rate limiting protects backend services from abuse and ensures fair resource allocation. Effective rate limiting operates at multiple granularities: per consumer, per route, per IP address, and globally. A substantial share of traffic on the average website comes from bots, and rate limiting is the first line of defense against automated abuse.
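A single-node sketch of the sliding-window approach; a real gateway cluster would back the counters with shared state such as Redis so limits hold across instances:

```python
import collections
import time

class SlidingWindowLimiter:
    """In-memory sliding-window rate limiter, keyed per consumer/route/IP."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.hits = collections.defaultdict(collections.deque)

    def allow(self, key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have aged out of the window.
        while q and q[0] <= now - self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over limit: the gateway would return 429 here
        q.append(now)
        return True
```

Unlike a fixed window, this never permits a double-sized burst straddling a window boundary, because the window slides continuously with each request.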

IP Restriction#

IP allowlists and denylists provide coarse-grained access control. While not sufficient as a sole security measure, IP restriction is valuable for restricting administrative endpoints, limiting partner API access to known address ranges, and blocking traffic from regions associated with attack activity.
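Conceptually, the check is a membership test against configured networks. The ranges below are illustrative:

```python
import ipaddress

# Illustrative allowlist: an internal range plus one known partner address.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.7/32"),
]

def ip_allowed(client_ip: str) -> bool:
    # CIDR containment check against every configured network.
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)
```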

WAF and CORS#

A Web Application Firewall (WAF) at the gateway layer inspects request payloads for known attack patterns. CORS policies prevent unauthorized cross-origin requests from browser-based clients. Together, they address both server-side injection attacks and client-side cross-origin abuse.

TLS Termination#

TLS termination at the gateway ensures that all client-to-gateway traffic is encrypted. The gateway handles certificate management, cipher suite configuration, and protocol version enforcement, relieving backend services of this operational burden. The vast majority of web traffic now uses HTTPS, and TLS is considered a baseline requirement for any production API.

Request Validation#

Schema-based request validation rejects malformed or oversized payloads before they reach backend services. Validating request structure, data types, and content length at the gateway prevents injection attacks and reduces the attack surface of downstream services.
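A hand-rolled sketch of shape validation; gateways usually express this as a JSON Schema, and the field names and limits here are illustrative:

```python
def validate_payload(payload: dict, max_fields: int = 20) -> None:
    """Reject payloads that don't match the expected shape.

    Checks size, unexpected fields, missing fields, and types --
    all before the request touches any backend service.
    """
    schema = {"user_id": int, "amount": float, "currency": str}
    if len(payload) > max_fields:
        raise ValueError("payload too large")
    unexpected = set(payload) - set(schema)
    if unexpected:
        raise ValueError(f"unexpected fields: {sorted(unexpected)}")
    for field, expected_type in schema.items():
        if field not in payload:
            raise ValueError(f"missing field: {field}")
        if not isinstance(payload[field], expected_type):
            raise ValueError(f"wrong type for {field}")
```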

Zero-Trust API Architecture#

Zero-trust architecture assumes that no request is inherently trustworthy, regardless of its origin. Every API call must be authenticated, authorized, and validated, whether it arrives from the public internet, an internal service, or a trusted partner.

At the gateway layer, zero-trust principles translate into several concrete practices:

  • Every request carries verifiable identity credentials.
  • Authorization is evaluated per request rather than per session.
  • Network location (internal vs. external) does not confer implicit trust.
  • All traffic is encrypted, including east-west service-to-service communication.

The API gateway enables zero-trust by serving as a policy enforcement point. It validates tokens, checks permissions, and applies security policies uniformly across all traffic, creating a consistent security boundary regardless of the underlying network topology.

Security Best Practices#

The following practices represent a comprehensive approach to API gateway security that organizations should adopt incrementally based on risk profile.

  1. Enforce authentication on every endpoint. No API route should be accessible without verified identity. Use JWTs with short expiration times and validate signatures on every request.

  2. Implement least-privilege authorization. Grant the minimum permissions required for each consumer. Default to deny and require explicit grants for sensitive operations.

  3. Apply rate limiting at multiple levels. Configure per-consumer, per-route, and global rate limits. Use sliding window algorithms to prevent burst abuse while accommodating legitimate traffic spikes.

  4. Validate all request inputs. Enforce request schema validation at the gateway. Reject payloads that exceed expected sizes, contain unexpected fields, or fail type checks.

  5. Use mutual TLS for service-to-service calls. Encrypt and authenticate all internal traffic. Rotate certificates automatically and enforce certificate validation on every connection.

  6. Enable WAF rules for known attack patterns. Deploy rulesets targeting SQL injection, XSS, and command injection. Update rules regularly to address emerging attack vectors.

  7. Log and audit all security events. Capture authentication failures, authorization denials, rate limit triggers, and WAF blocks. Feed security logs into a SIEM for correlation and alerting.

  8. Rotate credentials and secrets regularly. Automate API key rotation, certificate renewal, and token signing key rotation. Never embed secrets in client-side code or version control.

  9. Restrict administrative API access. Protect management APIs with strong authentication, IP restrictions, and separate credentials from data-plane APIs.

  10. Conduct regular security assessments. Perform API-specific penetration testing, not just general web application assessments. The OWASP API Security Testing Guide provides a structured methodology.

How Apache APISIX Secures APIs#

Apache APISIX provides a comprehensive set of security plugins that implement each layer of the defense-in-depth model described above.

For IP-based access control, the ip-restriction plugin supports allowlists and denylists at the route level, enabling fine-grained control over which addresses can reach specific endpoints.

Cross-origin resource sharing is managed through the CORS plugin, which configures allowed origins, methods, and headers to prevent unauthorized cross-origin requests from browser clients.

CSRF protection is available through the CSRF plugin, which generates and validates CSRF tokens to prevent cross-site request forgery attacks on state-changing API operations.

For mutual TLS, APISIX supports mTLS configuration for both client-to-gateway and gateway-to-upstream connections, ensuring encrypted and mutually authenticated communication at every hop.

APISIX also supports JWT authentication, key authentication, OpenID Connect, rate limiting with multiple algorithms, and request body validation. The plugin architecture enables security policies to be composed per route, allowing teams to apply exactly the controls each endpoint requires without over- or under-securing traffic.

FAQ#

What is the difference between API gateway security and API security?#

API security is the broad discipline of protecting APIs across their entire lifecycle, including design, development, testing, and runtime. API gateway security specifically refers to the security controls enforced at the gateway layer during runtime, such as authentication, rate limiting, and input validation. The gateway is one component of a comprehensive API security strategy, not a replacement for secure coding practices and security testing.

Should I terminate TLS at the API gateway or at the backend service?#

Terminate TLS at the gateway for client-facing connections. This centralizes certificate management and offloads cryptographic processing from backend services. For traffic between the gateway and upstream services, use mTLS to maintain encryption and mutual authentication throughout the request path. This approach balances operational simplicity with end-to-end security.

How many rate limiting layers should an API gateway enforce?#

Apply at least three layers: a global rate limit to protect overall infrastructure capacity, a per-consumer limit to prevent any single client from monopolizing resources, and per-route limits for endpoints with expensive backend operations. Use sliding window or leaky bucket algorithms rather than fixed windows to provide smoother throttling behavior and prevent burst abuse at window boundaries.


Mutual TLS (mTLS) is a security protocol where both the client and server authenticate each other using X.509 certificates during the TLS handshake. Unlike standard TLS, which only verifies the server's identity, mTLS ensures that both parties prove they are who they claim to be before any application data is exchanged.

Why Mutual TLS Matters#

Standard TLS already protects the overwhelming majority of web traffic. But it solves only half the authentication problem: clients verify that the server holds a valid certificate, while servers get no cryptographic assurance about the client's identity and must rely on application-layer mechanisms like API keys, tokens, or passwords instead.

This gap becomes critical in zero-trust architectures, service-to-service communication, and regulated environments where network-level identity verification is required. mTLS closes this gap by making identity verification bilateral and cryptographic.

mTLS vs Standard TLS#

| Aspect | Standard TLS | Mutual TLS (mTLS) |
| --- | --- | --- |
| Server authenticated | Yes | Yes |
| Client authenticated | No (application layer) | Yes (certificate) |
| Client certificate required | No | Yes |
| Certificate management complexity | Low | High |
| Typical use case | Public websites, APIs | Internal services, zero-trust, IoT |
| Identity assurance level | Server only | Both endpoints |
| Performance overhead | Baseline | ~5-10% additional handshake time |
| Common in browsers | Yes | Rare (except enterprise) |

mTLS has become the predominant service-to-service authentication mechanism in zero-trust network access (ZTNA) implementations, reflecting growing recognition that network perimeter-based security is insufficient for distributed architectures.

How the mTLS Handshake Works#

The mTLS handshake extends the standard TLS 1.3 handshake with additional steps for client certificate exchange. Here is the full sequence:

Step 1: Client Hello. The client initiates the connection by sending supported cipher suites, TLS version, and a random value to the server. This step is identical to standard TLS.

Step 2: Server Hello and Server Certificate. The server responds with its chosen cipher suite, its own random value, and its X.509 certificate. The server also sends a CertificateRequest message, signaling that the client must present a certificate. In standard TLS, this CertificateRequest is absent.

Step 3: Client Verifies Server Certificate. The client validates the server's certificate against its trust store, checking the certificate chain, expiration, revocation status (via CRL or OCSP), and that the subject matches the expected server identity.

Step 4: Client Certificate Submission. The client sends its own X.509 certificate to the server along with a CertificateVerify message containing a digital signature over the handshake transcript, proving possession of the private key corresponding to the certificate.

Step 5: Server Verifies Client Certificate. The server validates the client certificate against its configured Certificate Authority (CA) trust store, checks the certificate chain, verifies the CertificateVerify signature, and optionally checks revocation status. If verification fails, the server terminates the connection immediately.

Step 6: Secure Channel Established. Both parties derive session keys from the shared secret. All subsequent communication is encrypted and authenticated in both directions.

The entire handshake adds approximately 1-2 milliseconds of latency compared to standard TLS, depending on certificate chain depth and revocation checking methods.
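In code, the difference between a standard TLS server and an mTLS server is essentially one setting. A sketch using Python's ssl module — the certificate file paths are deployment-specific and omitted here:

```python
import ssl

def make_mtls_server_context(ca_file=None, cert_file=None, key_file=None):
    """TLS server context that demands and verifies a client certificate.

    Setting verify_mode to CERT_REQUIRED is what turns a standard TLS
    server into an mTLS server: the handshake fails if the client
    presents no certificate, or one not signed by the trusted CA.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.verify_mode = ssl.CERT_REQUIRED          # require a client certificate
    if ca_file:
        ctx.load_verify_locations(cafile=ca_file)  # CA trusted for clients
    if cert_file and key_file:
        ctx.load_cert_chain(cert_file, key_file)   # server's own identity
    return ctx
```

The client side mirrors this: it loads its own certificate and key with `load_cert_chain` and trusts the server's CA, completing the bilateral verification described in steps 3-5.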

Use Cases for Mutual TLS#

Zero-Trust Architecture#

Zero-trust security models operate on the principle of "never trust, always verify." Every service must authenticate cryptographically before communicating, regardless of network location. mTLS provides the transport-layer foundation for this model. The industry trend is strongly toward zero-trust for new network access deployments, with mTLS as the predominant service identity mechanism.

Microservices Communication#

In microservices architectures, dozens or hundreds of services communicate over internal networks. Without mTLS, a compromised service can impersonate any other service on the network. mTLS ensures that Service A can only communicate with Service B if both hold certificates signed by a trusted CA. Service meshes like Istio and Linkerd automate mTLS certificate issuance and rotation for every service pod, making deployment tractable at scale.

IoT Device Authentication#

IoT devices operate in physically untrusted environments where API keys or passwords can be extracted from device firmware. mTLS binds device identity to a hardware-backed certificate, making impersonation significantly harder. Certificate-based authentication is widely adopted across IoT devices, with mTLS adoption growing rapidly in industrial and healthcare IoT deployments.

API Security and Partner Integration#

APIs exposed to partners or regulated industries often require stronger authentication than API keys provide. mTLS ensures that only clients holding a certificate issued by the API provider's CA can establish a connection, providing defense-in-depth before any application-layer authentication occurs. Financial services APIs governed by Open Banking regulations in the EU, UK, and Australia mandate mTLS for third-party provider connections.

Challenges of Implementing mTLS#

Certificate Lifecycle Management#

Every client and server in an mTLS deployment needs a valid certificate. For an organization running 500 microservices with 3 replicas each, that means managing 1,500 certificates with their own issuance, renewal, and revocation cycles. Without automation, this becomes operationally unsustainable. Tools like cert-manager (for Kubernetes), HashiCorp Vault, and SPIFFE/SPIRE address this by automating certificate lifecycle operations.

Certificate-related outages are common in organizations managing large certificate inventories, and remediation can be costly. Automated rotation is not optional for production mTLS deployments.

Certificate Rotation#

Short-lived certificates (hours or days) reduce the blast radius of a compromised key but increase rotation frequency. Long-lived certificates (months or years) reduce operational churn but increase exposure time if compromised. The industry trend moves toward short-lived certificates: SPIFFE recommends certificate lifetimes of 1 hour for workload identities, with automated rotation handled by the SPIRE agent.

Performance Considerations#

mTLS adds computational overhead from asymmetric cryptography during the handshake and certificate validation. For services handling thousands of new connections per second, this overhead can be measurable. Connection pooling and keep-alive headers amortize the handshake cost across many requests. TLS session resumption (via session tickets or pre-shared keys) eliminates the full handshake on reconnection, reducing the per-request cost to near zero for long-lived connections.

Debugging and Observability#

When mTLS connections fail, diagnosing the cause is harder than debugging standard TLS failures. Common failure modes include expired certificates, CA trust store mismatches, certificate revocation, and clock skew between endpoints. Structured logging of TLS handshake events, certificate serial numbers, and validation errors is essential for operational mTLS deployments.

How to Configure mTLS in Apache APISIX#

Apache APISIX supports mTLS at both the edge (between clients and APISIX) and internally (between APISIX and upstream services). The configuration uses APISIX's SSL resource and route-level settings.

Client-to-Gateway mTLS#

To require client certificates for incoming connections, configure an SSL resource with the CA certificate that should be trusted for client authentication. APISIX will reject any client that does not present a certificate signed by the specified CA. See the mTLS documentation for the full SSL resource schema and configuration examples.

Gateway-to-Upstream mTLS#

When upstream services require mTLS, configure the upstream resource with the client certificate and key that APISIX should present. This ensures APISIX authenticates itself to backend services, maintaining the zero-trust chain from edge to origin. The upstream TLS configuration section covers the required fields.

Per-Route mTLS Policies#

APISIX allows different mTLS policies per route, enabling gradual rollout. Internal admin APIs can require mTLS immediately while public-facing routes continue using standard TLS with application-layer authentication. This granularity is configured through the route's ssl and upstream settings.

The certificate management guide covers integration with cert-manager and external CA providers for automated certificate rotation within APISIX deployments.

mTLS Best Practices#

  1. Automate certificate lifecycle. Never rely on manual certificate issuance or renewal for production mTLS. Use cert-manager, Vault, or SPIRE.

  2. Use short-lived certificates. Target lifetimes of 24 hours or less for workload certificates. Rotate automatically before expiration.

  3. Separate CAs by trust domain. Do not use the same CA for internal service certificates and external partner certificates. Maintain distinct trust hierarchies.

  4. Monitor certificate expiration. Set alerting thresholds at 7 days, 3 days, and 1 day before expiration. Track certificate inventory centrally.

  5. Enable OCSP stapling. Reduce certificate validation latency by stapling OCSP responses at the server rather than requiring clients to contact the CA's OCSP responder.
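The expiration-monitoring practice can be scripted against the notAfter timestamp found in peer-certificate data. A sketch using the alerting thresholds listed above:

```python
import ssl
import time

def days_until_expiry(not_after: str, now: float = None) -> float:
    """Days remaining before a certificate's notAfter timestamp.

    `not_after` uses the format Python's ssl module reports in
    peer-cert dicts, e.g. "Jun  1 12:00:00 2035 GMT".
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    now = time.time() if now is None else now
    return (expiry - now) / 86400  # seconds per day

def alert_level(days: float) -> str:
    # Thresholds from practice 4 above: 7, 3, and 1 days.
    if days <= 1:
        return "critical"
    if days <= 3:
        return "high"
    if days <= 7:
        return "warning"
    return "ok"
```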

FAQ#

What happens if a client certificate expires during an active mTLS connection?#

Existing connections continue functioning until they are closed because TLS authentication occurs during the handshake, not continuously. However, any new connection attempt with the expired certificate will fail. This is why short-lived certificates combined with connection draining during rotation are important: they ensure that stale credentials are phased out promptly without disrupting in-flight requests.

Is mTLS the same as two-way SSL?#

Yes. "Two-way SSL," "mutual SSL," and "mutual TLS" all describe the same mechanism: both endpoints present and verify certificates. The terminology "mutual TLS" is preferred in modern usage because TLS superseded SSL over two decades ago, and all current implementations use TLS 1.2 or TLS 1.3 rather than any SSL version.

Does mTLS replace the need for API keys or OAuth tokens?#

No. mTLS authenticates the transport-layer identity (which machine or service is connecting), while API keys and OAuth tokens authenticate the application-layer identity (which user, application, or tenant is making the request). In a defense-in-depth strategy, mTLS and application-layer authentication serve complementary roles. mTLS ensures only authorized services can reach the endpoint; tokens and keys determine what those services are allowed to do.

How does mTLS perform at scale in Kubernetes?#

In Kubernetes environments with service meshes, mTLS scales well because certificate issuance and rotation are fully automated by the mesh control plane. Istio, for example, issues and rotates certificates for every pod automatically using its built-in CA. The performance impact is primarily on new connections (the handshake), which is amortized by connection pooling. Organizations running mTLS across 10,000+ pods report negligible steady-state performance impact, with the main operational cost being control plane resource consumption for certificate management.