Production Infrastructure

TLS Termination: Where to Terminate HTTPS and Why

TLS termination at the load balancer vs reverse proxy vs application — performance implications, certificate management, and end-to-end encryption with mTLS.

What Is TLS Termination?

TLS termination is the process of decrypting an HTTPS connection at a specific point in your infrastructure and forwarding the decrypted traffic onwards. Every HTTPS connection must be terminated somewhere — the question is where.

Client ──HTTPS──→ [Termination Point] ──HTTP──→ Backend
                   (decrypts TLS here)

The termination point handles the TLS handshake, certificate validation, and cipher negotiation. Everything beyond it travels as plain HTTP within your private network.
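
In Python terms, the termination point's job can be sketched with the standard-library ssl module. This is a sketch, not a full server: the cert/key paths are hypothetical placeholders, and a real terminator would load its certificate and wrap each accepted socket.

```python
import ssl

# Sketch of a termination point's server-side TLS setup (hypothetical
# cert/key paths). The wrapped socket performs the handshake; reads on
# it yield already-decrypted bytes, which are forwarded as plain HTTP.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse legacy protocols
# ctx.load_cert_chain('/etc/ssl/server.crt', '/etc/ssl/server.key')
# tls_sock = ctx.wrap_socket(client_sock, server_side=True)  # handshake here
```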

Termination at the Load Balancer

Managed load balancers such as AWS ALB, Cloudflare, and GCP's Load Balancer terminate TLS before traffic reaches your servers.

Users ──HTTPS──→ AWS ALB ──HTTP──→ EC2 instances
                 (TLS ends here)    (plain HTTP)

Advantages

  • Managed certificates: AWS Certificate Manager provisions and renews automatically
  • No certificate management on instances: no certbot, no cron jobs
  • SNI routing: single ALB handles multiple domains with one IP
  • HTTP/2 multiplexing: ALB speaks HTTP/2 to clients and, by default, HTTP/1.1 to backends
  • Hardware acceleration: load balancer hardware does TLS crypto at scale

AWS ALB Certificate Configuration

# Request certificate in ACM
aws acm request-certificate \
    --domain-name example.com \
    --subject-alternative-names '*.example.com' \
    --validation-method DNS

# Attach to ALB listener
aws elbv2 create-listener \
    --load-balancer-arn $ALB_ARN \
    --protocol HTTPS \
    --port 443 \
    --certificates CertificateArn=$CERT_ARN \
    --ssl-policy ELBSecurityPolicy-TLS13-1-2-2021-06 \
    --default-actions Type=forward,TargetGroupArn=$TG_ARN

Use ELBSecurityPolicy-TLS13-1-2-2021-06 for modern TLS — it requires TLS 1.2+ and prefers TLS 1.3, eliminating weak cipher suites.
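
You can mirror the policy's TLS 1.2 floor on a Python client context and inspect which cipher suites survive. This is a sketch of the same idea, not the ALB's exact suite list:

```python
import ssl

# Apply the same TLS 1.2 floor client-side and list the remaining
# cipher suites; legacy suites (RC4, 3DES, export ciphers) are already
# excluded from the default secure configuration.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
suites = [c['name'] for c in ctx.get_ciphers()]
print(suites[:3])   # e.g. TLS 1.3 suites such as TLS_AES_256_GCM_SHA384
```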

Forwarding the Original Protocol

Backends need to know the original connection was HTTPS to generate correct redirect URLs and set secure cookies:

# Nginx behind ALB: restore the real client IP from X-Forwarded-For
server {
    listen 80;
    # ALB appends the client IP to this header
    real_ip_header X-Forwarded-For;
    set_real_ip_from 10.0.0.0/8;  # VPC CIDR that contains the ALB
}
# Django — trust ALB's X-Forwarded-Proto header
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
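
Frameworks without a built-in setting can apply the same logic with a small WSGI middleware. This is a hypothetical sketch; only enable it when clients can reach the app exclusively through the proxy, since a directly reachable app would let clients spoof the header.

```python
# Hypothetical WSGI middleware: restore the original scheme from
# X-Forwarded-Proto so redirects and secure cookies use https://.
def trust_forwarded_proto(app):
    def middleware(environ, start_response):
        proto = environ.get('HTTP_X_FORWARDED_PROTO')
        if proto in ('http', 'https'):
            environ['wsgi.url_scheme'] = proto
        return app(environ, start_response)
    return middleware
```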

Termination at the Reverse Proxy

Nginx or Caddy running on your own servers handles TLS, forwarding plain HTTP to local application processes:

Users ──HTTPS──→ Nginx:443 ──HTTP──→ Gunicorn:8000
                 (TLS ends)   (localhost)

Nginx with Let's Encrypt

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Modern TLS configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers off;  # Let client choose TLS 1.3

    # Session resumption — reduces handshake cost for returning clients
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;  # Prefer session cache (PFS-safe)

    # OCSP stapling — avoid client round-trip to CA
    ssl_stapling        on;
    ssl_stapling_verify on;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;
    resolver 1.1.1.1 8.8.8.8;

    location / {
        proxy_pass http://127.0.0.1:8000;
    }
}

Certbot Renewal

# Install certbot and obtain certificate
sudo certbot --nginx -d example.com -d www.example.com

# Test auto-renewal
sudo certbot renew --dry-run

# Certbot installs a systemd timer for renewal:
sudo systemctl status certbot.timer
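
Independently of certbot's timer, it is worth monitoring certificate expiry from the outside, so a broken renewal pipeline is caught before clients see errors. A small Python sketch (the hostname is a placeholder and the check needs network access):

```python
import socket
import ssl
import time

# Days until the served certificate expires, as seen by a real client.
def days_until_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            not_after = tls.getpeercert()['notAfter']
    return int((ssl.cert_time_to_seconds(not_after) - time.time()) // 86400)

# days_until_expiry('example.com')  # alert if this drops below ~14 days
```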

End-to-End Encryption

For compliance or zero-trust architectures, you may need TLS all the way from load balancer to backend (re-encryption), or mutual TLS (mTLS) between services.

Re-encryption to Origin

# Nginx re-encrypts traffic to the backend over HTTPS
upstream backend {
    server backend.internal:443;
}

location / {
    proxy_pass https://backend;
    proxy_ssl_certificate     /etc/ssl/client.crt;
    proxy_ssl_certificate_key /etc/ssl/client.key;
    proxy_ssl_verify          on;
    proxy_ssl_name            backend.internal;  # hostname to verify against
    proxy_ssl_trusted_certificate /etc/ssl/ca-bundle.crt;
}

Mutual TLS (mTLS) Between Services

mTLS requires both client and server to present certificates — preventing any unauthorized service from calling your internal APIs:

# Python requests with client certificate
import requests

response = requests.get(
    'https://internal-service.example.com/api/data',
    cert=('/etc/ssl/client.crt', '/etc/ssl/client.key'),
    verify='/etc/ssl/ca-bundle.crt'
)
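
The server side of the same handshake can be sketched with Python's ssl module (paths are hypothetical). Setting CERT_REQUIRED is what turns one-way TLS into mutual TLS:

```python
import ssl

# Server-side mTLS sketch: the handshake fails unless the client
# presents a certificate signed by a CA we trust (hypothetical paths).
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED   # demand a client certificate
# ctx.load_cert_chain('/etc/ssl/server.crt', '/etc/ssl/server.key')
# ctx.load_verify_locations('/etc/ssl/ca-bundle.crt')  # CA for client certs
# tls_sock = ctx.wrap_socket(conn, server_side=True)
```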

# Kubernetes: Istio auto-injects mTLS via sidecar proxies
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production
spec:
  mtls:
    mode: STRICT  # All service-to-service traffic must use mTLS

Performance Tradeoffs

| Factor | Load Balancer Termination | Reverse Proxy Termination |
|---|---|---|
| CPU overhead | Offloaded to LB hardware | On your server (AES-NI helps) |
| Certificate management | Fully managed | Manual (certbot) |
| TLS session caching | Handled by LB | Configure `ssl_session_cache` |
| HTTP/2 to backend | Usually HTTP/1.1 | HTTP/1.1 only (nginx has no `proxy_http2`; gRPC upstreams via `grpc_pass`) |
| Cost | LB hourly charge | Server CPU (negligible with AES-NI) |

Modern CPUs with AES-NI hardware instructions make TLS at the reverse proxy cheap: typically under 1% CPU overhead for ordinary web workloads. The real decision drivers are certificate management convenience and compliance requirements.
