HTTP Fundamentals

HTTP Request Lifecycle: From URL to Response

Trace every step of an HTTP request — DNS resolution, TCP handshake, TLS negotiation, request/response exchange, and connection reuse.

The Full Journey of an HTTP Request

When you type https://api.example.com/users/42 and press Enter, a cascade of network operations unfolds before a single byte of response data arrives. Understanding each step is essential for diagnosing latency, configuring timeouts, and optimizing your application's time-to-first-byte.

Step 1: URL Parsing and DNS Lookup

URL Parsing

The browser (or HTTP client) first parses the URL into components:

https://api.example.com:443/users/42?include=address#section
│       │               │  │         │              │
scheme  host            port path    query          fragment

The fragment (#section) is never sent to the server — it is handled entirely client-side. The scheme determines the default port: http → 80, https → 443.
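These components can be pulled apart with Python's standard urllib.parse module; a quick sketch using the example URL above:

```python
from urllib.parse import urlsplit

url = "https://api.example.com:443/users/42?include=address#section"
parts = urlsplit(url)

print(parts.scheme)    # "https"
print(parts.hostname)  # "api.example.com"
print(parts.port)      # 443
print(parts.path)      # "/users/42"
print(parts.query)     # "include=address"
print(parts.fragment)  # "section" (handled client-side, never sent)
```

Note that `urlsplit` only parses; it does not apply the scheme's default port, so `parts.port` is `None` when the URL omits `:443`.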

DNS Resolution Chain

Before connecting, the client must resolve api.example.com to an IP address. The lookup follows a chain of caches:

  • Browser DNS cache — stores recent lookups (respects TTL)
  • OS resolver — checks /etc/hosts entries, then the system cache
  • Recursive resolver — typically your ISP's resolver or a configured DNS server (e.g., 8.8.8.8)
  • Authoritative chain — root nameservers → .com TLD nameservers → example.com's authoritative nameserver

# Trace the full DNS resolution path:
dig +trace api.example.com

# Check the cached TTL (second column of the answer section):
dig +noall +answer api.example.com

DNS lookups typically take 20-120ms on a cold cache. For latency-sensitive applications, use DNS prefetching:

<link rel="dns-prefetch" href="//api.example.com">
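From application code, the whole chain above is hidden behind the OS resolver. In Python, socket.getaddrinfo is the portable entry point; a minimal sketch (the localhost lookup is just a demonstration):

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Resolve a hostname to its unique IP addresses via the OS resolver."""
    infos = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    # each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP
    return sorted({info[4][0] for info in infos})

# The OS resolver consults /etc/hosts and its cache before asking the network
print(resolve("localhost"))
```

Because `getaddrinfo` blocks until resolution completes, a cold-cache lookup adds its full latency to your request path, which is exactly what DNS prefetching hides.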

Step 2: TCP Connection Setup

The Three-Way Handshake

TCP requires a handshake before any application data flows:

Client                       Server
  │──── SYN ────────────────▶│   (client: 'want to connect')
  │◀─── SYN-ACK ─────────────│   (server: 'ok, ready')
  │──── ACK ────────────────▶│   (client: 'confirmed')
  │──── [HTTP Request] ─────▶│   (now we can send data)

This adds one round-trip time (RTT) before any data is sent. On a 50ms RTT connection (e.g., cross-continent), this is 50ms of pure setup overhead.
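The handshake cost is easy to observe: socket.connect returns only after the SYN/SYN-ACK/ACK exchange completes, so timing it approximates one RTT. A sketch (the example.com target in the comment is illustrative):

```python
import socket
import time

def measure_connect(host: str, port: int, timeout: float = 5.0) -> float:
    """Return the seconds spent completing the TCP three-way handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        elapsed = time.perf_counter() - start  # connect() returned: handshake done
    return elapsed

# Roughly one RTT to the server (plus local stack overhead), e.g.:
# print(f"{measure_connect('example.com', 443) * 1000:.1f} ms")
```

Comparing this number against ping output for the same host is a quick way to confirm that connection setup, not server processing, dominates your latency.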

TCP Fast Open

TCP Fast Open (TFO) allows sending data in the SYN packet on repeat connections. The client receives a TFO cookie on the first connection and reuses it later, eliminating the handshake RTT. Enable on Linux:

# Enable TCP Fast Open (server side)
echo 3 | sudo tee /proc/sys/net/ipv4/tcp_fastopen

Connection Pooling

HTTP clients maintain a pool of established TCP connections to avoid re-handshaking on every request. Configure pool size based on concurrency needs:

import httpx

# httpx connection pool: 100 connections, 20 per host
client = httpx.Client(
    limits=httpx.Limits(max_connections=100, max_keepalive_connections=20)
)

Step 3: TLS Negotiation

For HTTPS, TLS negotiation happens after the TCP handshake — adding another 1-2 RTTs on the first connection.

TLS 1.3 Handshake

TLS 1.3 (RFC 8446, 2018) reduced the handshake from 2 RTTs (TLS 1.2) to 1 RTT:

Client                               Server
  │──── ClientHello (+ key share) ──▶│   (RTT 1 start)
  │◀─── ServerHello + Certificate ───│
  │◀─── Finished ────────────────────│   (RTT 1 end)
  │──── Finished + [HTTP Request] ──▶│   (application data begins)

TLS 1.3 also supports 0-RTT resumption for repeat connections, allowing the client to send application data in the very first packet — at the cost of reduced replay attack protection.

Certificate Validation

The server presents a certificate chain. The client verifies:

  • The certificate is signed by a trusted CA (from OS trust store)
  • The hostname matches the certificate's SAN (Subject Alternative Name)
  • The certificate has not expired
  • The certificate has not been revoked (OCSP stapling)
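Python's ssl module enforces these checks automatically when you use a default context; the sketch below highlights the relevant settings, with a live inspection left commented out (example.com is illustrative):

```python
import socket
import ssl

def default_tls_context() -> ssl.SSLContext:
    """A client context that applies the validation steps listed above."""
    ctx = ssl.create_default_context()
    # check_hostname=True: the hostname is matched against the certificate's SAN
    # verify_mode=CERT_REQUIRED: the chain must end at a trusted CA and be unexpired
    return ctx

# Live check (network required): inspect the fields that were validated
# ctx = default_tls_context()
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         cert = tls.getpeercert()
#         print(cert["subjectAltName"])  # SANs the hostname was matched against
#         print(cert["notAfter"])        # expiry date that was checked
```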

ALPN Protocol Negotiation

Application-Layer Protocol Negotiation (ALPN) lets the client and server agree on which HTTP version to use during the TLS handshake — avoiding an extra round-trip:

ClientHello: ALPN: ['h2', 'http/1.1']
ServerHello: ALPN: 'h2'   # Server chose HTTP/2
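In Python, the client advertises its protocol list with set_alpn_protocols and reads the server's choice after the handshake. A sketch, with the network call commented out (example.com is illustrative):

```python
import socket
import ssl

def alpn_context(protocols: list[str]) -> ssl.SSLContext:
    """Client context that offers the given protocols in the ClientHello."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(protocols)  # e.g. ["h2", "http/1.1"]
    return ctx

# Live negotiation (network required):
# ctx = alpn_context(["h2", "http/1.1"])
# with socket.create_connection(("example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#         print(tls.selected_alpn_protocol())  # "h2" if the server speaks HTTP/2
```

`selected_alpn_protocol()` returns None when the server ignores ALPN, in which case clients fall back to HTTP/1.1.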

Step 4: HTTP Request/Response Exchange

Request Structure

An HTTP/1.1 request is plain text:

GET /users/42 HTTP/1.1\r\n
Host: api.example.com\r\n
Accept: application/json\r\n
Authorization: Bearer eyJ...\r\n
\r\n

HTTP/2 frames the same request in binary, with header compression (HPACK) and a stream ID enabling multiplexing.
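The plain-text framing of HTTP/1.1 means a request can be assembled by hand. A minimal sketch that produces exactly the bytes shown above:

```python
def build_request(host, path, headers=None):
    """Serialize a GET request exactly as it appears on the wire (HTTP/1.1)."""
    lines = [f"GET {path} HTTP/1.1", f"Host: {host}"]  # Host is mandatory in HTTP/1.1
    for name, value in (headers or {}).items():
        lines.append(f"{name}: {value}")
    # every line ends with CRLF; an empty line terminates the header block
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

raw = build_request("api.example.com", "/users/42", {"Accept": "application/json"})
```

Sending `raw` over a connected (and, for HTTPS, TLS-wrapped) socket is all an HTTP/1.1 client fundamentally does.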

Server Processing Time

Time to First Byte (TTFB) measures the time from sending the request to receiving the first byte of the response; as seen by the client it includes one network round trip plus server processing time. A high TTFB indicates:

  • Slow database queries
  • Expensive computation
  • Network latency to upstream services
  • Resource contention (GIL, thread pool exhaustion)

# Measure TTFB (and every earlier phase) with curl:
curl -o /dev/null -s \
     -w "DNS: %{time_namelookup}s\nConnect: %{time_connect}s\nTLS: %{time_appconnect}s\nTTFB: %{time_starttransfer}s\n" \
     https://api.example.com/users/42

Response Streaming

For large responses, the server can stream data using chunked transfer encoding (HTTP/1.1) or HTTP/2 data frames. The client can begin processing before the full response arrives — critical for large JSON arrays or file downloads.
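The same principle applies on the client: read the body in chunks rather than buffering it whole. A standard-library sketch (the commented URL is illustrative):

```python
from urllib.request import urlopen

def download(url, dest, chunk_size=64 * 1024):
    """Stream a response to disk; memory use stays bounded by chunk_size."""
    total = 0
    with urlopen(url) as response, open(dest, "wb") as out:
        while chunk := response.read(chunk_size):  # process data as it arrives
            out.write(chunk)
            total += len(chunk)
    return total

# download("https://api.example.com/export.json", "/tmp/export.json")
```

httpx offers the same pattern via `httpx.stream(...)` with `iter_bytes()`, which also decodes chunked transfer encoding transparently.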

Step 5: Connection Lifecycle and Reuse

Keep-Alive (HTTP/1.1)

HTTP/1.1 connections are persistent by default (Connection: keep-alive). After a response, the connection stays open for subsequent requests. This avoids re-running the TCP + TLS handshakes:

# Without keep-alive: a fresh DNS + TCP + TLS handshake for each of 3 requests
(DNS 50ms + TCP 50ms + TLS 50ms) × 3 = 450ms overhead

# With keep-alive: handshake once, then 3 sequential requests reuse the connection
DNS 50ms + TCP 50ms + TLS 50ms = 150ms overhead
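Connection reuse is visible even in the standard library: http.client.HTTPConnection keeps the socket open across requests as long as each response body is drained first. A sketch (host and paths are illustrative):

```python
import http.client

def fetch_statuses(host: str, port: int, paths: list[str]) -> list[int]:
    """Issue several GETs over one persistent HTTP/1.1 connection."""
    conn = http.client.HTTPConnection(host, port)
    statuses = []
    for path in paths:
        conn.request("GET", path)   # reuses the same TCP connection each time
        response = conn.getresponse()
        response.read()             # drain the body so the connection is reusable
        statuses.append(response.status)
    conn.close()
    return statuses
```

Forgetting to drain a response is a classic bug: the client must either read the full body or close the connection before issuing the next request.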

HTTP/2 Multiplexing

HTTP/2 goes further: multiple requests share a single TCP connection simultaneously. Each request gets a stream ID, and frames from different streams are interleaved:

Stream 1: GET /users/42      ──────────────── Response 1
Stream 3: GET /posts/recent  ──────── Response 3
Stream 5: GET /notifications ────────────────────── Response 5
          (all on one TCP connection; no HTTP-level head-of-line blocking between
           streams, though a lost TCP packet can still stall all of them)

Connection Close

Connections close when:

  • The server sends Connection: close
  • The keep-alive timeout elapses with no traffic (typically 60-120 seconds)
  • A network error occurs
  • The server sends a GOAWAY frame (HTTP/2) signaling no new streams

# Nginx: adjust keepalive for upstream connections
upstream backend {
    server 127.0.0.1:8000;
    keepalive 32;  # maintain up to 32 idle connections to the backend
}
keepalive_timeout 65s;  # http context: idle timeout for client connections

Putting It All Together: Timing Breakdown

For a typical HTTPS request from a user in Europe to a US server (100ms RTT):

DNS lookup:        ~80ms  (cold cache)
TCP handshake:    ~100ms  (1 RTT)
TLS 1.3:          ~100ms  (1 RTT, first connection)
Request + TTFB:   ~120ms  (100ms RTT + 20ms server)
Response body:     ~50ms  (depending on size)
─────────────────────────
Total:            ~450ms  (cold connection)
Total (warm):     ~170ms  (connection reused, DNS cached)
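The same arithmetic as a small sketch, useful as a template for building latency budgets from your own measured numbers:

```python
RTT_MS = 100     # Europe-to-US round trip
SERVER_MS = 20   # server processing before the first byte
BODY_MS = 50     # time to stream the response body

cold = {
    "dns": 80,                        # cold resolver cache
    "tcp": RTT_MS,                    # three-way handshake: 1 RTT
    "tls": RTT_MS,                    # TLS 1.3: 1 RTT on first connection
    "request+ttfb": RTT_MS + SERVER_MS,
    "body": BODY_MS,
}
warm = {                              # DNS cached, connection reused
    "request+ttfb": RTT_MS + SERVER_MS,
    "body": BODY_MS,
}

print(sum(cold.values()))   # 450 ms
print(sum(warm.values()))   # 170 ms
```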

This breakdown guides optimization priorities: CDN edge nodes cut RTT, HTTP/2 eliminates connection overhead, and connection pooling avoids repeated TLS handshakes.
