Static Site Performance is Not a Tradeoff

Performance is usually framed as a tradeoff. More features cost more compute. More dynamic content costs more latency. More personalisation costs more infrastructure. Static sites invert this framing. The most performant architecture is also the simplest one.

A static HTML file served from a CDN edge node requires no server-side computation. There is no database query, no template rendering, no session lookup, no API call. The file exists on disk, or more accurately in memory at the edge, and is returned to the client. The latency is dominated by network physics, not by application logic.
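To make the point concrete, here is a minimal sketch of a static origin using only Python's standard library. The directory name and the cache lifetime are illustrative choices, not prescriptions: the handler does nothing per request except read a file from disk and attach a Cache-Control header that tells a CDN it may keep the file at the edge.

```python
import http.server
import os
import tempfile
import threading
import urllib.request

# A one-file "site" in a temporary directory (illustrative content).
site = tempfile.mkdtemp()
with open(os.path.join(site, "index.html"), "w") as f:
    f.write("<h1>hello</h1>")

class StaticHandler(http.server.SimpleHTTPRequestHandler):
    """Serve files as-is; the only per-request work is reading from disk."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, directory=site, **kwargs)

    def end_headers(self):
        # A long max-age lets a CDN cache the file at the edge indefinitely.
        self.send_header("Cache-Control", "public, max-age=31536000, immutable")
        super().end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), StaticHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/index.html"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    cache = resp.headers["Cache-Control"]
print(status, cache)
server.shutdown()
```

Everything that makes the response fast is visible here: there is no handler logic to profile, only a file read and a header.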

Time to first byte for a well-configured static site is typically under 50 milliseconds from any major region. This is not the result of optimisation work. It is the default outcome of the architecture. There is nothing to optimise because there is no computation to eliminate.
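Time to first byte is easy to measure yourself. The sketch below times the gap between sending a request and receiving the first byte of the body; it runs against a local throwaway server so the example is self-contained, but pointing the connection at a real site's hostname works the same way (a local loopback measurement will of course be far below the 50 ms figure, since no real network is involved).

```python
import http.client
import http.server
import threading
import time

# Throwaway local origin so the measurement is self-contained.
class Hello(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"<h1>hello</h1>")

    def log_message(self, *args):
        pass  # keep the demo quiet

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Hello)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
start = time.perf_counter()
conn.request("GET", "/")
resp = conn.getresponse()   # returns once the status line and headers arrive
first_byte = resp.read(1)   # first byte of the body
ttfb_ms = (time.perf_counter() - start) * 1000
print(f"TTFB: {ttfb_ms:.2f} ms")

conn.close()
server.shutdown()
```

Note that this measures from request to first body byte and includes connection reuse effects; tools such as curl's timing variables break the figure down further (DNS, TLS, transfer start), but the shape of the measurement is the same.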

The comparison with a dynamic CMS is not subtle. A WordPress page load involves PHP execution, at least one database query (often several), template rendering, and plugin hook execution. An optimised WordPress installation with aggressive caching can approach static performance for cached pages. But caching is a workaround for a performance problem that the static architecture does not have.

The practical implications extend beyond page load time. Static sites handle traffic spikes without configuration. There is no application server to scale, no database connection pool to exhaust, no memory limit to hit. The CDN absorbs the load. A static site that handles ten requests per day handles ten thousand without modification.

This is not an argument against dynamic applications where dynamic behaviour is required. It is an argument for recognising that most publishing does not require dynamic behaviour, and that choosing a dynamic architecture for a static workload is a decision with ongoing costs — in complexity, in maintenance, and in performance — that the static architecture simply does not incur.