Server latency is the delay between when a server receives a request and when it begins sending a response, i.e., the time the server needs to process the request. It measures server-side processing efficiency independent of network transmission time.
Server latency is a component of overall Time to First Byte (TTFB) and depends on factors like server computational resources, database query efficiency, application code optimization, and current server load. High server latency can result from complex database queries, inefficient code, insufficient server resources, or processing bottlenecks. It is distinct from network latency, which measures transmission time across network infrastructure.
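One way to isolate this processing time from network transit is to time request handling on the server itself. The sketch below is a minimal illustration, assuming a Python Flask application; the hook functions and the use of the Server-Timing response header are illustrative choices, not a prescribed setup.

```python
import time
from flask import Flask, g

app = Flask(__name__)

@app.before_request
def start_timer():
    # Mark the moment the server begins handling the request.
    g.start = time.perf_counter()

@app.after_request
def record_server_latency(response):
    # Elapsed time covers only server-side processing,
    # not network transmission to or from the client.
    elapsed_ms = (time.perf_counter() - g.start) * 1000
    response.headers["Server-Timing"] = f"app;dur={elapsed_ms:.1f}"
    return response

@app.route("/")
def home():
    return "ok"
```

Exposing the measurement in a header like this keeps server latency visible in browser dev tools and monitoring dashboards without changing the response body.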
In A/B testing, server latency differences between variations can confound test results by introducing performance disparities unrelated to the design or content changes being tested. If a server-side implementation causes one variation to have higher latency, any observed conversion differences may reflect page speed impact rather than the actual changes being tested. Monitoring server latency ensures test integrity and accurate attribution of results.
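As a rough illustration of that kind of monitoring, the sketch below (plain Python, with hypothetical variant names and a 100 ms threshold chosen purely for illustration) compares mean server latency per variation against the control and flags disparities large enough to confound a test.

```python
import statistics

def latency_disparity(latencies_by_variant, threshold_ms=100):
    """Flag variants whose mean server latency differs from the control
    by more than threshold_ms, suggesting a possible speed confound."""
    control_mean = statistics.mean(latencies_by_variant["control"])
    flagged = {}
    for variant, samples in latencies_by_variant.items():
        if variant == "control":
            continue
        delta = statistics.mean(samples) - control_mean
        if abs(delta) > threshold_ms:
            flagged[variant] = round(delta, 1)
    return flagged

# Hypothetical samples (ms): variant_b's heavier processing shows up clearly.
samples = {
    "control":   [180, 190, 175, 210, 195],
    "variant_b": [640, 720, 690, 705, 655],
}
print(latency_disparity(samples))  # {'variant_b': 492.0}
```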
A test comparing two product recommendation algorithms shows the new algorithm performing 12% worse. Investigation reveals that the new algorithm's complex calculations add 600 ms of server latency per page load, causing the performance drop. The test actually measured server performance, not the quality of the recommendations, so the algorithm must be optimized before a valid test can be run.