Why Understanding Time Standards and Network Sync Matters for Developers
This article traces the evolution of time concepts and timekeeping tools, defines GMT, UTC, and atomic time, explains how the Network Time Protocol (NTP) synchronizes computer clocks, and ends with practical guidance on choosing between local and server time in time-critical applications.
In daily development we constantly rely on time-related APIs and logic. Before diving in, consider a few questions:
What is GMT?
What is UTC (Coordinated Universal Time) and why is it called "coordinated"?
When electronic devices synchronize time over the network, is network latency considered?
Do flash‑sale countdowns use the user's local time or server time?
Time Concepts and the Development of Timing Tools
Historically humans defined time units based on astronomical phenomena (Earth's rotation and revolution) and created timing tools accordingly.
Because of Earth's rotation we experience day and night, defining a "day"; its revolution defines a "year". To achieve finer precision, a day was divided into 24 hours, each hour into 60 minutes, and each minute into 60 seconds, giving the basic unit "second" (1 s = 1/86400 day). Days and years are astronomically measured, while hours, minutes, and seconds are human‑defined.
Reference: https://zhuanlan.zhihu.com/p/400539714
Sundial (日晷)
Sundials use the sun's shadow to indicate time. The most common type is the equatorial sundial, which divides the dial into 24 hours and aligns the gnomon with Earth's axis.
Images illustrate typical equatorial sundials:
Equatorial sundial details: the dial is divided into 24 hours, the gnomon points toward the celestial pole, and the dial plane is inclined by 90° − latitude. It works only when the sun shines and cannot be used near the equinoxes when the sun's rays are parallel to the dial.
Universal Time (UT) and Greenwich Mean Time (GMT)
Because the length of a solar day varies, early astronomers averaged the lengths of all days in a year to obtain a more stable "day" and thus a more consistent second.
With this unit, clocks were built, evolving from pendulum clocks to modern quartz clocks, which achieve a daily error of only one thousandth of a second. In 1927 the first official time standard, Universal Time (UT), was established. GMT (Greenwich Mean Time) is essentially UT measured at the Royal Observatory in Greenwich.
Since Earth's rotation slows irregularly, GMT is no longer used as the primary standard; today the standard is Coordinated Universal Time (UTC) based on atomic clocks.
In programming, GMT appears in time‑zone strings such as "Mon Jun 12 2023 20:38:15 GMT+0800 (China Standard Time)". Here GMT+0800 simply denotes UTC+08:00.
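As a small illustration, the offset suffix in that string can be derived from JavaScript's `Date.prototype.getTimezoneOffset()`, which returns minutes *west* of UTC (so UTC+8 yields −480). The helper name `formatGmtOffset` below is hypothetical:

```javascript
// Turn a getTimezoneOffset() value (minutes west of UTC) into the
// "GMT+0800"-style suffix seen in Date#toString().
function formatGmtOffset(minutesWestOfUtc) {
  const minutesEast = -minutesWestOfUtc; // flip sign: east-positive
  const sign = minutesEast >= 0 ? "+" : "-";
  const abs = Math.abs(minutesEast);
  const hh = String(Math.floor(abs / 60)).padStart(2, "0");
  const mm = String(abs % 60).padStart(2, "0");
  return `GMT${sign}${hh}${mm}`;
}

console.log(formatGmtOffset(-480)); // → "GMT+0800" (China Standard Time)
console.log(formatGmtOffset(300));  // → "GMT-0500"
```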
Pendulum Clock
Invented by Christiaan Huygens in 1656, the pendulum clock uses a swinging pendulum to regulate the gear train, typically powered by a wound spring.
For small swing angles, the period of a simple pendulum is independent of amplitude and given by T = 2π√(L/g), where L is the pendulum length and g the gravitational acceleration.
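The small-angle period T = 2π√(L/g) is easy to check numerically. As a sketch, the classic "seconds pendulum" (full period of 2 s, one swing per second) needs a length of about 0.994 m:

```javascript
// Small-angle period of a simple pendulum: T = 2 * PI * sqrt(L / g).
const g = 9.80665; // standard gravity, m/s^2

function pendulumPeriod(lengthMeters) {
  return 2 * Math.PI * Math.sqrt(lengthMeters / g);
}

// A "seconds pendulum" is roughly 0.994 m long:
console.log(pendulumPeriod(0.994).toFixed(3)); // ≈ 2.000 s
```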
Real‑world pendulums experience friction and air resistance; the clock overcomes these via the escapement mechanism, which transfers energy from the weight or spring to the pendulum in controlled bursts.
Images show the basic structure of a weight‑driven pendulum clock and its escapement.
Reference: https://zhuanlan.zhihu.com/p/112837661
Quartz Clock
Invented in 1927, the quartz clock became commercially available in 1969, when Seiko released the first quartz wristwatch and set off the "quartz revolution".
A quartz oscillator vibrates at 32 768 Hz (2¹⁵). After 15 stages of frequency division, a stable 1 Hz signal is produced for timekeeping.
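The division chain is simple enough to sketch in a few lines (purely illustrative): fifteen binary divider stages halve 32 768 Hz down to the 1 Hz pulse that drives the seconds count.

```javascript
// 32 768 Hz = 2 ** 15, so 15 halving stages yield exactly 1 Hz.
let frequencyHz = 32768;
for (let stage = 0; stage < 15; stage++) {
  frequencyHz /= 2; // each flip-flop stage halves the frequency
}
console.log(frequencyHz); // → 1
```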
Further physical details are omitted.
References: https://zhuanlan.zhihu.com/p/117299794, http://m.chinaaet.com/article/3000120376
Atomic Clock
Because Earth's rotation is irregular and slowing, a more stable reference was needed, leading to atomic clocks.
Cesium-133 atoms exhibit an extremely stable transition frequency: one second is defined as the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the atom's ground state. The 13th General Conference on Weights and Measures adopted this definition in 1967.
Reference: https://www.bilibili.com/read/cv7415350
International Atomic Time (TAI)
TAI counts seconds based on atomic clocks, providing a uniformly precise time scale starting from 1958‑01‑01 00:00:00.
Coordinated Universal Time (UTC)
UTC combines the stability of atomic time with traditional solar time by inserting leap seconds so that UTC never drifts more than 0.9 s from UT1. The first leap second was inserted on 1972-06-30; 27 positive leap seconds have been added to date, the most recent at the end of 2016-12-31.
Leap‑second data: https://www.hko.gov.hk/sc/gts/time/Historicalleapseconds.htm
In 1979 UTC officially replaced GMT as the standard time reference for international radio communications.
Unix timestamps ignore leap seconds, assuming exactly 86 400 s per day.
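This can be observed directly with JavaScript's `Date.UTC`, taking the leap second inserted at the end of 2016-12-31 as an example:

```javascript
// Unix time pretends every day has exactly 86 400 seconds, so the leap
// second at the end of 2016-12-31 is invisible: midnight-to-midnight
// across it is still exactly 86 400 s apart.
const SECONDS_PER_DAY = 24 * 60 * 60; // 86 400

const before = Date.UTC(2016, 11, 31) / 1000; // 2016-12-31T00:00:00Z
const after = Date.UTC(2017, 0, 1) / 1000;    // 2017-01-01T00:00:00Z

console.log(after - before); // → 86400, though that UTC day had 86 401 s
```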
In 2022 the General Conference on Weights and Measures voted to stop inserting leap seconds by 2035; until then, leap seconds will continue to be applied.
References: https://www.zgbk.com/ecph/words?SiteID=1&ID=126639&SubID=79803, https://zh.wikinews.org/zh-sg/国际计量局:2035年取消闰秒
Network Time Synchronization
The Network Time Protocol (NTP) synchronizes computer clocks with reference sources (e.g., GPS, atomic clocks) achieving sub‑millisecond accuracy on LANs and tens of milliseconds on WANs.
NTP Stratum Hierarchy
Stratum levels indicate distance from the reference clock:
Stratum 0: High-precision reference devices such as atomic clocks or GPS receivers.
Stratum 1: Primary servers directly attached to Stratum 0 devices.
Stratum 2: Servers synchronized to Stratum 1 servers.
Stratum 3: Servers synchronized to Stratum 2, and so on up to Stratum 15; Stratum 16 marks a device as unsynchronized.
Clock Synchronization Algorithm
A typical NTP client polls several servers. For each exchange it records four timestamps: t0 (client sends the request), t1 (server receives it), t2 (server sends the reply), and t3 (client receives the reply).
Assuming symmetric network paths, the round-trip delay δ and the clock offset θ are:
δ = (t3 − t0) − (t2 − t1)
θ = ((t1 − t0) + (t2 − t3)) / 2
The computed offset is exact only when the client-to-server and server-to-client delays are equal; any asymmetry contributes an error of up to δ/2.
In practice NTP discards samples with delay >128 ms unless no better samples are available for 900 s.
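The standard NTP sample arithmetic can be sketched as a small function (the name `ntpSample` is illustrative, not a specific library API). With client-side timestamps t0/t3 and server-side t1/t2, all in the same unit:

```javascript
// NTP clock-sample arithmetic. Timestamps (e.g. in ms):
//   t0: client sends request    (client clock)
//   t1: server receives request (server clock)
//   t2: server sends reply      (server clock)
//   t3: client receives reply   (client clock)
function ntpSample(t0, t1, t2, t3) {
  const offset = ((t1 - t0) + (t2 - t3)) / 2; // server clock minus client clock
  const delay = (t3 - t0) - (t2 - t1);        // round-trip network delay
  return { offset, delay };
}

// Example: client clock is 56 ms behind the server, 8 ms round trip.
const { offset, delay } = ntpSample(100, 160, 162, 110);
console.log(offset, delay); // → 56 8
```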
ntpd Daemon
Linux distinguishes hardware clock (battery‑backed) and system clock (kernel‑driven). At boot the system clock reads the hardware clock, then runs independently.
The ntpd daemon implements the full NTPv4 protocol. It slews (gradually adjusts) the system clock when the measured offset is at most 128 ms, steps it in a single jump when the offset is between 128 ms and 1000 s, and exits for offsets above 1000 s unless overridden.
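The three regimes can be sketched as a decision function. `clockDiscipline` is a hypothetical name, not ntpd's actual interface; the thresholds are ntpd's defaults:

```javascript
// Sketch of ntpd's default response to a measured offset (in seconds):
//   |offset| <= 0.128 s  -> slew (adjust gradually, clock stays monotonic)
//   |offset| <= 1000 s   -> step (jump the clock in one go)
//   otherwise            -> panic (exit and leave it to the operator)
function clockDiscipline(offsetSeconds) {
  const abs = Math.abs(offsetSeconds);
  if (abs <= 0.128) return "slew";
  if (abs <= 1000) return "step";
  return "panic";
}

console.log(clockDiscipline(0.05)); // → "slew"
console.log(clockDiscipline(30));   // → "step"
console.log(clockDiscipline(5000)); // → "panic"
```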
Other implementations include OpenNTPD and chrony.
References: https://en.wikipedia.org/wiki/Network_Time_Protocol, https://zhuanlan.zhihu.com/p/523334370, https://docs.ntpsec.org/latest/ntpd.html
Should Flash‑Sale Countdown Use Local or Server Time?
If Using Local Time
Problem: User devices may not have network time sync enabled, leading to inaccurate clocks.
Answer: This risk cannot be eliminated; critical scenarios should avoid local time.
If Using Server Time
Problem: Network latency and code execution time cause variations; users with faster connections or better devices gain an advantage.
Consideration: Could we subtract part of the network delay using an NTP‑like approach?
Answer: The approach is risky; it is safer to rely solely on server time.
Browser requests (HTTP over TCP, often queued alongside many parallel requests) behave far less predictably than NTP's UDP exchanges, so the symmetric-delay assumption is unreliable.
NTP's defensive strategies (polling multiple servers, discarding outliers, delay thresholds) are hard to reproduce in a browser.
If the client-to-server latency exceeds the server-to-client return latency, subtracting half the round-trip delay makes the client's estimate of server time run ahead of the real server clock, so users could see the sale start before the server actually allows it.
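One conservative pattern, sketched below with illustrative names (not a specific Kuaishou API), is to anchor the countdown to the server's timestamps and advance it only with local *elapsed* time. Network latency then makes the countdown late rather than early, so users can never see the sale open before the server allows it:

```javascript
// serverNowMs and saleStartMs come from one server response;
// receivedAtMs is a performance.now()-style monotonic reading taken
// when the response arrived. Date.now() is never trusted.
function makeCountdown(serverNowMs, saleStartMs, receivedAtMs) {
  // Remaining time as the server saw it when it sent the response.
  const remainingAtResponse = saleStartMs - serverNowMs;
  return function remaining(nowMs) {
    // Count down using elapsed local monotonic time only.
    return Math.max(0, remainingAtResponse - (nowMs - receivedAtMs));
  };
}

const remaining = makeCountdown(1000000, 1060000, 500); // 60 s to go
console.log(remaining(500));   // → 60000
console.log(remaining(30500)); // → 30000
```

Actually opening the purchase button should still be gated by the server: the countdown is display only, and the final authority on whether the sale has started is the server's response to the buy request.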
Kuaishou E-commerce Frontend Team