A Unix timestamp is the number of seconds that have elapsed since 1 January 1970 00:00:00 UTC. This moment is known as the Unix epoch, and it forms the foundation of timekeeping in computing.
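For instance, here is a minimal way to read the current Unix timestamp using Python's standard library:

```python
import time

# Whole seconds elapsed since 1970-01-01 00:00:00 UTC.
now = int(time.time())
print(now)  # e.g. 1735689600 at the start of 2025
```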
Why Unix Time Matters
Unix time has one massive advantage over human-readable dates: it carries no timezone ambiguity. A Unix timestamp is always UTC. There is no DST shift, no regional interpretation, no AM/PM confusion.
When you see 1735689600, it means exactly one thing: 1 January 2025 00:00:00 UTC. Every system in the world will interpret this identically.
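As a quick sanity check, this short Python sketch converts that timestamp to a UTC datetime and back:

```python
from datetime import datetime, timezone

ts = 1735689600

# Interpret the timestamp explicitly as UTC.
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2025-01-01T00:00:00+00:00

# Round-trip back to a Unix timestamp.
assert int(dt.timestamp()) == ts
```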
Reading Unix Timestamps
Unix timestamps come in two common formats:
- Seconds — a 10-digit number like 1735689600
- Milliseconds — a 13-digit number like 1735689600000
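When a system accepts both formats, a common heuristic is to treat 13-digit values as milliseconds and everything else as seconds. The sketch below assumes that heuristic and a made-up helper name `to_seconds`; neither is a standard, so adjust the threshold to your own data:

```python
def to_seconds(ts: int) -> float:
    """Normalise a Unix timestamp to seconds, assuming 13+ digit values are milliseconds."""
    if abs(ts) >= 1_000_000_000_000:  # 13 digits or more: almost certainly milliseconds
        return ts / 1000.0
    return float(ts)

print(to_seconds(1735689600))     # 1735689600.0  (already seconds)
print(to_seconds(1735689600000))  # 1735689600.0  (converted from milliseconds)
```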
To convert mentally, note that:
- 1,700,000,000 ≈ November 2023
- 1,750,000,000 ≈ June 2025
- 1,800,000,000 ≈ January 2027
- 1,900,000,000 ≈ March 2030
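You can verify these reference points with a few lines of Python:

```python
from datetime import datetime, timezone

for ts in (1_700_000_000, 1_750_000_000, 1_800_000_000, 1_900_000_000):
    print(f"{ts:,}", datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%B %Y"))
# 1,700,000,000 November 2023
# 1,750,000,000 June 2025
# 1,800,000,000 January 2027
# 1,900,000,000 March 2030
```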
The Y2K38 Problem
On 19 January 2038, at 03:14:07 UTC, signed 32-bit Unix timestamps will overflow. The largest value a signed 32-bit integer can hold is 2,147,483,647, and that many seconds past the epoch lands exactly at that moment.
Systems still using 32-bit timestamps will see dates wrap around to December 1901 or otherwise behave unpredictably. Most modern systems use 64-bit timestamps, which push the overflow point roughly 292 billion years into the future.
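The wraparound is easy to simulate. This sketch emulates two's-complement 32-bit arithmetic in plain Python rather than using a real 32-bit `time_t`:

```python
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
MAX_32BIT = 2**31 - 1  # 2,147,483,647

print(EPOCH + timedelta(seconds=MAX_32BIT))  # 2038-01-19 03:14:07+00:00

# One second later, a signed 32-bit counter wraps to its minimum value.
# Emulate the two's-complement wrap arithmetically:
wrapped = (MAX_32BIT + 1 + 2**31) % 2**32 - 2**31
print(wrapped)                             # -2147483648
print(EPOCH + timedelta(seconds=wrapped))  # 1901-12-13 20:45:52+00:00
```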
Key Advice
When storing or transmitting times between systems, Unix timestamps eliminate timezone ambiguity entirely. Convert to local time only at the display layer.
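Here is a minimal sketch of that pattern, assuming Python 3.9+ for `zoneinfo` and using `Europe/Paris` and `America/New_York` purely as example zones:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

stored_ts = 1735689600  # what the database or API actually holds

def render_local(ts: int, zone: str) -> str:
    """Convert a stored Unix timestamp to a human-readable local time."""
    local = datetime.fromtimestamp(ts, tz=timezone.utc).astimezone(ZoneInfo(zone))
    return local.strftime("%Y-%m-%d %H:%M %Z")

print(render_local(stored_ts, "Europe/Paris"))      # 2025-01-01 01:00 CET
print(render_local(stored_ts, "America/New_York"))  # 2024-12-31 19:00 EST
```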
For APIs and databases, Unix time (in seconds or milliseconds) is often the safest choice. It requires no timezone context to interpret and compares trivially with arithmetic operators.
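For example, sorting and duration maths work directly on the integers (the event names below are made up for illustration):

```python
events = [
    {"name": "rollback", "ts": 1735693200},
    {"name": "deploy",   "ts": 1735689600},
]

# Chronological order is plain integer comparison; no timezone handling needed.
events.sort(key=lambda e: e["ts"])
elapsed = events[-1]["ts"] - events[0]["ts"]  # seconds between first and last event
print([e["name"] for e in events], elapsed)   # ['deploy', 'rollback'] 3600
```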