Unix Timestamps and Time Zones: From Epoch to ISO 8601

Everything developers need to know about Unix timestamps: epoch time, UTC vs local time, ISO 8601 formatting, leap seconds, the 2038 problem, and cross-language handling.

What a Unix timestamp is

A Unix timestamp is the number of seconds that have elapsed since 00:00:00 UTC on January 1, 1970 — known as the Unix epoch. It is a single integer representing an absolute point in time, independent of time zones or calendar representations.

This simplicity is its power. Storing timestamps as integers eliminates ambiguity about time zones, daylight saving, and date formats. Two servers on different continents agree exactly on the meaning of 1704067200 — it is the same instant for both, regardless of which clocks display what.
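That agreement is easy to verify; a minimal Python sketch (standard library only) converting the integer above to a UTC datetime and back:

```python
from datetime import datetime, timezone

# The same integer denotes the same instant no matter where the code runs.
ts = 1704067200
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2024-01-01T00:00:00+00:00

# Round trip: the datetime converts back to the same integer.
assert int(dt.timestamp()) == ts
```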

Seconds, milliseconds, and nanoseconds

Unix timestamps originally counted whole seconds. Many modern systems count milliseconds since the epoch (13-digit integers instead of 10) for sub-second precision. JavaScript’s Date.now() is the most familiar example. Some systems use microseconds or nanoseconds for tighter precision.

Mixing units is a classic bug source. 1704067200 in seconds is January 1, 2024. 1704067200 in milliseconds is January 20, 1970. Always verify the unit when ingesting timestamps from an unfamiliar system.
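One defensive pattern is to guess the unit from the magnitude before converting. The sketch below is an illustrative heuristic, not a standard API; the digit-count thresholds are assumptions that hold for timestamps anywhere near the present era:

```python
def to_seconds(ts: int) -> float:
    """Normalize a timestamp to seconds, guessing the unit by magnitude.

    Heuristic: a present-day timestamp has ~10 digits in seconds,
    ~13 in milliseconds, ~16 in microseconds, ~19 in nanoseconds.
    """
    if ts >= 10**17:   # nanoseconds
        return ts / 1e9
    if ts >= 10**14:   # microseconds
        return ts / 1e6
    if ts >= 10**11:   # milliseconds
        return ts / 1e3
    return float(ts)   # already seconds

assert to_seconds(1704067200) == 1704067200.0      # seconds  -> Jan 2024
assert to_seconds(1704067200000) == 1704067200.0   # millis   -> same instant
```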

UTC and local time

Coordinated Universal Time (UTC) is the reference time standard — essentially Greenwich Mean Time without daylight saving. Every Unix timestamp is relative to UTC. Converting to local time requires applying the time zone offset of the observer.

Storing all timestamps in UTC (or as Unix integers, which are equivalent) and converting to local time only at the display layer is universally recommended. The opposite approach — storing in local time — creates nightmares during daylight saving transitions, server relocations, and cross-region deployments.
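A sketch of the recommended split, using Python's stdlib zoneinfo (3.9+) as the tzdata-backed library: one stored integer, converted per-viewer only at display time:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since 3.9; reads the IANA tz database

# Stored value: a bare Unix integer, always UTC.
stored = 1704067200

# Display layer: convert per user; never mutate the stored value.
utc = datetime.fromtimestamp(stored, tz=timezone.utc)
for zone in ("America/New_York", "Asia/Tokyo"):
    local = utc.astimezone(ZoneInfo(zone))
    print(zone, local.isoformat())
# America/New_York 2023-12-31T19:00:00-05:00
# Asia/Tokyo 2024-01-01T09:00:00+09:00
```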

ISO 8601 formatting

ISO 8601 defines the international standard for date and time representation. Its format YYYY-MM-DDTHH:MM:SS±HH:MM (or with a "Z" for UTC) is unambiguous, sortable as text, and readable by every modern programming language.

  • UTC: 2024-01-01T00:00:00Z or 2024-01-01T00:00:00+00:00
  • With offset: 2024-01-01T03:00:00+03:00 (3 AM local time in a +3 zone)
  • Date only: 2024-01-01
  • With fractional seconds: 2024-01-01T00:00:00.123456Z
  • Week-based: 2024-W01-1 (Monday of ISO week 1 of 2024)
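The formats above map directly onto Python's stdlib datetime, shown here as a quick sketch (note that fromisoformat only accepts the trailing "Z" spelling from Python 3.11 onward):

```python
from datetime import datetime, timezone

# Formatting: isoformat() emits the ±HH:MM offset style.
dt = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(dt.isoformat())  # 2024-01-01T00:00:00+00:00

# Parsing an offset form: 3 AM at +03:00 is midnight UTC.
parsed = datetime.fromisoformat("2024-01-01T03:00:00+03:00")
print(int(parsed.timestamp()))  # 1704067200

# ISO week date: 2024-W01-1 is the Monday of ISO week 1.
print(tuple(dt.isocalendar()))  # (2024, 1, 1)
```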

Daylight saving time

Daylight saving transitions create two especially tricky moments: the "fall back" hour that occurs twice (e.g., 1:30 AM can refer to two different instants), and the "spring forward" hour that doesn’t exist. Naive local-time storage hits both problems head-on.

The robust approach: store UTC timestamps, display in local time, and use the IANA time zone database (tzdata) via libraries (pytz/zoneinfo in Python, java.time.ZoneId, dayjs/luxon in JS). The tz database handles every historical DST rule worldwide and is updated as governments change their rules.
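The "fall back" ambiguity can be made concrete with zoneinfo. In the US, clocks fell back on November 3, 2024, so 1:30 AM in New York named two different instants; PEP 495's `fold` attribute picks between them:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = ZoneInfo("America/New_York")

# 1:30 AM on 2024-11-03 occurs twice in New York.
first = datetime(2024, 11, 3, 1, 30, tzinfo=ny)           # fold=0: still EDT
second = datetime(2024, 11, 3, 1, 30, fold=1, tzinfo=ny)  # fold=1: already EST

# Same wall-clock reading, two distinct instants one hour apart.
assert second.timestamp() - first.timestamp() == 3600
```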

Leap seconds

Occasionally, a leap second is added to UTC to keep it synchronized with astronomical time. Unix time, for simplicity, ignores leap seconds — the day always has exactly 86,400 seconds regardless. This mismatch causes occasional subtle bugs in systems that compare Unix time to atomic time.

For most application logic, the mismatch is irrelevant. For precision timekeeping (financial exchanges, scientific instrumentation), TAI-based time or GPS time is preferred. Google and others pioneered "leap smearing" — distributing the leap second across 24 hours — as a practical workaround.
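The arithmetic behind a linear smear is simple enough to sketch. This is an illustrative model of a 24-hour linear smear, not Google's production implementation: each clock tick is stretched slightly so the window absorbs exactly one extra second.

```python
SMEAR_WINDOW = 86_400  # a 24-hour smear window, in seconds

def smeared_offset(elapsed_in_window: float) -> float:
    """Fraction of the leap second absorbed after `elapsed_in_window` seconds."""
    return elapsed_in_window / SMEAR_WINDOW

assert smeared_offset(0) == 0.0        # window start: clocks agree
assert smeared_offset(43_200) == 0.5   # halfway: half a second absorbed
assert smeared_offset(86_400) == 1.0   # window end: full second absorbed
```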

The Year 2038 problem

Older Unix systems stored timestamps as signed 32-bit integers, which overflow at 2,147,483,647 seconds — 03:14:07 UTC on January 19, 2038. After that instant, 32-bit Unix time wraps to 1901.

Modern 64-bit systems use signed 64-bit integers, which overflow in the year 292,277,026,596 — comfortably beyond any practical concern. However, legacy systems, embedded devices, and some file formats still use 32-bit time. Migration before 2038 is an active concern in industrial automation, aerospace, and older database systems.
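The overflow boundary can be checked directly; a short sketch that simulates the two's-complement wrap of a signed 32-bit counter:

```python
from datetime import datetime, timezone

INT32_MAX = 2**31 - 1  # 2,147,483,647

# The last instant representable in signed 32-bit Unix time.
last = datetime.fromtimestamp(INT32_MAX, tz=timezone.utc)
print(last.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later, a 32-bit counter wraps to the most negative value,
# which lands in December 1901.
wrapped = (INT32_MAX + 1) - 2**32  # simulate two's-complement overflow
print(datetime.fromtimestamp(wrapped, tz=timezone.utc).isoformat())
# 1901-12-13T20:45:52+00:00
```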

About the author

Renato Candido dos Passos

Founder of UtilizAí, with a background in Blockchain, Cryptocurrencies and Finance in the Digital Era, plus complementary studies in Theology, Philosophy and ongoing coursework in Speech-Language Pathology.

Frequently asked questions

How do I get the current Unix timestamp?

In Python: int(time.time()). In JavaScript: Math.floor(Date.now() / 1000) for seconds, or Date.now() directly for milliseconds. In a Unix shell: date +%s. Every modern language has a standard library function returning it as an integer.
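The Python variants above, as one runnable sketch (time.time_ns() is the stdlib route to a millisecond value, mirroring what Date.now() returns in JavaScript):

```python
import time

now_s = int(time.time())               # whole seconds since the epoch
now_ms = time.time_ns() // 1_000_000   # milliseconds, via the nanosecond clock

# The millisecond reading is ~1000x the second reading of the same instant.
assert abs(now_ms - now_s * 1000) < 2000
```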

Should I store dates as strings or as integers?

For absolute instants, Unix timestamps (integers) or ISO 8601 strings both work well. Integers are compact and arithmetic is easy; ISO 8601 strings are human-readable and preserve time zone information when needed. For calendar events (birthdays, holidays), store as dates, not instants.
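For absolute instants the two forms are interchangeable, which a quick round-trip sketch demonstrates:

```python
from datetime import datetime, timezone

# Integer form -> ISO 8601 string form.
ts = 1704067200
iso = datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()
print(iso)  # 2024-01-01T00:00:00+00:00

# String form -> back to the integer; the instant survives unchanged.
assert int(datetime.fromisoformat(iso).timestamp()) == ts
```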

What is the difference between UTC and GMT?

For practical purposes, they are the same. Technically, GMT is solar time at Greenwich; UTC is atomic time kept within 0.9 seconds of UT1 (the modern successor to GMT's solar measure) via leap seconds. Modern computing always means UTC when it says GMT.

How do I handle daylight saving in my application?

Never store local time directly. Store UTC (or Unix time), note the user’s IANA time zone (e.g., America/New_York), and convert at display time using a library that reads tzdata. This handles every historical and future DST rule correctly.

Why does my timestamp look wrong by exactly 1000x?

Seconds-vs-milliseconds confusion. A 10-digit integer near 1.7 billion is likely seconds (2024); a 13-digit integer near 1.7 trillion is likely milliseconds. JavaScript, Java, and many modern APIs default to milliseconds, while Unix shells, Python’s time.time(), and traditional systems use seconds.
