Unix time counts the seconds elapsed since January 1, 1970. The time_t data type used for this purpose on the 32-bit Unix systems of the 1970s is defined as a signed 32-bit integer with a maximum value of 2,147,483,647, i.e. 2^31 − 1. On all compatible Unix and Unix-like systems of the following decades, the number of elapsed seconds will exceed this capacity on Tuesday, January 19, 2038 at 03:14:07 UTC. By convention, the most significant bit (MSB, the leftmost digit in the examples here) distinguishes positive from negative numbers (the sign bit in two's complement), so that when the value 2,147,483,647 (binary 01111111 11111111 11111111 11111111) is exceeded, the count jumps into the negative range, to −2,147,483,648 (binary 10000000 00000000 00000000 00000000). If the conversion from Unix time to date and time does not account for this, the value is unintentionally interpreted as Friday, December 13, 1901 at 20:45:52 UTC. In software development, this problem is known as an integer (counter) overflow.
Without countermeasures, the economic impact could be severe, especially since some 32-bit Unix systems that remain ABI-compatible with UNIX are still in use (see POSIX). Whereas in areas such as Unix servers and PCs the transition to 64-bit architectures can be assumed to be complete, many embedded systems with Unix-like operating systems still run on 32-bit architectures, even though their service life is often many times longer than that of desktop and server systems (e.g. routers, electronic measuring devices, automotive systems, IoT devices, televisions, and equipment for plant control and building monitoring). In addition, when existing software was ported from 32 to 64 bits, it may not have been fully verified that the 64-bit timestamps are processed correctly and without truncation. Adapting or modernizing the affected computer systems in companies and institutions, however delayed, is necessary in the long term and would reduce the probability of failure.
Just like the Year-2000 bug (Y2K), the root of the 2038 issue lies in a design choice driven by the hardware constraints of its time: saving every precious byte by truncating time data. In 1999, the two-digit year limit threatened to roll clocks back a full century; in 2038, the numerical limit of a 32-bit signed integer could send systems 137 years into the past. Both scenarios demonstrate the same lesson: short-term efficiency gains can turn into large-scale systemic risks decades later if technical debt is allowed to accumulate unchecked.
How the Y2K38 Bug Can Break Timers and Security Checks
An example of a year 2038 error is a validity check based on a timestamp: the current time is saved at the start of a process, which makes it possible to ensure that no more than a specified amount of time elapses before the process completes (for example, automatic logout after a few minutes in online banking to prevent misuse). If the system clock jumps to the year 1901 within such a period, the difference between the current time and the start time becomes negative and stays negative. If the program expects that difference to reach a positive minimum value (e.g. 5 minutes after the start of the process), it waits in vain: the difference always remains smaller than the target value. This can leave sessions open for an undesirably long time or cause endless loops, which to the end user look as if the program has stopped responding.
In the Unix environment, the transition from 32-bit to 64-bit architectures led to the "long" base type of the C programming language being widened from 32 to 64 bits (technically: a change from the ILP32 to the LP64 model; see data types in C). This data type corresponds to the traditional definition of time_t as the largest available base type before "long long" was standardized with C99 and UNIX98. Such 64-bit systems have thus been converted to a POSIX timestamp with 64-bit seconds since January 1, 1970, which will work reliably for about 292 billion years.
Nevertheless, switching to new 64-bit processor architectures (x64, Itanium/IA-64, IBM POWER5, UltraSPARC, PA-RISC, MIPS, ARMv8) is not enough by itself: although it simplifies system-side adaptation, it does not eliminate the need to comb through and recompile all programs that use rigid 32-bit formats. Most programs have since been adapted for 64-bit architectures, but it is still quite possible that, at various points in a program, the 64-bit timestamp supplied by the system is incorrectly handled as a 32-bit value, so that only the low-order 32 bits are kept, and these in turn take the value −2^31 (December 13, 1901) on January 19, 2038. 32-bit programs also remain in use, e.g. on 64-bit multilib systems, and may not be adaptable because of existing ABI compatibility. In both cases, i.e. for 32-bit programs (or 64-bit-capable programs that have since been adapted but are compiled for 32-bit systems) and for 64-bit programs that have not been fully revised and checked, 32-bit timestamps may still be in use.
To keep 32-bit systems and programs that are still in use usable beyond 2038, some operating systems have also changed the definition of time_t to 64 bits on 32-bit architectures: NetBSD from version 6.0 (2012), OpenBSD from version 5.5 (2014), and the Linux kernel from version 5.6 (2020). Although 64-bit alternatives had already been proposed at the application level, the old binary interface (ABI) was usually retained, because changing it would have required a complete recompilation of all 32-bit programs. For individual programs, the GNU C library, one of the most widely used C standard libraries, had already introduced a time64_t definition for the transition.[11] A similar definition, __time64_t, is used in the Windows environment, and with Visual C++ 2005 the default time_t was changed to 64 bits.[12] Because the widespread x86 distributions had already been discontinued for 32-bit "IA-32" (sometimes also called "i386") around 2020 and fully superseded by their 64-bit variants (x64, also known as "amd64" or "x86-64"), most Linux distributions never completely solved the year 2038 problem for 32-bit x86, but remained ABI-compatible instead.
The 32-bit distributions that still exist began converting all 32-bit timestamps in existing software to 64-bit data types in the course of the 2020s. Debian GNU/Linux, for example, only announced in 2024 that it would completely convert all existing 32-bit ports (e.g. ARMv7), with the exception of the 32-bit x86 architecture (i386 and i386-hurd). The x86 architecture in particular has changed considerably since the switch to 64-bit, for instance gaining twice as many registers, which can also benefit 32-bit applications, but only in the 64-bit operating mode of amd64/x64. Because this mode is not fully compatible with i386/IA-32, a new binary interface, "x32", was created under Linux, which, as a new development, did not have to be compatible with the existing 32-bit x86 ABI (i386). Since x32 runs in native 64-bit mode, some data types remain 64-bit, including time_t, so 32-bit programs on 64-bit "x32" systems are affected in exactly the same way as under x64 or, if the existing software is adapted correctly, not affected at all.
Another workaround is to switch programs from the Unix time counter to a new time base; counting milliseconds or microseconds with 64-bit counters (which do not require a 64-bit architecture) is already common, especially in embedded systems with real-time requirements on this scale. Newer time APIs generally offer greater precision and span than Unix time, for example Java's System.currentTimeMillis (64-bit milliseconds since January 1, 1970; sufficient for about 292 million years) and .NET's System.DateTime.Now.Ticks (64-bit ticks of 100 nanoseconds, i.e. tenths of a microsecond, since January 1, 0001; sufficient for about 29,227 years). Database-backed transactions often use TIMESTAMP values, which the SQL-92 standard defines with microsecond precision (also accessible via ODBC/JDBC) and which databases usually represent as an offset from a day counter (SQL DATE); even at 32 bits, the day counter has a much larger span, although its underlying epoch varies widely between systems. If such data types are used for timestamps throughout a program, the limitations of the Unix time counter do not apply.
Another workaround is to store the timestamp as a character string, as provided for in ISO 8601, e.g. as a YYYYMMDDhhmmss timestamp such as "20140823142216". Such strings are spared from overflow problems at least until December 31, 9999 at 23:59:59, unless internal operations (e.g. calculating the difference between two timestamps) convert them into a problematic binary format.
The Year 2036 Problem: Another Date Limit Ahead
Closely related to the year 2038 problem is the year 2036 problem (numeronym: Y2K36). On Thursday, February 7, 2036 at 06:28:16 UTC, the 32-bit counter of the Network Time Protocol (NTP), a time synchronization protocol originally developed for UNIX, will overflow. Although this problem has been solved in modern implementations (see RFC 5905), many devices, especially embedded systems, still follow the old Time Protocol standard (RFC 868).
Here, too, the background is that the time is transmitted as a 32-bit number of seconds, but unsigned and with the start time January 1, 1900, 00:00:00 UTC. If the systems are implemented correctly, the rollover causes no (major) problems, since time synchronization works with a difference method.
However, depending on the implementation, invalid values can occur when working with both NTP and Unix formats. This applies in particular to the growing number of embedded systems in the Internet of Things, which lack a battery-backed real-time clock and must therefore obtain the system time in NTP format from time servers after each start, then convert it to the usual Unix time for further use (by subtracting the epoch difference). If, after failed connection attempts (not uncommon), the time is reported as 0, i.e. 1900-01-01 00:00:00 UTC, which is representable in Unix time only as a negative value, an unguarded conversion to Unix format yields an invalid and (if signed) negative value, with effects on the system comparable to those described above.