commit cc57ba266c868a0fcb88a9da57dd76dc5481ad6f
parent fb7432abd229e95fc2aea60ebe2fff6f83ddf9cc
Author: Tobias Bengfort <tobias.bengfort@posteo.de>
Date: 2026-04-20 16:31
post: float time
Diffstat
| A | _content/posts/2026-01-16-float-time/index.md | 70 | ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ |
1 files changed, 70 insertions, 0 deletions
diff --git a/_content/posts/2026-01-16-float-time/index.md b/_content/posts/2026-01-16-float-time/index.md
@@ -0,0 +1,70 @@
---
title: What happens if we represent unix time as floats?
date: 2026-01-16
tags: [code, math, time]
description: "When evaluating the performance of some software component, I want to get high precision. But when I talk about millions of years in the future, I don't care about the exact second."
---

Unix time is the number of seconds that have elapsed since the epoch
(1970-01-01 00:00:00 UTC). To my knowledge, it is usually expressed as an
integer, which gives us fun issues like
[Y2K38](https://en.wikipedia.org/wiki/Year_2038_problem).

Floating point numbers (or floats) are great because they can represent both
tiny and huge numbers. The downside is that they lose precision as they get
farther away from 0.

That sounds like a good fit for measuring time. I want to get high precision
when evaluating the performance of some software component (right now). But
when I talk about millions of years in the future, I don't care about the exact
second.

So let's have a quick look at the math and see if this is workable.

## Basics

An IEEE 754 float consists of a single bit encoding the sign $s = \pm 1$, $i$
bits for the mantissa $m$, and $j$ bits for the exponent $e$. Its value then is:

$$x = s \cdot (1 + m \cdot 2^{-i}) \cdot 2^e$$

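As a quick sanity check of this formula (the helper below is mine, not from the post), we can decode a 32-bit float's bit pattern and plug the fields back in. For binary32, $i = 23$ and the stored exponent carries a bias of 127:

```python
import struct

# Decode a 32-bit float into sign, unbiased exponent, and mantissa bits.
# Only valid for normal numbers (stored exponent neither 0 nor 255).
def decode32(x: float):
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    s = 1 if bits >> 31 == 0 else -1
    e = ((bits >> 23) & 0xFF) - 127   # remove the exponent bias
    m = bits & 0x7FFFFF               # the 23 mantissa bits
    return s, e, m

s, e, m = decode32(1.5)
assert s * (1 + m * 2**-23) * 2**e == 1.5
```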
We get the smallest possible increment for a given number by increasing the
mantissa by 1:

$$
s \cdot (1 + (m + 1) \cdot 2^{-i}) \cdot 2^e - s \cdot (1 + m \cdot 2^{-i}) \cdot 2^e
= s \cdot 2^{e - i}
\approx x \cdot 2^{-i}
$$

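We can check the $x \cdot 2^{-i}$ approximation numerically with NumPy's `spacing`, which returns the distance from a value to the next representable float (the approximation smooths over the exponent steps, so the two can differ by up to a factor of two within a binade):

```python
import numpy as np

# ULP of a 32-bit float near the current unix time (~1.77e9 s in early 2026).
t = np.float32(1.77e9)
print(np.spacing(t))       # exact increment: 2**(30 - 23) = 128.0 seconds
print(1.77e9 * 2**-23)     # the smooth approximation: ~211 seconds
```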
## The actual numbers

For 32-bit floats, $i$ is 23 and the exponent ranges from -126 to 127. That means:

- The smallest possible increment (close to 0, i.e. on 1970-01-01) is $2^{-149}$ seconds, which is very, very small.
- The largest possible value is about 10 nonillion years in the future, at which point the increment will be $2^{104}$ seconds, about a septillion years.
- The increment at the time of writing is roughly 211 seconds, which is much too coarse for most use cases.

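The "largest possible value" figure can be reproduced directly from `np.finfo` (using 365.25-day years):

```python
import numpy as np

# Convert the largest finite float32 (~3.4e38 seconds) into years.
SECONDS_PER_YEAR = 365.25 * 86400
max_years = float(np.finfo(np.float32).max) / SECONDS_PER_YEAR
print(f"{max_years:.2e}")   # ~1.08e+31, i.e. about 10 nonillion years
```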
We can also invert the calculation to find out when we cross certain thresholds:

- We reached a single second of precision sometime around 1970-04-08.
- We reached a minute of precision sometime around 1985-12-13.
- We will reach an hour of precision sometime around 2926-12-20.
- We will reach a day of precision in the year 24,937.

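These dates follow from inverting the approximation: the increment reaches `step` seconds at `step * 2**23` seconds after the epoch. A short check (the day-of-precision date in the year 24,937 overflows Python's `datetime` range, so it is left out):

```python
from datetime import datetime, timedelta, timezone

# Invert increment = x * 2**-23: the increment reaches `step` seconds
# at x = step * 2**23 seconds after the epoch.
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
for name, step in [("second", 1), ("minute", 60), ("hour", 3600)]:
    print(name, (epoch + timedelta(seconds=step * 2**23)).date())
```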
For 64-bit floats, $i$ is 52. That means:

- The increment at the time of writing is roughly 0.4 micro seconds.
- We will reach a full micro second of precision sometime around 2112-09-18.
- We will reach a milli second of precision in the year 144,683.

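The same checks work for 64-bit floats with `math.ulp` from the standard library:

```python
import math

# ULP of a 64-bit float at roughly the current unix time (early 2026).
t = 1.77e9
print(math.ulp(t))                   # 2**(30 - 52) ≈ 2.4e-7 s, under half a microsecond

# Inverting step = x * 2**-52 for a one-microsecond increment:
x = 1e-6 * 2**52                     # ~4.5e9 seconds after the epoch
print(1970 + x / (365.25 * 86400))   # lands in the early 2110s
```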
## Conclusion

32-bit floats are probably too coarse. But using 64-bit floats for measuring
time might actually be a good idea. They can represent plenty of time in the
past and future. And they provide sub-microsecond precision for the current time.

What I found surprising is how quickly the precision deteriorates. For 32-bit
floats, it starts out at $2^{-149}$ seconds and already reaches a single second
about three months later. So if you need extremely high precision, your best
bet is probably to define a custom epoch.
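A minimal sketch of that custom-epoch idea (the names `CUSTOM_EPOCH` and `rel_now` are mine, not from the post): subtract a recent reference time before storing floats, so the values stay near zero, where the increments are smallest.

```python
import math
import time

# Hypothetical sketch: timestamps relative to a custom epoch (program start)
# instead of 1970, so float64 precision is spent near zero.
CUSTOM_EPOCH = time.time()

def rel_now() -> float:
    """Seconds since the custom epoch."""
    return time.time() - CUSTOM_EPOCH

# A fresh relative timestamp is small, so its smallest representable
# increment is far finer than that of a raw unix timestamp.
print(math.ulp(rel_now()), "<", math.ulp(time.time()))
```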