- cross-posted to:
- programmer_humor@programming.dev
If we’re still using JavaScript in the year 275,760 we deserve the resulting epoch collapse
Bold of you to assume that humanity will even exist at that point. In fact, it’d be pretty bold to assume we’ll exist in 2757; forget those last two digits.
I’m not even sure we’ll be existing in 2057 at this rate
Or even make it till 20:57
Epochalypse…
Javascript will subsume all other languages by then. Humanity won’t even know that others existed, or even what it is. It’ll just be called Script, the way you tell computers what to do when the AI doesn’t understand your prompts correctly.
Who knew The Cosmic AC would be running js.
Thanks, I love oddly comforting techno theology
Fuck, the title got my hopes up.
I’m still thinking about the 2037 problem.
Not to be that guy, but it is the 2038 problem for 32 bit epoch. Check this out: https://en.m.wikipedia.org/wiki/Year_2038_problem
But yeah, that’s a much bigger issue.
No, the 2037 problem is fixing the Y2k38 problem in 2037.
Before that there’s no problem :)
right, my bad.
2038*
This problem is now so old that any hardware still affected by it in 2038 will be thoroughly obsolete
The replacement for the JavaScript Date API is on the cusp of finalization.
They just got an RFC proposal approved by the IETF for an extension to the way datetime strings should be serialized that adds support for non-Gregorian calendar systems. That seems to have been the last round of red tape holding them back. Now it’s just a handful of bugfix PRs to merge and browsers can begin shipping implementations unflagged.
You can watch the progress here if you find it interesting. In the meantime, there is a polyfill out now if you want to get started with it.
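For context, the extension being described sounds like the IETF's extended date/time format (RFC 9557, assuming that's the RFC in question), which adds bracketed time-zone and calendar annotations on top of a plain RFC 3339 timestamp — the last bracket is what gets you non-Gregorian calendars:

```
1996-12-19T16:39:57-08:00                                   plain RFC 3339
1996-12-19T16:39:57-08:00[America/Los_Angeles]              + time zone name
1996-12-19T16:39:57-08:00[America/Los_Angeles][u-ca=hebrew] + calendar system
```

Older parsers can ignore the suffixes and still read the timestamp, which is presumably why it got through.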
All numbers in JS are stored as 64-bit floats, so past a certain point, precision starts to degrade.
Teach me why pretty pls
Well that’s how floating point units work.
The following is more an explanation of the principle than a precise description of float values in programming, since working with binary values has its own quirks, especially with values lower than one. But anyway:
Think about a number written as a base and an exponent. For example, 1,000,000 can be represented as 1 × 10^6, while 1,000,001 becomes 1.000001 × 10^6. If you want more precision, or bigger numbers at the same precision, you have to keep adding more and more decimal places, and that hits a limit at a certain amount.
So basically a floating-point value can either hold really big numbers or really precise small ones. But you cannot achieve both at the same time.
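You can see the limit directly in JS: integers stay exact only up to 2^53, and the Date API caps its range well before that precision would ever matter.

```javascript
// 64-bit floats have a 53-bit significand, so exact integers
// run out at 2**53:
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2**53 - 1
console.log(2 ** 53 + 1 === 2 ** 53); // true — the +1 can't be represented

// Date.now() today is around 1.7e12 ms. The spec caps Date at
// ±8.64e15 ms, which is where the year 275,760 in the title comes from:
console.log(new Date(8.64e15).toISOString());
// → +275760-09-13T00:00:00.000Z
```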
Alternatively: since both `float`s (32 bit) and `double`s (64 bit) are represented in binary, we can directly compare them to the possible values an `int` (32 bit) and a `long` (64 bit) have. That is to say, a `float` has the same number of possible values as an `int` does (and a `double` the same as a `long`). That's quite a lot of values, but still ultimately limited.

Since we generally use decimal numbers that look like `1.5` or `3.14`, it's set up so the values are clustered around 0, and then at every power of 2 you have half as many — meaning you have high precision around zero (what you use and care about in practice) and less precision as you move towards negative infinity and positive infinity.

In essence it's a fancy fraction that is most precise when it's representing a small value and less precise as the value gets farther from zero.
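That clustering is observable directly: the gap between adjacent doubles doubles at every power of two, so just above 1 it's about 2.2 × 10^-16, while past 2^53 it's already 2.

```javascript
// Gap between 1 and the next representable double:
console.log(Number.EPSILON);           // 2.220446049250313e-16
console.log(1 + Number.EPSILON === 1); // false — still distinguishable

// At 1e16 (past 2**53) the gap between neighbours is 2,
// so adding 1 changes nothing, but adding 2 does:
console.log(1e16 + 1 === 1e16);        // true
console.log(1e16 + 2 === 1e16);        // false

// Near zero, on the other hand, values can get absurdly small:
console.log(Number.MIN_VALUE);         // 5e-324
```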
Thanks!
so is anything in any computer