Or is the author keeping that close to the vest?
> I just want to see it. Just once. I want to watch that earthquake ripple through all of global electronic timekeeping. I want to see which organisations make it to January morning with nothing on fire.
> Tidal Energy Is Not Renewable
> It is incorrect to consider tidal power as renewable energy. Harnessing tidal energy will pose more severe problems than using fossil fuels. ... Tides are induced by the rotation of the Earth with respect to the gravity of the Moon and Sun. The rotational energy of the Earth is naturally dissipated by tides slowly. Consuming tidal energy further reduces the rotational energy, accelerates the energy loss rate, and decelerates the rotation of the Earth.
https://cs.stanford.edu/people/zjl/pdf/tide.pdf
(I've not evaluated these claims in detail, I just thought you may be interested.)
By "completely wrong" do you contend that using tidal energy doesn't decelerate the Earth's rotation?
(Coastlines already form natural "dams". All we would do is move the coastline inward a bit, so that as the tide flows through, it spins turbines. The water gets moved the same. The energy is already dissipated at the coasts; it would just happen at the dams instead. Total dissipated energy is the same this way. Okay, if we start putting 4 km tall dams in the middle of the ocean we might somehow meaningfully change the viscosity.)
The thesis in the PDF seems to be that somehow we can push the brakes harder. Maybe. (If we let the Sun/Moon grab some water and then use the Earth to get it moving again, that necessarily slows down the rotation. But there's already a lot of energy in moving and compressing water itself; sure, it's "incompressible" compared to gases, but there's plenty of molecular stuff to move around where energy can go. There's some tidal heating too.)
It's verbose, but in a bad way, and it's structured in a way that makes it hard to comprehend. (And the whole pro-fossil-fuel framing is just .. huh.)
> Because there's nothing computer programmers handle better than special cases which only occur every hundred years or so. How in the world could this be an improvement?
If it's less than 60 times as disruptive, it's an improvement.
> everybody's going to hate it at least sixty times as much as they hate leap seconds now
I doubt that.
> If something is difficult, you do it more often.
It has to be more often than leap seconds to really get those gears oiled. Unless this is a proposal to do leap deciseconds, I don't think this method reduces the pain.
> But before 1972 UTC and TAI were kept in much closer synchronisation [...] (No, I'm not advocating returning to this state of affairs.)
Coward! But seriously, I think this hurts the previous argument significantly.
Yeah, if we're not going to abolish leap everything in UTC completely (which I think we should), then leaps need to happen at most quarterly. When the last leap second came through, it had been long enough since the one before that the bugs in Linux had been fixed and then unfixed again.
Unless, of course, you are a satellite.
Converting into normal units has no reason to care about leap seconds when it can use the actual offset and be a few orders of magnitude more precise.
If synchronizing our clocks with the Sun is still important by then, then announce it a decade or two beforehand, and make a large ceremony out of the thing.
You could use TAI-ish timestamps right now, but people are going to get confused and make mistakes if your timestamps are almost UTC but several seconds off. And if UTC stays constant then you need to nudge all the time zones around and that sounds even worse.
I think the latter is less fraught with problems and doesn't require software changes in order to realize updates.
You're going to have to deal with the offset one way or another. I'm simply suggesting a means to keep the offset but not have to worry about how to realize that in terms of complicated hardware hacks.
That's a choice we have right now, today, and as far as I'm aware nobody chooses TAI for their systems. I think that says a lot.
> You're going to have to deal with the offset one way or another.
Most things don't really care. Oh the clock's off by 1.7 seconds, let's make an adjustment and check again tomorrow. Leap seconds are below the noise floor.
But I guess they decided to keep that invariant, which makes negative leap seconds really hard. Maybe that’s a good trade off.
Of course, it's a near certainty that it won't work just fine. But it should.
// We're seeing panics across various platforms where consecutive calls
// to `Instant::now`, such as via the `elapsed` function, are panicking
// as they're going backwards. Placed here is a last-ditch effort to try
// to fix things up. We keep a global "latest now" instance which is
// returned instead of what the OS says if the OS goes backwards.
//
// To hopefully mitigate the impact of this, a few platforms are
// excluded as "these at least haven't gone backwards yet".
Why exclude any platforms, just in case they start going backwards for whatever reason?
Having a mutex right there in the hot path of Instant::now is not great for performance. You generally expect getting monotonic time to be very fast, and some code is written with that assumption (e.g. tracing code measuring spans).
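For what it's worth, the clamp itself is tiny. A rough sketch of the idea in Python (a hypothetical helper, not the actual Rust std code), lock and all, so you can see exactly where the hot-path contention comes from:

    import threading
    import time

    _lock = threading.Lock()
    _latest = 0.0  # the latest value we have ever handed out

    def monotonic_now() -> float:
        """Return a 'now' that never goes backwards, even if the OS clock does."""
        global _latest
        os_now = time.monotonic()   # what the OS claims
        with _lock:                 # the hot-path lock the parent comment worries about
            if os_now > _latest:
                _latest = os_now
            return _latest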
Sometimes the OS is broken in a way Rust can fix. For example, Rust's current std::sync::RwLock actually does what you wanted on Windows, while the C++ std::shared_mutex doesn't: it's documented as working, but it doesn't work, because the OS is broken and the fix is just on their internal git "next release" branch, not in the Windows you or your customers are running.
But sometimes you're just out of luck. Some minority or older operating systems can't do std::fs::remove_dir_all correctly, so, too bad you get the platform behaviour. It's probably fine, unless it isn't, in which case you should use a real OS.
Contrast this with the GPS or PTP timescales, which simply count seconds since a well-defined epoch. Formatting the date and time for humans to read is a sensibly separate process.
I'll admit that it's not that simple: there are repeating fields in that message to avoid having to wait 12.5 minutes between each update.
They could have kept the week number in the repeating message at 10 bits while having a non-repeating MSB?
The time code gets 300 bits. It lasts 6 seconds and it's repeated every 30 seconds. Critically, it will always occur precisely on the top (:00 seconds) of the minute, or the bottom (:30 seconds) of the minute.
Interestingly, most of the time code is used for /correction/ parameters, as precise values for these are required to accurately calculate time. Satellites themselves drift in performance as they age so these parameters are not static for the lifetime of the deployment.
If you want to get into the nitty gritty, again, made in the 70s, so it's pretty approachable today: https://www.gps.gov/technical/ps/1995-SPS-signal-specificati...
And, yes, I understand that time message gets sent at a much higher rate, hence the “it’s not that simple” part.
But I still have a hard time believing that it would have been impossible to find 2 or 3 bits in the overall message to include the MSBs for the weeks. GPS units with built-in support for WNRO (due to permanent storage) would be able to ignore it; other units would be able to correct the epoch after 12 minutes (or a multiple, in case of data corruption).
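(For reference, the usual firmware-side workaround is to disambiguate the 10-bit week against a date the receiver can't possibly predate, e.g. its build date. A rough Python sketch, hypothetical helper name:)

    from datetime import datetime, timezone

    GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)

    def resolve_gps_week(week_10bit: int, not_before: datetime) -> int:
        """Expand a 10-bit broadcast week (0-1023) to a full week count,
        assuming the true date is no earlier than `not_before`."""
        ref_week = (not_before - GPS_EPOCH).days // 7
        rollovers = max(0, (ref_week - week_10bit + 1023) // 1024)
        return week_10bit + 1024 * rollovers

    # A receiver built in 2020 seeing broadcast week 100 infers full week 2148:
    print(resolve_gps_week(100, datetime(2020, 1, 1, tzinfo=timezone.utc)))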
The POSIX committee thought it would be very convenient if you could get the time of day by doing time() % 86400.
Very convenient indeed...
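And it does work, precisely because POSIX pretends every day has exactly 86400 seconds; a quick sketch:

    import time

    now = time.time()                       # POSIX seconds since the epoch
    secs = int(now) % 86400                 # seconds since midnight UTC
    h, rem = divmod(secs, 3600)
    m, s = divmod(rem, 60)
    print(f"{h:02d}:{m:02d}:{s:02d} UTC")   # matches `date -u`, give or take a second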
There Is No AntiTimekeeping Division ...
can you elaborate on the purpose of this? is it for legal requirements? some kind of "black box" recorder? why does timezone matter for such low-level event data?
If something goes wrong, a lot of parties both internal and external are interested in reviewing the (extremely detailed) logs for the purposes of insurance, or court-martial, or whatever.
Since the list of delta updates is queued, you could bake N years of updates into your system if it's a fire-and-forget OS. A number between 5 and 10 seems reasonable to me, but seeing as we are discussing living with 50 years of drift, it could be higher. And sure, for this to work, allow non-integer deltas again; or just have a rule that you round the applied adjustment down.
Seems no harder to bake in the last 50 years of leap seconds than to include the Olson TZ database in your distro.
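For scale, the whole "database" is a screenful. A sketch of what baking it in could look like; the Unix timestamps and TAI-UTC values below are from the published leap second list, truncated here for brevity:

    # (start of validity as a Unix timestamp, TAI - UTC in seconds)
    LEAP_TABLE = [
        (1341100800, 35),  # 2012-07-01
        (1435708800, 36),  # 2015-07-01
        (1483228800, 37),  # 2017-01-01
    ]

    def tai_minus_utc(unix_ts: float) -> int:
        """Return TAI-UTC for a given Unix timestamp (entries before 2012 omitted)."""
        offset = 34  # value in force just before the first entry above
        for start, delta in LEAP_TABLE:
            if unix_ts >= start:
                offset = delta
        return offset

    print(tai_minus_utc(1720000000))  # 37, and it stays 37 until the next IERS bulletin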
Yet another possibility is to put any possible kind of leap second at the beginning of the year. The only requirement set by past CGPMs is that dUTC = UT1 - UTC should be in [-0.9, +0.9] seconds, so we can always put a positive leap second when dUTC is in [-0.9, -0.1] s and a negative leap second when dUTC is in [+0.1, +0.9] s, dramatically increasing the number of leap seconds without violating the dUTC requirement. If the dUTC requirement were a bit more relaxed (say, [-2, +2] seconds) then we could even mandate a leap second every year!
But well, no, I'm strongly against the current form of leap seconds because it is already problematic in the short term and will be useless in the long term anyway. Recall that ΔT is expected to increase quadratically in the long term; this means that the effectiveness of leap seconds is limited to the point when any small fixed number of leap seconds per year (12 in the current system, but anything larger than 1 will cause a problem) is no longer sufficient, and that point is not far from the point where the magnitude of ΔT is significantly large and leap seconds are absolutely required. As DST demonstrates, the world is probably fine with ~2,000 seconds of ΔT, which wouldn't happen before 2500, but by then we would already be using two leap seconds per year on average. We would probably abolish leap seconds for that reason alone.
If this is not enough to ping-pong around the correct result, because you're drifting too fast, just increase the rate of changes, now that the system is well oiled.
That said, we should not have leap seconds, just timezones around TAI. If UTC wants to be a timezone that changes every six months with a second offset, so be it. Bureaucratic systems are already in place to solve that.
However, regardless, it's basically never solar noon at clock noon for anyone using time zones rather than local solar time, which of course would require a custom timezone for that exact location.
The time of year does matter a bit, but the effect is smaller than the effect of timezones. Their equation of time article includes a nice graph of the variations in time of solar noon at Greenwich. https://www.timeanddate.com/astronomy/equation-of-time.html
And a related thing, the analemma - the sun's position at the same place on earth at solar noon over the course of a year doesn't resemble an arc, it's a figure-8: https://en.wikipedia.org/wiki/Analemma
Moving backwards means you (your code) experiences the same time twice.
So from my point of view, they were spot on
Are there timezone areas in the world which change their UTC differential fluidly throughout the year and by the yard to pin midday to the highest position of the sun?
I'll definitely say that that'd not only surprise me but blow my mind if true!
If there isn't ... Then what was your point?
there is always some drift between the sun's position and midday
Your point is about as coherent as saying CO2 levels in the air aren't going higher, because the sea is blue
We're the exception, not the rule.
Brazil, the largest country in South America, had DST until very recently (2019). A large chunk of South America observed DST at some point. Some stopped in the 90s, others stopped much more recently.
This gets very hard to reason about because Arizona doesn't _do_ daylight savings. Like, what happens when we request 24 hours of data from them, on the day daylight savings time flips over? Do we want midnight-to-midnight data, or do we actually want 24 hours of data, which might wind up timestamped differently since there's a duplicate 2AM one day and a missing 2AM another day.
A few years ago we had a contractor write some extremely messy code to handle cases like that, and soon it's going to be my job to try and refactor it into something readable.
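For anyone facing a similar refactor, the two interpretations really are different queries. A small zoneinfo sketch (hypothetical helper name) that makes the difference visible:

    from datetime import date, datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    def local_day_bounds_utc(d: date, tz_name: str):
        """UTC instants for [local midnight, next local midnight) on date d.
        On a DST transition day this span is 23 or 25 hours long, so
        "midnight to midnight" and "24 hours of data" are different requests."""
        tz = ZoneInfo(tz_name)
        start = datetime(d.year, d.month, d.day, tzinfo=tz)
        nxt = d + timedelta(days=1)
        end = datetime(nxt.year, nxt.month, nxt.day, tzinfo=tz)
        return start.astimezone(timezone.utc), end.astimezone(timezone.utc)

    # 2024-03-10 is the US spring-forward date:
    s, e = local_day_bounds_utc(date(2024, 3, 10), "America/Denver")
    print(e - s)   # 23:00:00 (a 23-hour "day")
    s, e = local_day_bounds_utc(date(2024, 3, 10), "America/Phoenix")
    print(e - s)   # 24:00:00 (Arizona doesn't observe DST)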
I shudder to think how many bugs must show up for people who use right-to-left languages or use non-ascii charsets - especially before Unicode & emoji were popular. I’m ashamed to admit I don’t even know how to test if my software works properly in languages like Arabic.
Of course software made in the Bay Area assumes the whole planet uses pacific time. I’m sorry to throw shade, but that’s entirely in character for the area.
It's not just software. EVs struggle with cold and hot climates. They'll get better, of course. But with the engineers living in mostly temperate climates, conditions outside the development environment get less upfront attention.
Not in the Bay Area, but every other west-coast company I've been involved with is pretty militant about using UTC, typically due to battle wounds from communication challenges or time-related bugs.
I would be very curious to hear which other major companies are deploying systems on local time in 2024.
UTC has leap seconds; Unix time doesn't. It's defined to have 86400 seconds per day, so there's no place to put a leap second. Instead, it either duplicates a timestamp or does "leap smearing" - slightly changing the duration of a second around where the leap second is.
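Concretely, day boundaries in Unix time are always exact multiples of 86400, which is why the leap second at the end of 2016 had nowhere to go (assuming the plain, non-smearing behaviour):

    days = 17167          # days from 1970-01-01 to 2017-01-01 (including 12 leap days)
    print(days * 86400)   # 1483228800 == 2017-01-01T00:00:00Z
    # 2016-12-31T23:59:60Z, the leap second, has to share that same number,
    # so a naive kernel repeats the timestamp and code briefly sees time stand still.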
Because DST might be a thing right now in that zone, but may no longer be next year, in which case the historical DST is needed to refer to times in the past.
Same for dates a bit further back when countries changed calendars and some days are missing.
So, we basically need a mapping of UTC -> local time as a function of time, and store this forever.
(For durations, we might need to have a mapping from TAI to UTC as a function of time, because a leap second messes with duration length. Smeared leap seconds are even worse in that regard)
You need to store it for certain applications, e.g. timetabling. If school starts at 9am, it starts at 9am local time, and if local time changes (DST, or more rarely a change in the time zone rules) then the school start time moves with it. Or similarly, if a commuter train timetable has it stopping at a certain station at 7.05am, that'll be local time in the local time zone.
So, to answer what I think you are saying - normally you divide the day into chunks ("periods"). At our children's school, which is a primary school, there are only two notional periods a day (morning and afternoon); when our son goes to secondary school next year (which appears to use the same software) there will be several periods a day, one per subject.
Anyway, in the system, a period is a class, not just in the academic sense, but also in the OO sense, and as such it has instances - “morning period” starts at 8.35 am local time any day the school is open. So that start time would be stored without a date, just a time plus time zone. But then, there is an instance of “morning period” every one of those days, which starts at a particular instant in time - today it starts at 2024-07-04T08:35+10:00. And yes, you could store that just in UTC, and convert to the school’s local time on display.
I suppose there are three main data types you really need: (1) date without time (2) local time in specific timezone (3) UTC instant
For (1), whether you need the timezone or not depends on the use case. For stuff like dates of births, you generally won’t know and don’t really need to know the timezone in which they were born. But, for other applications, it becomes important, since Wednesday afternoon in the Americas is Thursday morning in Oceania and eastern Asia, so whether it is Wednesday or Thursday depends on your timezone. People expect days to start and end at local midnight, not UTC midnight - which for me is 10 or 11 o’clock in the morning.
For hire dates, many jurisdictions have employment laws with different rules depending on how long you've worked there. So you need to know how many days it has been since hire to know which legal regulations apply to the employee. And obviously that is meant to be calculated in local time; if you do it in UTC or the HQ timezone it could be a day out, which might cause legal issues. (E.g. in some jurisdictions it is easier to fire an employee in the first six months: you wait until the last day to terminate them, except that because you got the day off by one, the last day was actually yesterday, and now you have terminated them illegally.)
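A sketch of the safe version (hypothetical helper, using Python's zoneinfo): count whole days in the employee's local calendar, not in UTC or HQ time.

    from datetime import date, datetime
    from zoneinfo import ZoneInfo

    def days_since_hire(hire_date: date, employee_tz: str) -> int:
        """Whole calendar days since hire, in the employee's local time zone."""
        today_local = datetime.now(ZoneInfo(employee_tz)).date()
        return (today_local - hire_date).days

    # Around UTC midnight it may still be "yesterday" in the Americas,
    # which is exactly the off-by-one the UTC version would produce:
    print(days_since_hire(date(2024, 1, 2), "America/Los_Angeles"))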
Time zones are a curse on humanity.
The thing that's a curse on humanity is daylight saving time.
I have coworkers and friends in different timezones, and timezones do literally nothing but complicate coordination. Even if it's a small friction, they are 100% friction.
As far as I can tell, the main benefit of timezones is that it allows people within the same timezone to converse as if timezones don't exist. Which would also be the case if timezones didn't exist.
But there’s PT (pacific time), as well as PST and PDT (pacific standard and daylight savings time) if you need to be specific with whether or not daylight savings is happening.
EST, according to Google, is the east coast of America when daylight savings is not happening. For Sydney and Melbourne, you want AEST / AEDT / AET depending on if you want to specify Australian east coast standard, daylight savings or current time (which changes depending on the date).
It’s all hilariously exhausting to keep track of. Simple enough you think you can remember it, but complex enough you will miss your meeting even though you checked twice because the calendar was set to the wrong timezone and it didn’t matter until today. The relative time between Australia and California changes 4 times a year by 1 hour, depending on the local daylight savings time in both countries. I hate it.
Although AEST/AEDT are probably more common (in part to avoid confusion with American EST/EDT), people use EST/EDT for the Australian time zones too - random example: https://www.support.transport.qld.gov.au/qt/systemmaintenanc...
Some computer systems (probably designed by Americans) want timezones to have abbreviations but insist they can only have three letters, so in those systems the Australian timezones have three letters. I definitely remember seeing EST meaning UTC+10 on Unix systems before
edit: not trivializing missing the talk, i would be pissed.
So the timezone difference between US and Australia (or at least their DST using parts, since in both countries some states don’t observe DST) is 2 hours more in one half of the year than the other. And it changes four times. In January, Australia has DST and US doesn’t. Then in March US starts DST and moves one hour away. In April, Australia ends DST and moves another hour away (in the opposite direction). In October, Australia starts DST and moves an hour closer to US. In November, US ends DST and moves another hour closer to Australia.
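You can watch all four flips with a few lines of zoneinfo:

    from datetime import datetime
    from zoneinfo import ZoneInfo

    LA, SYD = ZoneInfo("America/Los_Angeles"), ZoneInfo("Australia/Sydney")
    for month, day in [(1, 15), (3, 20), (5, 15), (10, 20), (12, 15)]:
        t = datetime(2024, month, day, 12, 0, tzinfo=LA)
        ahead = t.astimezone(SYD).utcoffset() - t.utcoffset()
        print(f"2024-{month:02d}-{day:02d}: Sydney is {ahead} ahead of LA")
    # 19:00, 18:00, 17:00, 18:00, 19:00: the gap changes four times a year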
Of course as soon as one of those assumptions is not 100% true, using UTC internally and local time for display is usually by far the better option (though there can still be difficulties there – time is never as easy as you'd think it should be).
The trouble comes when people used to working local-only (timezone wise) slap together a PoC of something that might need timezone awareness, and don't fix the deficiency at any point as they progress through PoC->prototype->alpha->beta->v1. The longer you leave the change the harder it is to make, and it not being easy is why, once something nominally reaches v1 (or often the first alpha release), such a fix is seldom ever made.
Timezone awareness has improved massively in recent years though. I think some cloud providers have accidentally helped there by defaulting to UTC (for instance all AzureSQL DBs default to UTC for everything, as do VMs and other things in Azure). Though here in the UK minor issues due to bad assumptions are still common when we transition back or forth between GMT and BST.
The way to avoid this pain is to just use UTC on day one, regardless of requirements. Hard and fast rule that everything needs to be UTC. Need to display it? Converting to local time is trivial.
Same thing for text. Use Unicode unless otherwise specified.
That can cause extra concerns for always-in-one-timezone systems where you might care where midnight falls (“did this event happen today or yesterday?” is a question that needs extra steps to answer, for instance). Nothing complicated, but extra work you might want to skip at the quick PoC stage.
But yes, beyond PoC work and all but the simplest other work, I'd agree with UTC all the way from the ground up.
Trying to make sense of them as purely time keeping artifacts is bound to have misunderstandings.
https://www.bbc.com/worklife/article/20170609-its-time-to-pu....
> The Spanish also go to sleep later than their European neighbours. According to Eurostat, Spaniards go to bed, on average, at midnight, compared to Germans at 10pm, the French at 10.30pm and Italians at 11pm.
What makes no sense is taking something useful, UTC, and redefining it out of existence. Then what time do you use if you really do care about drift? Do we invent a new UTC?
That would be great.
> Then what time do you use if you really do care about drift?
Nobody uses UTC because they want to know where the Sun is to the nearest second. People who actually need to care about variations in the Earth's rotation speed (e.g. astronomers) already need far fancier stuff than just UTC. People use UTC because someone else made a mistake and decided they should use UTC, like a government standard, or an operating system vendor, or whatever. Unfortunately the best way to correct all those millions of mistakes is to redefine UTC rather than convince everyone in the world to simultaneously switch to TAI.
> taking something useful, UTC, and redefining it out of existence
I question that UTC is useful. What utility does it have over TAI, outside of interoperability with other people who are using UTC? Again, anyone who actually needs to care about Earth rotation speed changes already needs to use something better than plain UTC, and my argument in my original comment is that drift that is small on a scale of a human lifetime is not an actual problem for anyone alive today or in the future.
Well, what's stopping you?
Getting the world to switch timescales is orders of magnitude more difficult than redefining the currently used timescale. The latter can be done in the BIPM backrooms by a small committee; the former needs action and agreement from pretty much everyone.
Few people actually care about the precise earth rotation, and there are time bases for these people that are better than UTC anyways. Sunrise and sunset are important, but the middle of the day, not so much.
Why don't you care about that stopwatch's drift over the past 50 years, the temperature-related variation, the errors induced by motion, air pressure and humidity?
Would you prefer a count that's averaged over many clocks from the same manufacturer, or over many clocks from many manufacturers?
I just don't give a damn whether solar time is a couple minutes off of calendar time.
Seconds should be seconds. Solar time is a human construct, it shouldn't affect computers.
That's a mechanical device with multiple sources of error and a need to be wound regularly.
The questions I asked are about common sources of drift in mechanical watches and whether or not they cared enough to attempt to account for them.
The issues you bring up are only relevant to a particular set of devices that can be used as timepieces measuring the amount of time that elapses between activation and deactivation, and those are all stopwatches.
Answering your question more directly though, why would you want it in an OS? The OS primarily exists to mediate shared resources, and to a slightly lesser degree to sensibly wrap shared code everyone is definitely using (e.g., chrome and hacker news don't have to care about my LCD driver).
What exactly do you gain by baking TAI into the OS? You lose in update availability, OS install size, application-specific customizability, runtime performance of TAI function calls, .... You'd want to gain something for those costs.
> Why would I use a library for a timezone?
Most people do? Even libc localization isn't a part of the OS (and is fraught with issues; never use libc localization), and that's the most primitive timezone library most people use. Everything else is baked into their language runtime or a third-party like nodatime. TAI isn't special in that regard.
https://en.wikipedia.org/wiki/Geoid#/media/File:Geoid_undula...
Approximations are pretty easy until you get into the details, dammit.
Works if you don't mind a lab bench full of equipment, but doesn't appear to match the specification of the request.
Still, thanks for your input.
The HP 5061 was introduced in 1964, why do you think a counter that can reference the frequency standard is not possible?
You might want to be more rigorous about reading specifications.
> why do you think a counter that can reference the frequency standard is not possible?
How on earth did you strawman my thinking to reach that bogus conclusion?
You might want to be more rigorous about reading comments and projecting.
People make mistakes.
- _TAI_: Sampled average of ticks in the (very noninertial) frame of the surface of an implicitly-defined idealized rigid Earth. Each tick is further a sampled approximation of our definition of a second, which invokes idealizations at absolute zero.
- _UT1_: Mostly the same frame as TAI. Each tick is considered a sample, trying to measure the "true rotation" of some idealized rigid Earth, modulo any geophysics. Note, the definition invokes quite sophisticated models of celestial mechanics, and explicitly ignores certain kinds of "high frequency perturbations".
- _UTC_: Based on UT1, but taking into account some basic, empirically-measured geophysical processes.
Depending on the particular physical processes, models, and sampling methods you choose, the quantity you get for "seconds since 1970" will be different, and there will be (complicated) relationships for how each of those processes transform tick counts between each other. In some cases, the transformations will be a priori impossible, only permitting approximations under simplifying assumptions.
IMO, the remarkable thing is that the various ticks all line up as well as they do, which is why we can mostly get away with treating all these as a single unified Ticking Time concept. On the other hand, I also think the various standards do a reasonable job delineating messy reality into potentially useful tick-producing processes and the systems needed to make those practically useful.
As a software engineer, the lesson for me is that I can't always ask "what time is it?", "how long has it been?", or "which came first?" Instead, I may need to shift focus onto different invariants in the problem I'm trying to solve.
If we were using TAI instead of UTC we very easily could.
TAI would be a synchronization target, just like UTC. Any device can have its own high-precision clock and periodically synchronize with other clocks (just like NTP does), which to my knowledge is how most distributed systems work.
> TAI doesn't even make sense on timescales that exceed Earth's lifetime
I disagree; 10^100 seconds in the future is a perfectly valid time in any time system.
> It doesn't encode enough information to answer questions about elapsed time of even ideal clocks at real, physical locations to arbitrary precision.
Those clocks would use the TAI to avoid drifting from each other.
Professor Einstein would like a word...
They do not.
> Depending on the particular physical processes
They have chosen their particular physical process: "as if someone stood with a stopwatch"
It's not well-enough defined for all purposes.
Where is that person standing? Earth's surface is shifting and moving all over the place willy-nilly, so how do you define that particular location in the first place? Over what timescales is that definition valid? What physical process do you mean by stopwatch? What particular synchronization protocols do you define? What physical models do your definitions invoke?
Answers to these kinds of questions will generally generate mutually-disagreeing time standards. I mean, with suitable transformation rules they'll often agree up to some precision limit, but if you need anything beyond that, you've now got to choose the one(s) that best correlate with the natural ticks in your problem domain.
Also, any standard like "someone standing with a stopwatch" is forced to just declare the stopwatch, by fiat, as the Definition of Time, a la the platinum-iridium prototype kilogram. We all know how great that was. Do you now want a team of canonical stopwatches and some aggregation process? How do you deal with measurable drift between the tick rates?
But a negative leap second shouldn't be so noticeable, since it will just be a jump from 31 Dec 23:59:58 straight to 1 Jan 00:00:00 (the 23:59:59 second is skipped).
Facebook made a lot of FUD about a negative leap second, but I don't think it should be THAT much of a concern.
The problem time keeping is meant to solve is coordination of physical movements between separated parties that have exceptionally limited and delayed communication capabilities.
All of the time issues we deal with today are in service of keeping these important properties of our civil time intact.
In terms of computers we should happily ignore the problems of civil time, simply just use TAI, and use a provided database to convert this into "human display time" whenever necessary.
Also, astronomers care about leap seconds, and astronomy is an important tool in making observations which lead to predictions about our universe, so I wouldn't be so cavalier in labeling it an "obscure curiosity."
That's because their telescopes are bolted to the Earth. So when the rotation rate changes, the time at which certain features appear in the telescope also changes.
They might consider using a higher-precision version and making smaller updates more often, but since the rate is not perfectly predictable, can be altered by local events on Earth, and needs to be a value all astronomers on Earth agree on, the second is the most reasonable unit of change for them to use.
Leap seconds are comparatively simple problem...
No queries came in during a whole second? Who cares?
Compare this to the problem of there being a unix timestamp integer that refers to two different times, which is what naive positive leap second implementations resulted in. Real time spans being impossibly negative.
Positive leap seconds are orders of magnitude worse.
Unless you're writing time keeping software, negative leap seconds can be ignored and will be "just fine". And if you are writing time keeping or benchmarking software, then this is just another thing to not have a misconception about.
> Generally UNIX time -> UTC conversion is considered to be infallible
And generally there's two seconds between N and N+2.
Similar things apply with a positive leap second, although in that case the result is not an error. However, for positive leap seconds, when you are converting a UTC time in parts, into the UNIX timestamp, there is an additional time 23:59:60 for some dates. This can still be converted, although if you have a separate field for nanoseconds (or other divisions of a second) in the original UTC timestamp, then you will have to either subtract one second from the subsecond divisions, or change :60 to :59 and add one second.
And, then again, you will also have to consider the leap second when dealing with SI seconds, whether they are positive or negative leap seconds. You will get the wrong answer if you fail to consider leap seconds either way (although in the case of negative leap seconds, there might not be a "right answer" in some cases).
But, even if you do not consider these things, the timestamps will not be off by more than one second in either direction; this is not normally a problem, although in some cases it might be (which are the cases when it will be important to deal with leap seconds properly).
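For the positive case the folding is mechanical. A sketch of the ":60 becomes :59 plus one second" option described above (hypothetical helper):

    import calendar

    def utc_parts_to_unix(y, mo, d, h, mi, s, nanos=0):
        """Broken-down UTC to Unix time, tolerating the :60 of a positive leap second."""
        carry = 0
        if s == 60:              # fold the leap second forward
            s, carry = 59, 1
        ts = calendar.timegm((y, mo, d, h, mi, s, 0, 0, 0)) + carry
        return ts + nanos / 1e9

    # The leap second at the end of 2016 collapses onto new year's midnight:
    print(utc_parts_to_unix(2016, 12, 31, 23, 59, 60))   # 1483228800.0
    print(utc_parts_to_unix(2017, 1, 1, 0, 0, 0))        # 1483228800.0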
Maybe we should do it every 6 months no matter what. That way, everyone would know that on June 30 and December 31, there would be smeared seconds.
That makes the "do it rarely and people will screw it up" problem go away.
Aside from the fact that, just by ignoring the problem, it'll probably go away: we've just had a period of adding a load of leap seconds to roughly compensate for the Earth rotating "too slowly" overall, in a system that's inherently irregular, actually speeds up and slows down all the time, and just happened to be "too slow" ON AVERAGE. It now seems that the rotation speed, on average, is "too fast", and now we need to undo some of that adjustment. If we'd just left it alone, it'd have been fine.
And for all the hassle these leap seconds cause, what exactly has been the benefit? Since 1972, we've had 27 seconds added. 27 WHOLE SECONDS. That would have made absolutely no difference to anybody's life if the sunrise and sunset was 27 seconds later.
Just think: in 100 years, when your great-grandchildren are enjoying their lives, we might be a whole minute wrong. And if you went the other direction, back as far as all of recorded human history, we might be out by an hour. Many of us are routinely forced to have our clocks out by an hour for "daylight savings time" every year anyway.
For the very few cases where it might be useful to know the ACTUAL difference between the rotation and alignment of an arbitrary fixed point on Earth and an arbitrary fixed point on the Sun, THEY can use their own clock for that specialised purpose, and maintain a fixed adjustment to the atomic-clock-based time that everyone else uses. They don't even have to round to a whole second for that - they can say NASA time is TAI+1.234s, for instance, and the only people who need to know or care are NASA themselves.
If you care about the position of the sun to second precision, then time alone tells you nothing - you need location too. Ironically, most people get their location these days using GPS which is based on TAI, not UTC with leap seconds.
If you don’t care about the position of the sun to second precision then leap seconds are a nuisance.
There’s zero reason to have leap seconds in the definition of time. It should be a database, like tz, that software that wants Earth rotation updates for calculation of sun position can download and incorporate into its calculations for increased accuracy.
The fundamental problem with leap seconds is that you can’t predict what the leap second delta between UTC (legal time) and TAI (absolute time) will be in the future. That’s unacceptable, I should be able to know how many seconds there are between 1 July 2024 and 1 July 2026 without needing to wait for a Time Lord to determine it.
They're not to blame - Davros is responsible for the leap seconds.
Unless you are using time and (celestial) observations to determine your location. It's not coincidence that US Naval Observatory (and its peers) is one of the key origins of UTC.
Speak for yourself! Part of good engineering is knowing when over-engineered perfection is useless. The inventors of UTC/leap seconds did not, and made a serious mistake the rest of us have had to live with.
> Since 1972, we've had 27 seconds added. 27 WHOLE SECONDS. That would have made absolutely no difference to anybody's life if the sunrise and sunset was 27 seconds later.
The delta increases quadratically though due to Earth’s rotation slowing down in the long term, so the problem will only get more severe.
Can you name some of these systems?
Leap seconds are trying to ensure that, standing at the point we arbitrarily picked for UTC at noon, the sun will be at its peak.
That doesn't feel like something that NEEDS sub-second precision. I mean - by walking from one side of a time zone to the other, you can have an hour or more of imprecision. https://i.ibb.co/r000S5s/caxxyddlsgp11.jpg
> Is there a single system currently in existence which can handle a leap minute?
Every system can handle a leap-hour, they happen twice a year in many countries, courtesy of DST.
So let's just wait until there's 15 minutes of error - a couple of millennia from now. Then move the UTC prime meridian about 3.75 degrees / 400 km, fixing the error while keeping UTC monotonically increasing at 1 second per second.
And to update our clocks, use the one mechanism that we already rely on comfortably - timezones. Add 15 minutes to each timezone, the same way we handle DST now.
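The arithmetic checks out, for what it's worth:

    deg_per_minute_of_time = 360 / (24 * 60)    # Earth turns 0.25 degrees per minute
    shift_deg = 15 * deg_per_minute_of_time     # 3.75 degrees for a 15-minute error
    km_per_deg_at_equator = 40075 / 360         # roughly 111 km per degree
    print(shift_deg, round(shift_deg * km_per_deg_at_equator))   # 3.75, ~417 km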
It does not need to be sub-second, but sub-seconds add up over time.
The Julian calendar wasn't that far off, but the error was cumulative, so fixing things eventually took a ten-day jump:
* https://en.wikipedia.org/wiki/Gregorian_calendar
It is considered easier to have a few small(er), semi-regular jumps than to suddenly have to make a giant, one-off jump in the future.
UTC changing does not happen all the time: the 'all the time' thing is a display/UI offset against UTC. Time itself is not changed in those situations, but it would be with a 'UTC jump'.
The closest thing we get to universal time 'changing' are leap years and February 29, and we regularly get stories about people messing that up. And if something as common and regular as that gets fumbled I have little hope for pulling off a one-off jump of UTC.
It is coördinating the universal part that I would imagine to be the difficult part.
Really, this is a non-problem.
I'd rather build gradual adjustments into our systems so that they have to be resilient to this sort of thing. Sure, leaving it up to our distant descendants is enticing, but then they're going to have a Y2K-sized problem to fix. I'd rather leave a legacy of systems that were designed to be resilient.
No they aren't. tzdb updates happen several times a year already, and you don't notice.
(assuming our grand-grand-grand-grand-etc-children still have something like a tzdb)
Just like we have to focus on making applications secure, we need to ensure our applications don't shit their pants when the time changes in unexpected ways.
TZ changes are a display delta against UTC. The closest thing we get to universal time 'changing' are leap years and February 29, and we regularly get stories about people messing that up. If something as common as February 29 is fumbled, what are the odds of pulling a one-off event?
And as someone who was a sysadmin when the US changed its TZ rules many moons ago: the sudden rule change was anything but straightforward, given how static those rules had been for the long history of software development that had occurred until that point. A lot of software is US-developed, and there was little/zero consideration given to updating the rules. (Though I think that event caused a lot of developers to be more understanding.)
It took about 400 years for all countries to finally go from Julian to Gregorian. :)
Not if you want to interoperate with basically any other system that uses time.
Leap seconds are absolutely part of the local datetime conversion functionality of time and not part of the counting functionality. Unix time should never have been different from TAI time in the first place, but somewhere along the line some idiot made the absolute, undeniable blunder of putting the seconds offset of the Sun's position relative to Earth into the counter of seconds rather than into the function that formats the local datetime.
Anyway, here we are. It's totally fair that UTC includes leap seconds, but unfortunately they passed UTC as-is into the Unix-time seconds counter, which makes no sense since computers don't print UTC datetimes directly anyway; they have special date() formatting functions that could easily, and problem-free, add seconds as needed.
So what's the solution? Well, telling everyone to suddenly adopt TAI is difficult. But we could just abolish leap seconds from UTC, effectively migrating everyone without them realising it. Now I know that sounds odd. After all, this blunder wasn't made by the UTC creators; it was the Unix guys that fucked up here. UTC is literally meant to track the sun, after all. But still, it's the easiest fix, and we can always create a new UTC, UTC_leap_seconds, that we can use in our datetime printing functions if we want. Abolishing leap seconds from UTC, and thus from Unix time, will make the world run more smoothly. It's also already been planned, so you don't need to do anything but wait! :)
But, once again: that already exists!
Shifting the prime meridian 400km seems like it'd have some unintended consequences akin to the Y2K problem. Its physical location is used as a reference point in some coordinate systems, ECEF for example.
That sounds like a headache of epic proportions when contrasted with the alternative of a leap seconds.
This actually happened last March[0] and I'm not aware of anyone outside Kazakhstan being inconvenienced by it at all. tzdb updates happen several times a year and everything keeps working fine.
Perhaps a couple of millennia down the road, we'll have the technology to speed up or slow down the rotation of the Earth in order to keep it synced with UTC.
To be clear, for decades we've not sought sub-second precision. We've sought precision of a second. Maybe we should seek more: the big problem with leap seconds is that they've been rare enough to not be tested completely well, but common enough to provoke problems.
> And to update our clocks, use the one mechanism that we already rely on comfortably - timezones. Add 15 minutes to each timezone, the same way we handle DST now.
Then everything in the world that displays time is going to need a software update when you're going to do this, or it will show time off by 15 minutes.
I do think it could be sane to let UTC's offset drift up to +3.0 s to address this temporary trend of a "fast Earth". Of course, that's going to make leap seconds rare for a while, and it will come back to bite us when the recently-untested event of a positive leap second happens again.
Have you seen how often tzdata updates are published? There tend to be multiple each year. Stuff already should be auto updating this.
(Python, but good reference: https://github.com/python/tzdata/releases )
Heck, every existing radio-derived clock that presents time to users would be obsolete (lots of GPS, WWV/DCF watches, etc).
A lot of them have TZ-related breakage, but they usually let you set a manual hour offset or consume a daylight flag.
In any case, I think frequent, well-tested procedures are better than an exceptional procedure on the order of decades or centuries.
This might already happen in 200-300 years, because the delta increases quadratically: https://www.ucolick.org/~sla/leapsecs/deltat.html
We'd have two prime meridians. One for time and another for longitude. (Not a deal breaker but we'd have to plan for it.)
We also moved the traditional prime meridian by a hundred meters back in 1974 with the adoption of the IERS Reference Meridian.
*Though on a small scale, tectonic movements fuzz the idea of "location" in general.
Any DST aware system I've checked handles this as a timezone change rather than changing the underlying time, expressed in UTC. Even for locations where DST's onset removes an hour rather than adding one.
https://gavinhoward.com/2023/02/make-the-leap-second-first-c...
Different time standards exist. International Atomic Time (TAI) is a strictly monotonic clock based on the average of multiple atomic clocks throughout the world. UTC is defined as a tweaking of TAI to approximate UT1, a rough definition of which would be the historic mean solar time at longitude 0. UTC shifts TAI to stay within one second of UT1. That's both its definition and the whole point of its even existing. I repeat: if UTC didn't track solar time, it would have no reason whatsoever to even exist.
If you want TAI, just use that. Stop complaining that UTC is not it.
The only things we should care about are local time (both as in clock time and in sun-position time) and universal timestamps to coordinate between local times and to use as canonical representations; leap seconds are useless or damaging to both of these.
We already easily accept over 60 minutes of offset between clock time and sun-position time and I do not think that 3600 seconds can be ok but 3601 or 3599 cannot.
It is also perfectly easy for a country to change its timezones or to adopt a non-whole-hour timezone (multiple countries have 15- or 30-minute offsets)
So my proposal is that the IERS never announces any leap anything ever again and in a few centuries some countries will shift their timezone by 15 or 30 minutes.
This will be much more easily compatible with all current systems (we can right now create the timezones CETP and CETM for central European time plus 15 and minus 15 minutes), and in 400 years, each country at its leisure, Europe/Berlin will be equivalent to CETM* (and CESTM if we have failed to remove DST).
No need to handle irregular minutes or hours, no fractional-second offsets, no need to account for edge cases that are far too easy to ignore. Just keep writing code that works with timezones (as you should already be doing), do not hardcode conversions between regional timezones (Europe/Berlin) and offsets (CET, CEST, CETM) (as you should already be doing), and just keep updating your timezone tables (as you should already be doing, since they change often).
Just store one of:
- UTC and timezone
- dateless time with/without timezone
- date with no time with/without timezone
And you are ok, already prepared for the next few centuries of no leap seconds.
* I don't know whether it would be CETM (that is +00:45) or CETP (+01:15).
Few people or systems even interact with a clock that is precise enough to detect the difference, and instead rely on clocks that are 100's or even 1000's of times less accurate than what would be required to even detect a leap second. And these clocks undergo thousands of corrections between leap seconds without a complaint.
We add an entire day to the calendar about every 4 years. It's not a problem because everyone is aware of it. So I think the only real thing that needs to change about leap seconds is awareness of them.
Leap seconds are needed in civil time because people's schedules are still dominated by the Sun. Not to the second, maybe not even to the hour. But the leap second was chosen because it's large enough to be infrequent, and tiny enough that only a tiny fraction of systems will notice.
Unix time made the blunder of incorporating leap seconds all those years ago, so it's not atomic; hence the issue.
Now you might think it's crazy that Unix time incorporated non-atomic leap seconds when it doesn't incorporate any other part of the Sun's local relative position in its counter, and you'd be 100% right about that. Leap seconds absolutely belong in the local time printing functions and nowhere else. But the blunder was made, and now computers by default don't have atomic time, and here we are.
I'd love to snap my fingers and make everyone use TAI, but unfortunately I'm stuck with UTC, GPS and TAI depending on sources.
Sometimes multiple hours off. China has a single time zone.
As far as I can tell, people care very little about the clocks being in sync with the sun. We've effectively run massive experiments demonstrating this.
The existence of China as an outlier isn't much evidence. The most people on one side do not keep the same schedule as those on the other. If daylight really weren't an issue, there would be only one timezone for the entire Earth. So you can't point to one while ignoring the reason all of the others exist.
Any time that is supposed to represent a wall-clock time has to deal with it.
There is currently no known answer to the question "What is the interval between the Unix timestamps @1720026000 and @1820026000?" However, there is a well-defined answer to "What is the UTC time for @1820026000?", and thus also "What is the local time, in a timezone with UTC offset X, for @1820026000?"
If we were to redefine Unix time to use Atomic Time, then we could answer the first question but not the second.
> We add an entire day to the calendar about every 4 years. It's not a problem because everyone is aware of it. So I think the only real thing that needs to change about leap seconds is awareness of them.
It's not just "about every 4 years" it's precisely "every 4 years, except for centuries, except for quad-centuries." If you give me any year, I can tell you if it will be a leap year. I can program a non-internet connected device and it will get leap years right indefinitely. Leap seconds are not predictable, but rather determined empirically and announced about 6 months ahead of time. Any device that wants to properly account for them needs to be updated at least every 6 months to be correct.
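The whole Gregorian rule fits in one line, which is exactly why an offline device can get leap years right forever but can never get leap seconds right:

    def is_leap_year(year: int) -> bool:
        """Every 4 years, except for centuries, except for quad-centuries."""
        return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

    assert is_leap_year(2024) and is_leap_year(2000) and not is_leap_year(1900)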
It isn't just a code problem. Since leap seconds aren't predictable, you have to somehow distribute information about new leap seconds to everything, which is especially difficult for devices that aren't connected to the Internet.
And as the article mentioned, there are multiple ways to deal with leap seconds, and there are tradeoffs involved in choosing which one, and depending on the circumstance different ways. And since different methods work best in different situations, that can result in inconsistencies between different systems.