
Enhance the date and time precision in Add methods #73198

Merged
merged 3 commits into dotnet:main
Aug 4, 2022

Conversation

tarekgh
Member

@tarekgh tarekgh commented Aug 2, 2022

Fixes #66815

@dotnet-issue-labeler

I couldn't figure out the best area label to add to this PR. If you have write-permissions please help me learn by adding exactly one area label.

@ghost ghost assigned tarekgh Aug 2, 2022
@tarekgh tarekgh requested a review from tannergooding August 2, 2022 00:28
@tarekgh tarekgh added this to the 7.0.0 milestone Aug 2, 2022
@ghost

ghost commented Aug 2, 2022

Tagging subscribers to this area: @dotnet/area-system-runtime
See info in area-owners.md if you want to be subscribed.

Issue Details

Fixes #66815

Author: tarekgh
Assignees: tarekgh
Labels:

area-System.Runtime

Milestone: -

@tarekgh
Member Author

tarekgh commented Aug 2, 2022

CC @mattjohnsonpint

public DateTime AddDays(double value)
{
-    return Add(value, MillisPerDay);
+    ...
+    return AddTicks(ticks);
}
Contributor

I'm curious why this is needed, vs AddTicks((long) (value * TicksPerDay))

Is there a perf benefit or something? Or some edge case with floating-point math?

Member Author

It is the latter: some edge cases with floating-point math. It can have precision loss with some values. @tannergooding can explain more details about that.

Contributor

Interesting. If that's true, then I guess it's still the case that DateTime.AddUnits(n) != DateTime.Add(TimeSpan.FromUnits(n)) for some units and some values of n?

Member Author

@tarekgh tarekgh Aug 2, 2022

Yes, it looks so. This can happen today too, I guess. We can track fixing TimeSpan later. I wanted to get this fix in now for .NET 7.0, and then we can address any remaining issues as needed in the next releases.

Member

Right. double can only represent integrals exactly up to 2^53; anything higher than this will definitely have rounding error. Since there are approximately 31,540,000,000,000,000 nanoseconds per year, that is approximately 315,400,000,000,000 ticks per year. 2^53 / TicksPerYear gives you approximately 28.54 years (or 10,425 days) before definite rounding error starts creeping in.
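The 2^53 cutoff is easy to verify directly. As an illustration (Python's float is the same IEEE 754 double):

```python
# Every integer up to 2**53 has an exact double representation;
# at 2**53 + 1, consecutive integers start to collide.
limit = 2.0 ** 53                  # 9,007,199,254,740,992

print(limit + 1 == limit)          # True: 2**53 + 1 rounds back down to 2**53
print(limit - 1 == 2 ** 53 - 1)    # True: 2**53 - 1 is still exact
```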

This ends up with two very visible edge-case considerations:

  1. If the user inputs a double value that is already greater than 2^53, there is guaranteed loss of precision.
  2. If the user inputs a double value where (value * TicksPerUnit) > 2^53, there is guaranteed loss of precision.

There is then a third, but less visible, consideration: these two rules apply mainly to the integral portion. The fractional portion is much trickier to understand, since almost anything the user inputs has some loss of data due to double only being able to represent multiples of powers of 2.

What this means is that at 2^52 to 2^53, double can represent no fractional data (that is it can only represent multiples of 1). At 2^51 to 2^52, double can only represent multiples of 0.5, at 2^50 to 2^51, double can only represent multiples of 0.25, then 0.125, then 0.0625, and so on doubling in precision every smaller "power of two" down to double.Epsilon at Zero. This likewise halves the precision every larger "power of two" (2^53 to 2^54 can only represent multiples of 2, then 4, then 8, and so on).
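The interval-by-interval spacing described above can be read off directly with math.ulp (a Python illustration; float is the same IEEE 754 double):

```python
import math

# math.ulp(x) is the spacing between x and the next representable double:
# it doubles with each larger power-of-two interval and halves with each smaller one.
print(math.ulp(2.0 ** 53))   # 2.0   -> only even integers in [2^53, 2^54)
print(math.ulp(2.0 ** 52))   # 1.0   -> only whole numbers in [2^52, 2^53)
print(math.ulp(2.0 ** 51))   # 0.5   -> only multiples of 0.5 in [2^51, 2^52)
print(math.ulp(2.0 ** 50))   # 0.25  -> only multiples of 0.25 in [2^50, 2^51)
```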

This boils down to: the closer (value * TicksPerUnit) is to 2^53, the more loss of data, compared to the input value, you will observe. Splitting it into an integral and a fractional portion helps reduce the overall rounding error by ensuring that the integral portion is handled exactly and the fractional portion is handled by itself, which allows the most accuracy when it is scaled up.
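To make the integral/fractional split concrete, here is a small sketch in Python (whose float is the same IEEE 754 double); the names and structure are illustrative, not the actual BCL code:

```python
import math

TICKS_PER_DAY = 864_000_000_000  # 86,400 seconds/day * 10,000,000 ticks/second

def days_to_ticks_naive(value: float) -> int:
    # Single multiply: the product itself gets rounded to the nearest double.
    return int(value * TICKS_PER_DAY)

def days_to_ticks_split(value: float) -> int:
    # Handle the whole-day count in exact integer math, and only push
    # the fractional remainder through a floating-point multiply.
    integral = math.trunc(value)        # exact for |value| < 2**53
    fraction = value - integral         # exact subtraction in IEEE arithmetic
    return integral * TICKS_PER_DAY + round(fraction * TICKS_PER_DAY)

# Past ~10,425 days the naive product exceeds 2**53 and is rounded to a
# multiple of 2 ticks, while the split version still resolves the fraction.
v = 20_000 + 2 ** -20  # a day count plus a tiny, exactly representable fraction
print(days_to_ticks_naive(v))   # 17_280_000_000_823_974
print(days_to_ticks_split(v))   # 17_280_000_000_823_975 (nearest tick to the true value)
```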

=====================

The current algorithm (which is (long)((value * MillisPerUnit) + Adjustment) * TicksPerMillisecond) tries to minimize rounding error by computing a double that is MillisPerUnit rather than one that is TicksPerUnit. This attempts to guarantee millisecond accuracy (but not microsecond or nanosecond) and broadens the range significantly, to roughly 285k years.
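A Python rendition of that millisecond-based scheme (constant names approximate; Python's float is the same IEEE double), applied to the AddSeconds(0.9999999) case from #66815:

```python
MILLIS_PER_SECOND = 1_000
TICKS_PER_MILLISECOND = 10_000
TICKS_PER_SECOND = 10_000_000

def seconds_to_ticks_millis(value: float) -> int:
    # Round to the nearest whole millisecond, then scale to ticks:
    # anything finer than a millisecond is discarded.
    millis = int(value * MILLIS_PER_SECOND + (0.5 if value >= 0 else -0.5))
    return millis * TICKS_PER_MILLISECOND

print(seconds_to_ticks_millis(0.9999999))     # 10_000_000: rounded up to a full second
print(round(0.9999999 * TICKS_PER_SECOND))    # 9_999_999: the nearest tick
```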

Replacing the algorithm with a plain value * TicksPerDay will end up breaching 2^53 much sooner and will cause the fractional part to no longer be considered (making the result off in various edge cases).

A purely correct, tick-accurate approach would be more complex and would likely cost too much in perf compared to the current algorithm.

However, splitting it into integral and fractional portions will likewise end up breaching 2^53 much sooner, but will in turn allow the fractional part to be correctly considered. It likewise maintains millisecond accuracy. This is because, given the upper bound of integer accuracy (9,007,199,254,740,992), millisecond accuracy involves ignoring the lowest 4 digits (9,007,199,254,740,000), and we have a maximum error of only 1024 ticks at 2^63. This is within 102.4 microseconds, which means we can get up to tick accuracy, but no worse than millisecond accuracy.

=====================

As an aside, when considering #66815, we must first consider that var seconds = 0.9999999; is actually seconds = 0.99999990000000005263558477963670156896114349365234375 (as that is the nearest representable double to 0.9999999). It's not going to impact this scenario much, but it is important to remember that values aren't always as "exact" as they might appear, and so results may be "off" in other contexts regardless.
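The exact stored value can be inspected with Python's decimal module, since Python floats are the same IEEE 754 doubles:

```python
from decimal import Decimal

# Decimal(float) shows the exact binary value behind the literal:
# the stored number is slightly above 0.9999999.
print(Decimal(0.9999999))
# 0.99999990000000005263558477963670156896114349365234375
```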

@tarekgh tarekgh merged commit 0c413d0 into dotnet:main Aug 4, 2022
@danmoseley
Member

Do we need a change to tune the remarks in the docs? The original issue mentioned them. Or does it now match the docs?

@tarekgh
Member Author

tarekgh commented Aug 10, 2022

@danmoseley I have logged the doc issue dotnet/dotnet-api-docs#8292 to track updating the docs.

@ghost ghost locked as resolved and limited conversation to collaborators Sep 10, 2022

Successfully merging this pull request may close these issues.

AddSeconds and FromSeconds rounding issues
5 participants