Decide about future of sdk.Dec #7773
Maybe @ValarDragon and/or @rigelrozanski could weigh in here.
ref: #4399 (for arbitrary precision on arithmetic yielding a fixed precision result). This is needed.
Ultimately I stand by my earlier thoughts in that we just need to introduce arbitrary-precision decimal types, which are clearly and explicitly defined at the instantiation of the decimal type, aka we'd have a decimal interface which would be instantiated with a decimal struct created with "X" level of precision. Over and out, GL. Thanks for pinging me @ebuchman; also yeah, @ValarDragon has a great grasp on this issue and has worked on the decimal type.
So what's wrong with using …?
Why would …
So something like a proper …
I think this is exactly what we want.
It is true that … Based on that, why not …?
Glad this is getting looked into again! Totally agreed that there wasn't enough analysis in #1807 viewing the exact error models / analysis into tracking the errors. (Though in defense of that issue and the subsequent work, it seems clear to me that it only had significant benefits over rational, in both error handling and performance.)

However, I also believe that #7772 presents an oversimplification of what is going on and of the guarantees you get. Precision handling is super tricky! A great reference is to see some worked-out examples for floats: https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accuracy_problems

I'd like to give some context on the error models, at least so we have some nice shared state. (The conclusions in #7772 didn't seem right to me.) Floats don't simplify that much with regard to errors; they should only give you more fundamental error causes than Decimal. None of Decimal, Float, or Rat will guarantee that either. Also, if you plan on using normal floats (base 2), you get incredibly weird situations for 'normal' decimals.

The problems you are experiencing won't go away no matter which data type you choose; at best you can do things to track the error being maintained. Every individual floating-point and decimal op has isolated errors you can think about (much easier in the decimal case), and then you can track to what degree these compound as you do more of your computation.

I am personally still in favor of Decimals as the default, as you strictly remove the complications of error due to the sliding exponent, and at least get associativity for addition. I also think that we should build code to track the accumulating error.
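(A quick self-contained sketch, not from the thread, of the float accuracy problems the Wikipedia page above walks through: binary float addition is not even associative.)

```go
package main

import "fmt"

func main() {
	// Binary float64 addition is not associative: the rounding error
	// depends on the order of operations.
	a := (0.1 + 0.2) + 0.3
	b := 0.1 + (0.2 + 0.3)
	fmt.Println(a == b) // false
	fmt.Println(a, b)   // 0.6000000000000001 0.6
}
```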
The key component of this can be phrased a bit differently: "If we want 18 places of final correct precision, what if we actually compute over 36 decimal places of precision, and then truncate?" I think this is great, and exactly what should be done! IIRC we actually had a long conversation about this at one point and marked it #postlaunch. (The idea was to separate 'saved' data types and 'computation' data types.) I'm having trouble finding it on GitHub; maybe it was an in-person convo.

However, I push back on using 36 decimal places within rationals, for reasons alluded to above: its error model is very weird. (Even weirder than floats!) I believe we should be using decimals still, as that gives the most properties.

However, I haven't really justified the 36 decimal places of precision. And actually, you can't justify that number for generic computation. You need more decimal places the more operations you do. It fundamentally has to be white-boxed (or tracked with taint analysis / proper error-analysis techniques) per region of the code.

Happy to brainstorm on designs for implementing this well. We definitely should make our decimals able to gracefully have differing precisions (or use a library that does).
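To make the "compute wide, then round down to storage precision" idea concrete, here is a minimal sketch using cockroachdb/apd (which comes up later in this thread); the 36-digit context and the final quantize step are illustrative assumptions, not a settled design:

```go
package main

import (
	"fmt"

	"github.com/cockroachdb/apd/v3"
)

func main() {
	// Do intermediate arithmetic at 36 significant digits...
	wide := apd.BaseContext.WithPrecision(36)

	x, y := apd.New(12, 0), apd.New(11, 0)
	q := new(apd.Decimal)
	wide.Quo(q, x, y) // 12/11, kept at 36 digits

	r := new(apd.Decimal)
	wide.Mul(r, q, y) // (12/11) * 11

	// ...then round the final result to 18 fractional digits for storage.
	stored := new(apd.Decimal)
	wide.Quantize(stored, r, -18)
	fmt.Println(stored) // 12.000000000000000000
}
```

With only 18 digits end to end, the same round trip loses the repeating digits of 12/11 and does not return 12.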
Thanks for chiming in and sharing all those details @ValarDragon! So I'm currently thinking that it makes sense to align first and foremost on the General Decimal Arithmetic (GDA) specification, which aligns with IEEE 754-2008 and has multiple implementations. GDA specifies how operations should work for both fixed- and arbitrary-precision decimal arithmetic. In terms of choosing a specific precision within GDA, decimal128 (https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which specifies 34 digits of precision, seems like a logical choice. What are your thoughts on taking that approach? If we are able to agree that GDA is the correct specification to use, and possibly that …
I don't agree that we want decimal-based floating points. I fail to see how the variable exponent helps us with any problem we have; it should only introduce more problems AFAICT.
Well, GDA does give us a specification to work with that's gone through robust review and lots of usage, so we have something to base off of for correctness. Also, I think what you're suggesting would basically be GDA with Emin and Emax both fixed to -18 or -34 and an arbitrary coefficient size.
Just chiming in: the issue here is that fundamentally commutativity isn't working for a wide range of arithmetic with the sdk.Dec type, and I observed that it happens with any fraction that needs 18+ decimal points to be represented successfully, e.g. (12/11 * 11) != (12*11 / 11). These are small values, and there is a very wide range that aren't accurate. Which brings me to another question: why did we choose big.Int as the underlying/intermediate type?
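For reference, a self-contained sketch of the failure mode being described, emulating an sdk.Dec-style value as a big.Int scaled by 10^18 with truncating division (an approximation for illustration, not the SDK's actual implementation):

```go
package main

import (
	"fmt"
	"math/big"
)

var unit = new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil) // 10^18

func dec(n int64) *big.Int { return new(big.Int).Mul(big.NewInt(n), unit) }

// quo divides scaled values, truncating to 18 fractional digits.
func quo(x, y *big.Int) *big.Int {
	n := new(big.Int).Mul(x, unit)
	return n.Quo(n, y)
}

// mul multiplies scaled values, scaling the product back down.
func mul(x, y *big.Int) *big.Int {
	p := new(big.Int).Mul(x, y)
	return p.Quo(p, unit)
}

func main() {
	x, y := dec(12), dec(11)
	left := mul(quo(x, y), y)  // (12/11) * 11: the repeating 09s get truncated
	right := quo(mul(x, y), y) // (12*11) / 11: exact, no truncation needed
	fmt.Println(left.Cmp(right) == 0) // false
	fmt.Println(left, right)          // 11999999999999999999 12000000000000000000
}
```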
I would just note @odeke-em, that commutativity fundamentally can't be guaranteed, as @ValarDragon has pointed out. But the issue is moreover that it actually never works right even for the easy cases... I think the underlying problem is more related to rounding behavior than big.Int, which apd uses as well.
Thanks Aaron! Ideally it should hold, though, for most cases with big-number libraries.
Yeah, rounding behaviour came up too. There was a particular case in which the proper digits could be represented in 19 decimal points but we rounded down in (12/7 * 7), which produced 12.0000000000000002 or so (typing from my phone and not close to my machine for a proper citation).
Let me examine how apd handles this.
In general I would like to better understand all the uses of decimals in the SDK, particularly where division and multiplication are used and rounding happens. I guess I can review that all myself, but ideally we'd have a summarized list that outlines goals and potential problems.
The core areas where this happens are distribution & slashing.
Aaron, I started compiling that list and should have a report by end of the day or early tomorrow, but I was also trying to understand the entire problem and its root causes.
I was thinking about it independently when @aaronc mentioned using arbitrary decimal numbers.

Looking at the experience of other blockchains, I see that using integers is the lowest-hanging fruit. However, this is not very user friendly, because we will end up with big numbers in different numerical denominations (eg some denominated in …). In Ethereum, 18 decimals became kind of a standard; even some token standards require that all tokens have 18 fractional decimal digits.

With all that in mind, the solution I see as both easy and flexible is: …

If we choose …
I've checked apd. Their operational model is similar to the one described above: each value is represented as a pair (coefficient, exponent), i.e. coefficient × 10^exponent.
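For instance, a small sketch of apd's representation (the field names here are from apd v3):

```go
package main

import (
	"fmt"

	"github.com/cockroachdb/apd/v3"
)

func main() {
	// apd stores each value as coefficient × 10^exponent.
	d := apd.New(15, -1) // coefficient 15, exponent -1, i.e. 1.5
	fmt.Println(d, d.Coeff.String(), d.Exponent) // 1.5 15 -1
}
```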
Alright, so firstly sorry for the delays. I finally finished contract logistics at the end of last week, but now we are good for the near-to-long term. As for the investigation, here goes.

TL;DR: let's use decimal128, or just not perform truncations with sdk.Dec, which does so much work for every API call yet has low precision and quickly introduces a high error rate: 18 digits of precision quickly run out as soon as you have, say, 10.XXXXXXXXX; even just …

Background

The usages of sdk.Dec heavily invoke the (Dec).Quo* methods. For every invocation of (Dec).Quo, there is a precision check and rounding that happens to fit results in 18 digits of precision. The usage of Dec.Quo typically happens with a numerator of a smaller value (e.g. a staking amount, a small portion of all shares) and a denominator of a larger value (e.g. total shares), thus we'll usually be dealing with values less than 1. For example, with this transaction a fee of 600uatom (0.0006 ATOM) delegated 1.5 ATOM to Binance, and total ownership was 32.4994 ATOM; calculating the rewards resulted in a 0.XXXXX result.

Some sites

x/staking/types: …
x/slashing/simulation: …
x/staking/keeper: …
x/mint/types: …
x/distribution/keeper: …

Cons

…

Pros of keeping this library

Truncation (from (sdk.Dec).QuoTruncate) is used as a good measure to ensure withdrawals of rewards and stakes don't consume more value than owed.

Suggestions

As we've seen in this issue, the suggestion by @aaronc to use decimal128 (it has 34 decimal places) holds: I did calculations with 30 "digits of precision" and a calculation like …
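A sketch of why truncation is the safe direction for the withdrawals noted under "Pros" (assumed values, with the 10^18 scaling used above): truncating division rounds toward zero, so the sum of paid-out shares can never exceed the pool; the cost is a little dust left behind.

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	unit := new(big.Int).Exp(big.NewInt(10), big.NewInt(18), nil)
	pool := new(big.Int).Mul(big.NewInt(100), unit) // 100.0 to distribute
	share := new(big.Int).Quo(pool, big.NewInt(7))  // each of 7 equal claims, truncated

	paid := new(big.Int).Mul(share, big.NewInt(7))
	dust := new(big.Int).Sub(pool, paid)
	fmt.Println(paid.Cmp(pool) <= 0) // true: never pay out more than the pool
	fmt.Println(dust)                // 2 (i.e. 2×10^-18 left over)
}
```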
Having a fixed precision is effectively what we are discussing above. If we choose a fixed precision for both the fractional part and the natural part then, as suggested in my comment above, we will need clear guidance on how to do math and an int256 type (or falling back to …).
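A back-of-the-envelope check of the int256 suggestion (assumed digit budgets, for illustration): with 18 integer digits and 18 fractional digits, values are stored as integers up to ~10^36, and a multiplication's intermediate product needs ~240 bits before rescaling, which is why 256-bit integers (or big.Int) come up.

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	maxVal := new(big.Int).Exp(big.NewInt(10), big.NewInt(36), nil) // 18+18 digits
	product := new(big.Int).Mul(maxVal, maxVal)                     // worst-case product
	fmt.Println(maxVal.BitLen(), product.BitLen())                  // 120 240
}
```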
Just saw this when investigating what was in v0.40.1. That said, I do think it is very important, and difficult, to get a good fixed-point representation that does everything you want to do with floats but without non-determinism. It just shows a lot of thought went into floating-point ops.
@alessio - IMHO, we should make a decision rather than postponing breaking changes. This issue is very old already.
In order to take this into consideration, a relevant ADR needs to be submitted to the ADR Committee.
We first want to discuss potential solutions; once we get opinions, we can do an ADR.
I remain strongly in favor of using Decimal over floats/rationals for the reasons laid out in #7773 (comment). I think that most of the problems stem from using the same number of decimals for computation as we do for data storage. Instead, we should fix some number of significant figures we want to use in all our 'results'/state writes, and use more precision in intermediate computation steps.
I tend to agree, especially with … It is also less work for the decentralised SDK team. Thus +1 on that.
Using GDA as a basis would give us flexibility to choose the desired decimal precision for each calculation and each stored value. Generally, choosing decimal128 is likely a good default if we had to choose one across the board, and might allow for optimizations in the future (there is an optimized Intel library, and someday I imagine it will end up in hardware). But again, GDA gives us the flexibility to choose decimal128 (34 digits), decimal64 (18 digits), or whatever precision we choose for a given case. It's been around for a while, is well specified, covers edge cases, comes with a test suite, etc.
Exactly. What implementation are you referring to?
apd aims to (and AFAIK does) implement GDA correctly in golang. There are other options in C. If we decided on using decimal128 precision specifically, we could maybe use this optimized Intel library: https://software.intel.com/content/www/us/en/develop/articles/intel-decimal-floating-point-math-library.html
Having done a quick review of results for libraries that implement decimal support, …
Just an update on this. @ValarDragon and I spoke a few weeks ago and agreed that a GDA-based solution, probably one that uses IEEE decimal128 as its default precision, makes sense. In addition to https://github.com/cockroachdb/apd, https://github.com/ericlagergren/decimal looks like a high-quality, maintained implementation. It might be worth tackling this together with #10863 and the API module work (#10731).
Just heard back from the cockroach maintainers: they are maintaining apd, and they just pushed a new release with performance improvements: cockroachdb/apd#100 (comment). @ValarDragon it would be great to get your assessment of which of these libraries we want to adopt.
Did we get this on the roadmap yet?
I'll put an issue on the Framework WG board.
Closing this in favor of #11783, as I think we are sufficiently aligned on a decision at this point.
Summary

sdk.Dec gets some numerical calculations wrong (always): #7640. We should have a correct approach to numerical computation in the SDK.

Problem Definition

sdk.Dec was created as a custom "big decimal" type in #1807. It appears that the motivation was primarily performance. Currently, sdk.Dec is getting certain calculations always wrong. For instance, it never seems to be able to correctly compute that (x / y) * y == x (#7640, #7772). It isn't based off of any clear, well-researched specification as far as I can tell. While I applaud the efforts of the authors at making a custom decimal arithmetic implementation, and find the algorithm relatively concise and elegant, it's hard to know all the details that have been figured out by IEEE, IBM, etc. without a serious research project.

Proposal

- sdk.Rat is probably a good alternative and isn't actually slower in these albeit very limited tests 🤷: tally calculation has precision issue #7640 (comment)
- big.Rat or big.Float or something like https://github.com/cockroachdb/apd
- … X with precision Y in order …
- … A*B/A vs A/B*A …, developers need to know that.