Go: proposal: math/decimal: add decimal128 implementation

Created on 26 Aug 2015 · 36 comments · Source: golang/go

Preamble

In financial computation, base-10 arithmetic is mandatory.
Discussion can be found in proposal: math/big: Decimal #12127 for big.Decimal.

A numeric type in the standard library usable for financial and commercial applications would be a big plus.

Existing solutions

For the moment, there exist 14 packages that implement base-10 numbers.
https://godoc.org/?q=decimal

They all seem experimental, except https://github.com/shopspring/decimal which is used in production.

Its Decimal type is defined as:

type Decimal struct {
    value *big.Int
    exp int32
}

So it is an arbitrary-precision type, but this proposal is about a fixed-size decimal type.

Need to retrieve currency values stored as decimal in SQL databases

Base-10 numbers are primarily used for financial computation, for dealing with currency values.

These values are already stored as base-10 fixed-point datatypes (NUMERIC, DECIMAL, MONEY) in SQL databases (MySQL, MS SQL Server, Oracle, etc), but working with them in Go is not so easy.

Even if a big.Decimal type were eventually added, the API of package math/big is more complex than what is needed for most financial or commercial applications.

A simpler data type, a fixed-size 128-bit floating-point decimal, would be an optimal trade-off.

In the field of decimal data type, the work of Mike Cowlishaw is paramount:
https://en.wikipedia.org/wiki/Mike_Cowlishaw
http://speleotrove.com/decimal/

Note that the Java implementation of java.math.BigDecimal is based on the work of M. Cowlishaw:
http://www.drdobbs.com/jvm/fixed-floating-and-exact-computation-wit/184405721

Financial and commercial applications' requirements

The usual range for monetary values can be estimated as follows.
Assets under management for a very large multinational insurance company are:

$265,507,000,000

This figure is the sum of amounts that can have 8 digits after the decimal point, because this precision is needed when working with foreign currency conversion.

This means that for a large company, a decimal data type must cope with figures like:

100,000,000,000.00000000, that is, 20 significant digits.

A decimal128 fixed-size data type provides 34 significant digits, which can easily store these figures.
We can even store the US national debt of $18,000,000,000,000 in it with full precision.

When doing financial computation, intermediate calculations can require a few more digits, and the 34 significant digits of decimal128 easily provide them for most real cases of financial computation.

A smaller datatype like decimal64 has only 16 significant digits, which is not enough for financial computation.
It is always possible to store values with the proper precision in SQL database tables, to spare storage, but for arithmetic it is safer to always use a 128-bit decimal.

Package for prototyping

So far, there exists no Go implementation of a fixed-size decimal type, so I wrote this package to experiment with it.
It is a thin wrapper around the decNumber package of Mike Cowlishaw, written in C.
Only the decQuad type has been implemented, as it is a 128-bit fixed-size data type.

decnum.Quad is a floating-point base-10 number with 34 significant digits, and its size is 128 bits.
Working with them is almost as easy as working with float64:
- decnum.Quad is a value, not a pointer.
- all arithmetic is done on values, not pointers.
- computation errors are tracked by decnum.Context.

https://github.com/rin01/decnum
https://godoc.org/github.com/rin01/decnum

This package tries to demonstrate that decnum.Quad is easy to work with, and is a proposal for the base of an API.
A complete package would implement all functions described in this document:
http://speleotrove.com/decimal/dnfloat.html
The only difference is that values are passed by value instead of pointers.

Proposal

All 36 comments

Why does this need to be in the standard library?

I think it makes a perfectly fine go-gettable package.

Because decimal is a fundamental data type for everyday work, every time you build an application that deals with money. All SQL databases implement such a type natively for a reason, and it is quite unfortunate that there is no easy data type in Go in which we can store them.

If this type is in a third-party package, there can be a proliferation of choices, as is already the case for arbitrary-precision decimal, most of which are unfortunately experimental.
If it is in the standard library, there will be no choosing, just one reference package with one API, and users and other packages can rely on it and be confident it works well.

There are certainly advantages to having such a package in the standard library, but there are also drawbacks. Regardless, to put a new package in the standard library now we would want to first produce a complete implementation outside the standard library. Once we put something in the standard library we must support it for the lifetime of Go 1.x, so we need to get it right.

The reason for a fixed-precision decimal type seems to be that you want to pass it by value, but different projects might have different requirements on precision. Thus I think that if the package is to be generally useful, the precision must be configurable, but that would conflict with the desire to pass decimals by value.

I still think it's best for this to live outside of the standard library and be available as a go generate template package, so that people can generate decimal packages of different sizes and include them in their projects.

And in that case, it's probably best to brand the package as an arbitrary-but-fixed-precision fixed-point package. If you think outside the field of financial computation, fixed-point types of various sizes are all useful (e.g. in DSP applications).

The main reason to pass by value is simplicity.

In my opinion, a decimal128 type is as fundamental as float64, and we should be able to work with the former as easily as with the latter, in similar way.
And when you declare a float64, it is a value, not a pointer.

People are happy with float64, and there is not much discussion about its fixed 64-bit size or its 15 significant digits of precision.
It is just that float64 is _"good enough"_ for a lot of applications, and really simple to use.
For more stringent requirements, it is now possible to use big.Float instead of float64, at the price of some more complexity.

The purpose of this proposal is essentially to have a fixed-size decimal128 type in the language that is _"good enough"_, like float64 is.
And if this default is not enough, people can use a more specialized arbitrary-precision decimal package, as discussed in [proposal: math/big: Decimal #12127].

That being said, there is no hurry to include decimal128 in Go.
I just think that this issue is a good place to receive suggestions on this topic, and see if there is sufficient support and need for such a data type...

Go could become a viable or even good language for business applications if we had a good standard solution for fixed-size decimal numbers, so we should go forward with it. It's a chicken-and-egg problem, but we can lay the egg without the chicken, and see if it hatches. I'm all for a good Decimal implementation, but as @adg already said, let's prototype it externally, as part of the proposal.

Issue #12127 is already discussing some of this, and I think the conclusion is that a Decimal128 type would be sufficient for all practical commercial computations.

[As an aside: I can also appreciate that one would want to have a built-in type decimal128. But that is something that should be considered only after we have gained an extremely good understanding of what decimal128 actually is. For instance, it would require (in my mind) that all the essential operations on decimal128 could be reduced to the basic arithmetic operations, comparisons, and likely some "helper" package, say decmath (like we have math now supporting float64). Most of the time, I would expect a user to be able to work with decimal128 without the need to resort to decmath. Is that possible? I don't know. It depends on the properties of decimal128; e.g., is it a decimal floating-point or fixed point format. If it's the latter, I suspect there's additional functionality needed to deal with precision etc. There's likely many other issues I am blissfully unaware of. ]

It looks like http://www.dec.usc.es/arith16/papers/paper-107.pdf might be a good starting point for a concrete design.

The package decnum now contains the functions needed for basic work with money.

https://github.com/rin01/decnum
https://godoc.org/github.com/rin01/decnum

The API seems good to me, and easy to use. Feel free to play with it and tell me your opinion.

I've had a look at various implementations of 128-bit decimal floating point.
It is really a huge amount of work to write such code, and I wonder if it is a good idea to duplicate all this effort in Go.
The simplest solution may be to just take the C decNumber library of Mike Cowlishaw, which is very much free of bugs, and make a wrapper around it, like the decnum package, after all.
And it is already compliant with all the standards.

Besides, most financial applications work heavily with databases: reading records, working on them, writing the result back to disk. So the overhead of cgo calls is negligible compared to IO operations.
And most applications that deal with money, like webstores or reservation sites, don't make tons of calculations.
They just want to easily read money values from databases and work on them.

@griesemer ultimately, there should be no problem including a built-in decimal128 type in Go.
It should run with a fixed rounding mode, ROUND_HALF_EVEN.
Also, a decmath package will provide functions such as
FromString(string) (decimal128, error)
which must return an error, because there is no Context to track errors.
For arithmetic operations like +, -, etc, the result will be NaN in case of error.

gcc already includes Mike Cowlishaw's library, and proposes a built-in floating-point decimal type:
https://gcc.gnu.org/onlinedocs/gcc/Decimal-Float.html

That API (https://godoc.org/github.com/rin01/decnum) is pretty gross. cgo and C style has polluted it everywhere. Is there a pure Go version or at least one with Go style available?

That's right, I just changed it now. It should be a little bit better.

I don't like the API where every function has to carry a Context around.
I think the API of math/big.Float is better, where the rounding mode etc. are bundled with the Float itself.

Well, the purpose of this 128-bit decimal type is to have a type that is as easy to work with as possible.
So, it is inherently subject to some trade-offs.

In particular, precision is fixed at 34 significant digits and cannot be changed.

Context is primarily used to track errors that occur during operations.
There are often long series of operations, and it is good to avoid cluttering the code with error detection lines.
It is better to perform all operations and check for error at the end, with the context.Error() method.
That's why this Context object must be passed to all operations.

The big.Float methods raise a panic(ErrNaN) in case of error.
So would the alternative for a 128-bit decimal package API be to panic the same way?

Besides, a 128-bit decimal is passed by value, like float64, and methods return the result as a value.

If the API of this package is changed to embed rounding information in the type, and to pass it by pointer, it will become the big.Decimal package discussed in https://github.com/golang/go/issues/12127.
But this proposal is to have something a little bit "lighter", simpler, good-enough for most users...

In Python, Context is passed invisibly but implicitly.
https://www.python.org/dev/peps/pep-0327/#use-of-context

In Java BigDecimal, a Context or RoundingMode is passed as argument to methods.
http://docs.oracle.com/javase/1.5.0/docs/api/java/math/BigDecimal.html

In C decNumber, Context is passed explicitly to each function.
Context is the argument called set:
http://speleotrove.com/decimal/dnfloat.html

@minux I have thought about that context problem again.
You are right, it is not good to have this context everywhere.
It would be good if the expression:

r = (a+b)/(c-d)

could be expressed as:

r = a.Add(b).Div(c.Add(d))

if err = r.Error(); err != nil {
    log.Fatal(err)
}
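A minimal sketch of what such a chainable, status-carrying API could look like. Everything here (the num type, the status bit, float64 as the backing representation) is a stand-in invented for illustration, not the decnum package's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// statusDivByZero is one bit of a sticky status word.
const statusDivByZero uint8 = 1 << 0

// num is a toy stand-in for the proposed 128-bit decimal: a value plus
// a status word that every operation merges with bitwise OR, so errors
// can be checked once, at the end of a chain.
type num struct {
	v      float64
	status uint8
}

func (a num) Add(b num) num { return num{a.v + b.v, a.status | b.status} }
func (a num) Sub(b num) num { return num{a.v - b.v, a.status | b.status} }

func (a num) Div(b num) num {
	s := a.status | b.status
	if b.v == 0 {
		return num{0, s | statusDivByZero}
	}
	return num{a.v / b.v, s}
}

// Error reports any error recorded in the status word.
func (a num) Error() error {
	if a.status&statusDivByZero != 0 {
		return errors.New("division by zero")
	}
	return nil
}

func main() {
	a, b, c, d := num{v: 1}, num{v: 2}, num{v: 5}, num{v: 5}
	r := a.Add(b).Div(c.Sub(d)) // (1+2)/(5-5): division by zero
	if err := r.Error(); err != nil {
		fmt.Println("error:", err)
	}
}
```

Because the status bits travel with the value, a long chain of operations needs only one check at the end, exactly as in the r = (a+b)/(c-d) example above.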

I will rework the whole API, in the following weeks.
Thank you for the comment.

(Again, ignore me closing the issue accidentally. That button is just misplaced.)

A comment about the context: I agree with @minux that it's nicer to have the context information in the values. But that comes at some cost: more memory per value, and a question of what to do when two values with different context information interoperate (e.g., a + b). The reason I decided to go that route in big.Float is that the extra data per Float object is relatively small compared to the (presumably) large Float object. The other reason is that it was possible to "merge" the context information (which is just the precision in this case) by choosing the larger one.

Just something to keep in mind.

@griesemer presently, the context only contains two fields:

  • the rounding mode
  • the status

The status will be _incorporated as a field in the number struct_, whose size will become 128 bits + 8 bits = 136 bits.

type Quad struct {
    val    C.decQuad    // val is 128 bits floating-point value
    status C.uint8_t    // all errors that occurred during generation of val
}

When an operation fails, it will set the proper error status flags in this field, where it can be checked.

The rounding mode for most operations, like Add, Multiply, etc, can always be ROUND_HALF_EVEN, as the rounding mode has no effect except in pathological cases.
For operations like RoundWithMode, an explicit rounding argument is passed.

The "merging" of context, that is, the status field, happens like this:

r = a.Add(b)

r.status will be set to a.status | b.status | operation status

Philosophically, the status contains information like Invalid operation, Division impossible, etc, which are meta-data to the value itself.
A NaN or another value has no sense if we don't know how it has been generated, and if errors occurred during its generation.
That's why the status field should be an inherent part of the number.

When I have made another prototype package, things will become clearer.

A somewhat loose question/thought: are there any known/important uses for this other than money/financial? If not, would it make sense to keep a currency code (possibly a custom 3-letter string, with the suggestion to use ISO 4217, but still allowing e.g. NUCs) encoded in each such number value, and disallow operations on Decimals of differing currencies? I see various pros and cons; not sure if this is a good idea, but maybe worth considering.

Other uses? Possibly, but I wouldn't want to derail the interface with too many knobs. The two that come to mind immediately are (1) more exotic financial calculations, e.g., futures which might yield units like dollars-per-second or dollars-per-second-squared (e.g., derivatives etc) and (2) measurement units in general. I think (2) is not a good idea partly because it's easy to construct dimensional castles in the sky that would bog down the currency use case, and because 34 decimal digits is not enough dynamic range -- measure the universe in Plancks, or weigh our galaxy in hydrogen atoms.

It might make sense to contemplate dimensioned numbers built by combining decimal numbers with units, and let numbers just be numbers.

I would like to see a decimal type included in the std lib.

In my line-of-business application that is written in Go, currently most of my money (decimal) calculations are done in the database so I can mostly avoid them, and reporting doesn't tend to manipulate amounts too much. Right now I'm using *big.Rat, but it would be painful if it were more complicated in Go. If we get a go-ahead to implement an accounts receivable module, I may need to revisit my types. If I had a decimal type in the std lib, that would make my choice very easy and clear.

I'm hesitant to use a third party package simply due to trust (this is handling money) and verifying such a package is not my primary skill set.

I put the "math/decimal" prefix here as a placeholder, mainly so that in a sorted list of proposals, this one and #12127 are near each other.

Thank you for filing this proposal. I found some information online that made the case for decimal floating-point a bit clearer to me, and I want to summarize what helped me.

Mike Cowlishaw appears to be the driving force behind the adoption of decimal floating-point, and he has an extensive web site dedicated to the topic at speleotrove.com/decimal.

  • The first important fact is to remember that rounding twice necessarily introduces errors. If you are working in binary floating-point then each operation rounds to the nearest binary float and then printing rounds to decimal, making it nearly impossible _not_ to round twice. If you're doing complex numerical calculations, there are already multiple rounding steps, so one more is not a big deal. But many basic financial calculations are not complex, so in decimal it may be possible to do them with no rounding or with just one rounding step, and in those common cases the difference between binary and decimal manifests as apparently incorrect computations. Cowlishaw's usual example is that 5% tax on a $0.70 purchase computes in binary float64s as 0.73499999999999998667732370449812151491641998291015625 instead of 0.735, so rounding to two decimal places with half-up or half-even incorrectly rounds down (playground). In many applications it's important that simple calculations like this match what humans do by hand, and the imprecision due to binary rounding introduces mismatches.
  • The second important fact is that (surprise!) decimal floating-point uses a non-normalized representation. 2.5 and 2.50 have different representations, and 1.13+1.37 = 2.50, not 2.5. In general if you're using a decimal floating-point type with storage for N digits and a computation produces a result with fewer than N digits, you know that computation was exact: it involved no implicit round-off whatsoever. For example, assuming for simplicity a 5-digit decimal type, 1.0/3 = 0.33333 but 1.0/5 = 0.2 and 1.00/5 = 0.20. In the last two cases, the results 0.2 and 0.20 (both not 0.20000) signal that the math that produced them was exact. Binary floating-point, being normalized, does not carry this information about the number of digits needed for the result, and not many people would care about the number of binary digits anyway.

As I understand it, these are the two compelling reasons to have separate support for decimal floating-point. For more I suggest the entire speleotrove.com/decimal site, but especially the FAQ and the paper Decimal Floating-Point: Algorism for Computers.

Now suppose we want to make decimal floating-point available to Go programmers. What do we do next? We need more experience with the implementation, API choices, and how many people can use it. I would not wrap Cowlishaw's code (both for licensing and for Go maintenance reasons) but I would certainly use his test suite, much as package regexp does not wrap RE2 but reuses its test suite.

It seems to me the thing to do is to get the developers interested in an implementation working together on a single implementation. It's fine with me if they use golang.org/x/exp/decimal for now, and when we think it is ready for general use we can move it out of exp.

Who is interested in working on this?

Re API:

After reading the various speleotrove documents, I think it probably makes sense to keep the rounding mode an explicit argument to the computations instead of stuffing it into the data type as was done with package big. In big.Float the footprint of a single number is large, so the extra word to hold rounding mode is not a significant overhead. If there is a Decimal64 or Decimal128 type, an extra word for rounding mode is a 25-100% overhead depending on the storage of the floating-point bits themselves. Also in big.Float the rounding mode is much less important: round to even is fine for the vast majority of users. In decimal it seems to be much more common to need to specify different rounding modes (for example half up vs half to even).

I think I would suggest starting with:

package decimal

type Mode struct { ... unexported ... }

type D128 struct { ... unexported ... } // size==16 bytes
func (d D128) Add(m Mode, x D128) D128
func (d D128) Sub(m Mode, x D128) D128
...

with the intention of adding D64 or D256 if needed later. The package could also be extended with type Big (but that's issue #12127).

The methods allow writing expressions that mimic infix notation, like x.Add(m, y.Mul(m, z)) instead of decimal.Add(m, x, decimal.Mul(m, y, z)) or m.Add(x, m.Mul(y, z)) (that last example assumes a different mode type for each size of number: Mode128, Mode64, and so on). Since Go is not Lisp, I think an infix API is clearer to Go programmers than a prefix API. And shorter.
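A toy mock-up of this API shape, to try out the infix call style. A D128 backed by big.Rat and a Mode that only carries a display scale are assumptions made for this sketch; a real implementation would be a 16-byte value with real rounding behavior:

```go
package main

import (
	"fmt"
	"math/big"
)

// Mode is a stand-in for the proposed rounding mode; here it only
// carries the number of digits kept after the decimal point.
type Mode struct {
	scale int
}

// D128 is a stand-in for the proposed 16-byte decimal value,
// backed by an exact rational for simplicity.
type D128 struct{ r *big.Rat }

// FromString parses a decimal string (error handling elided).
func FromString(s string) D128 {
	r, _ := new(big.Rat).SetString(s)
	return D128{r}
}

// Add and Mul accept a Mode as in the sketch above; this toy computes
// exactly and ignores it, since big.Rat never needs to round.
func (d D128) Add(m Mode, x D128) D128 { return D128{new(big.Rat).Add(d.r, x.r)} }
func (d D128) Mul(m Mode, x D128) D128 { return D128{new(big.Rat).Mul(d.r, x.r)} }

// String renders the value rounded to the mode's scale.
func (d D128) String(m Mode) string { return d.r.FloatString(m.scale) }

func main() {
	m := Mode{scale: 2}
	x, y, z := FromString("1.10"), FromString("2.20"), FromString("3")
	// Infix style: x + y*z
	fmt.Println(x.Add(m, y.Mul(m, z)).String(m)) // 7.70
}
```

Even in this toy form, the infix chain x.Add(m, y.Mul(m, z)) reads noticeably closer to x + y*z than the prefix alternatives.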

I think it's probably a mistake to store a status word in the D128 itself; that would imply the same significant overhead as for storing the rounding mode there. I also think it's probably a mistake to make the mode (aka context) a mutable argument to the operations. That implies the need for a different Mode value for each goroutine doing computation. Many Go programmers will use a single Mode, either because they don't understand or they don't care about checking the bits, and the race detector will report problems.

If it is very important to detect, say, underflow during a computation, the mode can have a bit that says "panic on underflow".

I was curious why you start with Add/Sub etc. taking an explicit mode, rather than having that be a variant (e.g., AddMode or AddM). I checked the two speleotrove/Cowlishaw references, and it's clear that alternate modes are more likely with decimal arithmetic, but I also saw reference to a "default" of nearest-even-if-tied.

I think an explicit mode for decimal arithmetic operations won't be often used. More likely an operation will be performed with the highest precision possible and then the result will be rounded/rescaled to contain a certain number of digits after the decimal point.

The package decnum has been heavily modified.

https://github.com/rin01/decnum
https://godoc.org/github.com/rin01/decnum

Now, the Quad type (decimal, 128 bits) also contains a 16-bit status field.
As a value is of no use if the error status is not checked beforehand, it seems natural to store the status along with the value.
After all, errors are values.

In particular, the SQL operator IN is interesting.
The SQL expression below is true because of the 3rd argument, even though the 2nd argument is an error. That's why I think the status field is really part of the number.

PRINT 10 IN (4, 7/0, 10, 8)  // prints true

Also, there is no more need to have a Context object, because RoundHalfEven is the default rounding mode, and status is embedded in Quad type.

For Add, Sub, Mul, etc, I think there is no need to pass an explicit rounding mode, as these operations are carried out with the maximum precision. It seems to me the rounding mode won't change the result, except in very pathological cases.

Can you play a little bit with this package, to experiment with the API ?

Regarding the API, I would prefer to see the string parsing function decnum.FromString() match the rest of the Go API as decnum.Parse().

Also, what is the need for both QuadToString() and the more idiomatic String()? In fact, I would rather have a more idiomatic Format() function for control over the precision being output.

There are other bloat-adding aspects I don't quite understand, like the need for Copy(). The implementation and documentation equate to "a = b", so this function feels very unnecessary. Additionally, a native decimal128 type would be expected to use the assignment operator.

I would argue that including ToInt32/FromInt32 is also unnecessary. Generally, the standard library seems to prefer int64, and since converting to int32 is as simple as a type conversion I would request these functions be removed. I would also lump AppendQuad in with this list, since append(slice, quad.Bytes()...) is equally trivial.

While perhaps just being nit-picky, I would remove DecNumMacros and DecNumVersion. I don't think they have any practical use in a Go API and they clutter up the docs by being there, providing a distraction.

@rthornton128
Thank you very much for your comment.

The Copy function is not needed at all and should not be used.
I just thought that the user would find the information about the '=' operator more easily there.
The comment says that '=' should be used instead.
If this dummy Copy function were removed, this comment would have to be put somewhere else and would be more difficult to find.
But I agree this is debatable, and I will think about removing this dummy Copy.

For all the rest, I absolutely agree with you on everything.

The problem is that this package is just a thin wrapper around the C decNumber library.
But I understand now that this fact is too visible to the user.

For instance, I agree that ToInt32/FromInt32 are unnecessary, but the underlying C decNumber package has functions that convert directly to/from int32, not int64.
I perform the conversion to/from int64 by making an intermediate conversion to string, which is less efficient.
So, it is better to use ToInt32/FromInt32 if possible.
I have added a comment to emphasize this.

DecNumMacros, DecNumVersion, and QuadToString are needed for those who want to experiment with the underlying C decNumber library.
In particular, QuadToString directly calls the C function QuadToString of the original decNumber library, and I wanted to provide this original function.
But as I don't like it, I have written AppendQuad and String (which just calls AppendQuad and converts the result to a string) to have the number formatted the way I want, with less use of exponential notation.

I wanted to keep access to the API of the original C decNumber library as direct as possible, so that experimenting with it is easier, and so that reading the original documentation is easier too.
That's why the functions in this Go package follow the same naming as the original C decNumber library.

But there is really a need for a pure Go implementation, whose API should incorporate all your recommendations.

Our project cgrates is also very interested in this. Any concrete plans on this?

@rif, this bug is about figuring it out. The short answer is "no", not yet. But if a good plan & API is found, then it's more likely.

@rin01 What is the status here? As far as I can tell, https://github.com/rin01/decnum is not a pure Go implementation.

@griesemer Unfortunately, I am too busy with another project and didn't have the time to make a pure Go implementation of this package for the moment. It is on my todo list but I don't know when I can work on it.

We've been using shopspring's decimal package for a while, and it does everything we need in terms of accurate financial calculations, so I can see why adding a built-in decimal type is not a priority. On the other hand (and this could just be personal preference), because it's a struct and not a value type like float64, trying to do any complex calculations with it just feels heavy and ugly.

I personally am glad Go does not allow operator overloading as it opens the language up to a whole batch of other issues, but considering how fundamental accurate decimal arithmetic is to some applications having a built-in decimal type that supports the standard operators would make for much cleaner and easier to read code.

For now we would like to see development continue outside the main repository, with the understanding that if more code starts to use decimal floats we can reconsider adding them somewhere more standard.

There's no need for an 'operator' notation - regular methods with operator names would do, if this is something we wanted to do in the first place. See e.g. https://github.com/griesemer/dotGo2016 for a prototype implementation of operator methods.

  • gri

On Wed, Mar 29, 2017 at 9:51 AM, maj-o notifications@github.com wrote:

Wouldn't it be better if operator overloading could be implemented this way:
operator (a,b MyType) + (result MyType) {
result = a.Add(b) // just to show what it would do
}

This can be used for a native Go decimal type. This would be better, I
think. Because:

res := a + b * (c / d)

is much more readable than
res := a.Add(b.Mult(c.Div(d)))

decimal is hardly needed - but this has to be done like for default types

  • even strings can be added - why not decimals (or other types)?

Please, think about it.

Regards.


Hello Robert,

thank you very much for your fast reply.

I don't think so. There are two differences. (Excuse my bad English, please.)

First: 1 + 2 * 3 != 1+(2)*(3) (if you think of methods, this would be 7 != 9)

Second: this is better than Add and Mult - YES - and YES, this has been my best thought, but it is still not really readable.

We have a lot of currency calculations in our business code. Float does not work, because of unmanageable calculation errors. For complicated calculations it is important to be able to read them quickly, in the common sense of operators.

I didn't take a deeper look at your approach right now - maybe I am wrong, so please excuse me.
As soon as I have a bit of time, I'll take a deep look at your proposal.
If it does not suit our requirements, I will search for a good solution inside Go's source code. Then it would be great if I were allowed to share this with you.

Regards,
Andreas Matuschek (maj-o)


@maj-o Again, see https://github.com/griesemer/dotGo2016 . 1) The precedence of operators remains unchanged. 2) The whole point is that one can name a method "+" rather than "Add".

Time goes by. In March I had no idea and needed a solution for decimal calculation with humanly readable operators.
Great work, gentlemen! What I read above is a lot of good work!
I hope that operator methods will win out sometime, because they make so many things better.

@maj-o this issue is closed and the proposal was declined. I am going to lock this issue to make it clearer that further conversation should occur elsewhere.

We don't use the issue tracker to answer questions. Please see https://golang.org/wiki/Questions for good places to ask. Thanks.
