How to Handle Monetary Values in JavaScript (frontstuff.io)
133 points by fagnerbrack on Oct 30, 2018 | hide | past | favorite | 108 comments


Maybe it’s just that I’ve done a few gigs at financial institutions, but it’s pretty shocking to see the number of people in this thread arguing that it’s a-ok to use floating point for money. I don’t want to get dragged down into specifics, but if you’re doing this, please, please don’t. Money is one of those things like dates and times - everyone who hasn’t done much of it can’t figure out what all the fuss is about. Just... please, at a minimum, if you don’t want to be bothered, find a reputable third-party library with Money and Currency or some such, and use that. It really won’t add much work for you.


I'm curious to hear about your specifics, since I was at a shop where front office used all floats (doubles).

Not sure what the back office did, but I doubt it was all done in arbitrary precision arithmetic.

Note that if an accountant says things need to be accurate down to the penny, that's probably an exaggeration [1]. Financial statements are typically rounded to the thousands, e.g. Intel's are rounded to millions [2].

The IRS also lets you round off dollars [3].

I agree though that if you're building a large complex system with many coders, and either 1) nobody has a full view of the calculation and can do the numerical analysis or 2) there's a high chance that at some point in the future you need penny-accurate bookkeeping, then it's safer to use a very wide int or float type.

Of course, there's also the case where the accountant or auditor doesn't actually need penny precision, but it's easier to just tell the coder 'just in case'.

[1] https://www.accountingtools.com/articles/2017/5/14/the-mater...

[2] https://s21.q4cdn.com/600692695/files/doc_financials/2017/an...

[3] https://taxmap.irs.gov/taxmap/pub17/p17-006.htm#TXMP5d034bfb


If you bill customers for anything, and there are multiple items being totaled on an invoice, you do not want that total to be anything except an accurate sum of the individual parts.

At a minimum, you look like a rinky dink operation that doesn't know how to add dollars and cents. On the other end of the spectrum, some people will likely accuse you of theft, and not nicely.

Do not assume that the size of your billing error will determine how upset the people who contact you are. You will likely be the easy and obvious scapegoat for all their recent annoyances. For example, imagine all the people upset at their AT&T, Comcast or Verizon bill, who now have an easy and obvious thing to point at as an example of over-billing. Have fun with those transferred emotions...


I think that's the disconnect here — 'high' finance is not an exact science, and clients don't actually have an easy way of checking the numbers directly themselves — not only the models, but the inputs are also proprietary. Even auditors and regulators are dependent on bank models.

Note that the article's advice is to use their own library, but the library just uses JS floats as ints and a separate precision arg:

https://github.com/sarahdayan/dinero.js/blob/master/src/dine...

This doesn't give you any more precision than using floats directly, just a wider 'mantissa' range.


> This doesn't give you any more precision than using floats directly, just a wider 'mantissa' range.

I think this analysis misses a deep and important point. The library you linked uses decimal floating point (in software) instead of the binary floating point used by IEEE floats.

The point of this is not to increase overall precision. A double is already precise to 2e-14% of its value, which means it's capable of representing something like $1 Trillion before it will get off by a whole cent.

The point of using decimal floating point is that it can exactly represent the values we care about when dealing with currency. This allows us to have no error, instead of trying to keep our error small. It means we never have to round (unless performing division).

You could have 1000x the precision in binary floating point, and decimal floating point would still be better for money.
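A minimal sketch of the distinction, in plain JavaScript, with integer cents standing in for a decimal significand:

```javascript
// Binary floating point cannot represent 0.1 or 0.2 exactly, so the
// error shows up immediately:
const binarySum = 0.1 + 0.2;
console.log(binarySum); // 0.30000000000000004

// Integer cents (effectively a decimal significand with a fixed
// exponent of -2) represent the same values exactly, so the sum is
// exact too:
const centsSum = 10 + 20; // $0.10 + $0.20 in cents
console.log(centsSum);    // 30, i.e. exactly $0.30
```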


> This allows us to have no error

That's not true, because the software 'mantissa' in that library still only has 52 bits of precision (2e-14%).

E.g. if you try to add $100 trillion + $0.01 using dinero.js, the cent falls below the available precision and the $0.01 disappears.
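This is easy to verify with plain doubles (the same representation a dinero.js amount ultimately lives in):

```javascript
// $100 trillion in integer cents is 1e16, which is past 2^53
// (Number.MAX_SAFE_INTEGER is about 9.007e15), so adding one cent is lost:
console.log(1e16 + 1 === 1e16); // true: the cent vanishes

// At the boundary, adjacent integers collide:
const max = Number.MAX_SAFE_INTEGER;
console.log(max + 1 === max + 2); // true
```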


I thought it was fairly obvious I wasn't talking about banking. I was providing an example of a much more common scenario, which many businesses have to deal with, that also provides a good reason for taking care with currency.

> This doesn't give you any more precision than using floats directly, just a wider 'mantissa' range.

Floating point errors sometimes result in slightly smaller values than expected, and this can cause problems when values are passed around if even one spot that needs rounding is missed. For example, some databases might truncate 1.9999999999 to 1.99 instead of rounding to 2.00.

As a simple illustration, consider the following Javascript (because it's easily available):

    price1 = 0.10;
    price2 = 0.20;
    package1 = price1 + price2; // Hmm, 0.30000000000000004 on my system
    ten_packages = 10 * package1;
    money_received = 5.00;
    change_due = money_received - ten_packages;
    // change_due is 1.9999999999999996, should be 2.00
So, either you choose floating point and make sure to always round prior to display or to passing values to any other system (and if you miss a spot, it will probably work fine until you hit specific values), or you convert to cents, where you only have to worry about overflow, and any place you forget to convert is obvious because the value isn't formatted with cents and is two orders of magnitude too large. And if you're worried that an int is too small and might overflow, just use a long, and that problem goes away for all conceivable real values in your lifetime.

If you're coding defensively (and we're talking about money here, so why wouldn't you be?), one of them is obviously the better choice.
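For comparison, a sketch of the same change calculation in integer cents, where every intermediate value is exact:

```javascript
const price1 = 10;                 // $0.10 in cents
const price2 = 20;                 // $0.20 in cents
const package1 = price1 + price2;  // exactly 30
const tenPackages = 10 * package1; // exactly 300
const moneyReceived = 500;         // $5.00
const changeDue = moneyReceived - tenPackages;
console.log(changeDue); // 200, i.e. exactly $2.00; format only at the edge
```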


> I thought it was fairly obvious I wasn't talking about banking

Right, but the article is talking about all use cases of currency... not all of which have customers that care about penny precision. I think we're in agreement that the business use case should be driving the technical requirements, and sometimes software engineers overestimate the precision actually necessary.

Overflow modes aren't always consistent between all your systems either, so I think in practice, fixnums also have issues and the advantage isn't totally clearcut.

> always round prior to display or passing to any other system

This rather depends on how much control you have over the other systems. Since you're basically throwing away precision every time you round, it's preferable to use floats to pass data back and forth and only round before you display.

I mean in practice, you should be using formatted outputs (whether printf format strings or something fancier) to show monetary amounts, and those should be doing rounding automatically.


> not all of which have customers that care about penny precision.

No, I think all customers care about penny precision when you are talking about money they have or owe. Either the amounts in question are small enough that a penny isn't negligible, even if it's not important, or the amounts are large enough that an error like that really shouldn't exist, because they should be taking it very seriously.

People might be willing to let it go because it's a small amount, but they'll remember it.

> Overflow modes aren't always consistent between all your systems either, so I think in practice, fixnums also have issues and the advantage isn't totally clearcut.

They're consistent in that it's generally trivial to allocate enough bits to it that an overflow can't happen in any sane inputs. As opposed to floating point, where errors happen throughout the entire range of sane inputs.

> Since, you're basically throwing away precision every time you round, it's preferable to use floats to pass data back and forth and only round before you display.

No, you're generally not throwing out precision when you round in this case. You're throwing out error. If you apply a percentage to money, then you might be throwing out precision, and that's a case you should think about. But for currencies, any time you round you are purely fixing floating point errors that come naturally from mixing number bases in this way.

> It's preferable to use floats to pass data back and forth and only round before you display.

You are explicitly saying it's preferable to use a lossy encoding for passing data back and forth because, as long as you remember to round, there's a fairly easy way to recover the loss. The question remains: why is that preferable to using a medium that isn't lossy for any sane input you could want to represent?


> No, I think all customers care about penny precision when you are talking about money they have or owe.

That wasn't my personal experience working on a fixed income desk. We regularly interacted with CFOs and corporate treasurers and the 'bills' were in the millions. And why would they? As they say, 'penny wise, pound foolish'.

> No, you're generally not throwing out precision when you round in this case. You're throwing out error.

That's not always true from a numerical analysis standpoint. By the way, I highly recommend this old but still useful doc which goes through the math carefully:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.h...

Rounding can introduce up to 0.5 ulp (unit in the last place) of error, where ulp here is the precision that you round to. This is pretty easy to show:

  let intSum = 0;
  let floatSum = 0;
  let roundedFloatSum = 0;
  for (let i = 0; i < 100; i++) {
    // Generate numbers up to 100, with up to 3 decimals
    const randInt = Math.floor(Math.random() * 100 * 1000);
    const randFloat = randInt / 1000;

    intSum += randInt; // Sum in exact precision, scaled by 1000
    floatSum += randFloat; // Sum using full FP precision

    // Sum while rounding to 2 decimals along the way
    roundedFloatSum += randFloat;
    roundedFloatSum = Math.round(roundedFloatSum * 100) / 100;
  }
  intSum /= 1000;

  console.log(intSum, floatSum, roundedFloatSum);
  console.log(Math.abs(intSum - floatSum), Math.abs(roundedFloatSum - intSum))
Here we're generating some numbers with 3 decimal places and summing them up, once in exact precision, once with floats, and once with intermediate rounding to the 2nd decimal place.

On the last run, this outputs:

  5567.347 5567.347000000001 5567.39
  9.094947017729282e-13 0.0430000000005748
'Rounding as you go' for your intermediate results here introduced an unnecessary 0.043 of absolute error.

Every guide to numerical analysis I've ever seen recommends keeping all calculations in the same format (whether fixnums or floats) all the way through, then only doing rounding at the very end, for this very reason.

In fact, we can prove that on reasonable accounting inputs and simple calculations, doing everything using doubles and then rounding at the very end gives you the _exact_ result.

Let's assume the numbers you're summing, multiplying, etc. remain bounded under <$1 mm, and you're manipulating at most 1 million of those numbers.

1. Doubles have 52-bits, or about 15 decimal digits of precision

2. Basic arithmetic operations (addition, subtraction, multiplication, division) introduce at most 0.5 ulp of error per operation. Using our assumptions, each decimal number is accurate to at least the billionth (9th digit) place, whereas we only need accuracy to the hundredths.

3. The cumulative error of a chain of a million operations, each with an error in the 9th digit place, can at most only affect the 3rd decimal digit. The 2nd decimal will always be correct.
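A quick empirical check of the claim above (a sketch; 100,000 random cent amounts under $1,000, which is well inside the stated bounds):

```javascript
let exactCents = 0;   // ground truth, summed as exact integers
let floatDollars = 0; // the same amounts summed as double dollars
for (let i = 0; i < 100_000; i++) {
  const cents = Math.floor(Math.random() * 100_000); // < $1,000.00
  exactCents += cents;
  floatDollars += cents / 100;
}
// The accumulated binary error stays far below half a cent, so a
// single final rounding recovers the exact total:
console.log(Math.round(floatDollars * 100) === exactCents); // true
```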

> No, you're generally not throwing out precision when you round in this case. You're throwing out error.

Floating point calculations round to the available precision after every operation. If introducing extraneous rounding to a much lower precision magically 'fixed' errors, then how could long floating point calculations themselves be inaccurate? In fact, there are algorithms that lower overall error by carefully shepherding the low-order digits, e.g. https://en.wikipedia.org/wiki/Kahan_summation_algorithm
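For reference, a sketch of that algorithm: the compensation term carries the lost low-order digits forward instead of discarding them.

```javascript
function kahanSum(values) {
  let sum = 0;
  let c = 0; // compensation: the low-order bits lost so far
  for (const v of values) {
    const y = v - c;   // fold the previously lost bits back in
    const t = sum + y; // big + small: low bits of y are lost here...
    c = (t - sum) - y; // ...and recovered exactly
    sum = t;
  }
  return sum;
}

// Summing 0.01 a hundred thousand times:
const values = Array.from({ length: 100_000 }, () => 0.01);
let naive = 0;
for (const v of values) naive += v;

console.log(Math.abs(naive - 1000));            // noticeable error
console.log(Math.abs(kahanSum(values) - 1000)); // orders of magnitude smaller
```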

Rounding is often appropriate if the rounded value is the actual source of truth, e.g. if, after some long sequence of calculations, you've told the customer that they have $10.15 in the bank, then you should try to store that value rather than the raw float result. Even that's pretty subtle though -- e.g. if their account balance is a result of interest payments, one can show that you will introduce more error in the total interest paid over time if you discard lower order digits rather than reusing them for the next interest calculation.

One last thing: When you hand your exact precision results to an accountant, auditor, or customer, Excel's a pretty common tool that they use for basically everything right? It must be sporting some fancy arbitrary precision or decimal machinery under the hood, right?

Nope, just floats all the way down! https://en.wikipedia.org/wiki/Numeric_precision_in_Microsoft...

I think you should be strongly questioning your assumption that floats aren't 'good enough' if the very first thing every customer you interact with does is cast your results into floats to do their own calcs.


> > No, you're generally not throwing out precision when you round in this case. You're throwing out error.

> That's not always true from a numerical analysis standpoint.

This isn't about numerical analysis, which is what you seem to not be getting. It's about the medium having specific attributes, and floats being incapable of perfectly representing the value. With respect to the currency being tracked, any difference smaller than one hundredth of a full unit (depending on currency) that results from simple addition or subtraction of accurate values is an error, because it's not possible in reality.

> 3. The cumulative error of a chain of a million operations, each with an error in the 9th digit place, can at most only affect the 3rd decimal digit. The 2nd decimal will always be correct

Cumulative error is irrelevant. It only takes a single error that causes a value to be less than the correct value by a very small amount and then if there's any place where it exits the system without rounding, that error may be increased to a full minimum difference of the medium ($0.01 in this case).

> If introducing extraneous rounding to a much lower precision magically 'fixed' errors, then how could long floating point calculations themselves be inaccurate?

By nature of the actual thing being represented by the floating point value. There is no point talking about floating point without talking about what it's representing. In this case, it's currency, which has very specific characteristics.

You can argue that a sphere is the best container shape because it maximizes volume to surface area all you like, that doesn't mean it's the best container shape when what you are storing is shoe boxes.

> Rounding is often appropriate if the rounded value is the actual source of truth

Yes, nobody is arguing that you shouldn't round floats in this case. I'm arguing you shouldn't use floats so you don't have to round at all.

> Excel's a pretty common tool that they use for basically everything right? It must be sporting some fancy arbitrary precision or decimal machinery under the hood, right? ... Nope, just floats all the way down!

Excel is a system to represent values of all types. Their requirements necessitate an amount of flexibility that makes Floats a good choice. Even then, they will be handling all the rounding and representation automatically for you. When you use excel, you aren't using floats, you're using an excel numeric type which is implemented underneath using floats and some very specific behavior. The fact that it automatically deals with floating point errors and rounding is what makes it not a float.

In the case we are discussing, the same requirements are not present. We don't have to worry about representing any conceivable value, just the ones allowed by the type. A float provides more flexibility than we need, and at the cost of error that needs to be cleaned up at all the edges of the system.

Again, I ask you, why should we choose an underlying type that doesn't fit the needs as well as another option? In all aspects, integer containers either have a less problematic error case or do not suffer the same problem.


> This isn't about numerical analysis, which is what you seem to not be getting.

Of course it is. The point of the computation is to get the correct result. Numerical analysis hints at whether your computation will give you the right answer or not.

You're asserting that it's trivial to make sure that your fixnums won't overflow, but in most normal cases where you'll fit into a 64-bit int range, you'll also fit into the 52-bit precision range in a double. If you can prove that ints are OK, then you can just as easily prove that doubles are safe to use.

And unlike floating point rounding errors, where the answers often have some hint about the correct result, undetected overflows silently give you a totally garbage result.
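JavaScript's 32-bit integer operations make this easy to demonstrate (a sketch; `| 0` forces 32-bit wraparound):

```javascript
// A double addition near its limit degrades gracefully, losing only
// low-order digits, but a fixed-width int silently wraps into garbage:
const wrapped = (2_000_000_000 + 2_000_000_000) | 0;
console.log(wrapped); // -294967296: no hint of the true sum, 4,000,000,000
```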

Your system does not magically become 'safe by design' simply because you use ints everywhere, you still need to put in the extra work to make sure your numeric range is always sufficient—at which point you're doing numerical analysis, whether you choose to call it that or not.

Honestly, I'm surprised that you will denounce the usage of floating point numbers, repeatedly make untrue claims about FP calculations, then wave off the entire subbranch of CS that studies how numbers are represented on computers and how those calculations can go wrong as being entirely irrelevant.

> It only takes a single error that causes a value to be less than the correct value by a very small amount and then if there's any place where it exits the system without rounding, that error may be increased to a full minimum difference of the medium ($0.01 in this case).

What? The only case where a number rounds wrongly is when the final value is more than half a cent away from the correct value.

If you can't be bothered to understand how FP calculation works, at least please stop making demonstrably false claims and misleading others.

> By nature of the actual thing being represented by a the floating point value.

As soon as you actually do any 'complicated' math (take a square root or a log), your result is no longer exactly representable, because your results aren't decimals or even rationals, but irrationals.

Even an arbitrary precision number type doesn't help you here—your only solution is an arbitrary precision math package—but that isn't what you're suggesting.

It sounds like you think using fixnums everywhere guarantees you the Platonic exact answer, but they can't actually do that:

- If you're doing anything complicated, you will unavoidably be making approximations and rounding. Adding a bunch of extra casts back to fixnums doesn't allow you to avoid approximation, just a false sense of security.

- OTOH, if you're not doing anything complicated, just totaling small numbers multiplied by round constants, then doubles are provably sufficient at getting an exact answer after rounding.

> When you use excel, you aren't using floats, you're using an excel numeric type which is implemented underneath using floats and some very specific behavior. The fact that it automatically deals with floating point errors and rounding is what makes it not a float.

You're contradicting yourself here. All Excel does is 1) compute using floats (doubles) everywhere and 2) round the result before showing it.

That's exactly what I'm advocating. Whether you choose to wrap it into a special Numeric or Money class doesn't change your answer.

Look, here's C#'s decimal class: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

Must use some fancy arbitrary precision arithmetic right? No, it's just a floating point number—just a particularly wide (128-bit) type.


> Of course it is. The point of the computation is to get the correct result. Numerical analysis hints at whether your computation will give you the right answer or not.

In this case, we're comparing a representation which is exact with one which is approximate. The numerical analysis you are quoting tells you how close (if not exact) your approximate representation is. It's irrelevant when compared to exact, because as long as the other benefits you gain (much larger and smaller numbers) aren't needed, that's all downside.

> Your system does not magically become 'safe by design' simply because you use ints everywhere, you still need to put in the extra work to make sure your numeric range is always sufficient—at which point you're doing numerical analysis, whether you choose to call it that or not.

You have to do that work with floating point numbers as well; you just also have to make sure there's no error that needs to be rounded. There is no advantage to floats here, but there is one less thing to worry about with integer values.

> repeatedly make untrue claims about FP calculations

I have made no untrue claims that I'm aware of, and I haven't seen you point out a specific claim as untrue, with evidence, where I didn't later show you were misinterpreting me. Please feel free to provide evidence though.

> then wave off the entire subbranch of CS that studies how numbers are represented on computers and how those calculations can go wrong as being entirely irrelevant.

As noted above, it's irrelevant when compared to a representation with no error of the same type. Please stop inflating my assertions to cover more than they directly claimed. This is a straw man argument, please stop.

> As soon as you actually do any 'complicated' math (take a square root or a log), your result is no longer exactly representable, because your results aren't decimals or even rationals, but irrationals.

Care must always be taken with moving between a real currency amount and an interim amount. It makes sense to switch representations at that point, and deal with the amount with a representation that is appropriate for partial values (floating point may be appropriate here). At the point it's stored again, it should be converted to an exact representation again. Partial cents are not valid amounts of currency to have, so it makes sense to deal with the difference before it is represented as a currency again.

> You're contradicting yourself here. All Excel does is 1) compute using floats (doubles) everywhere and 2) round the result before showing it.

I'm saying that Excel isn't only interested in representing currency. If they were, they may have chosen a different representation. Since their constraints are that they also need to be able to accurately represent 0.00001, they are going to choose the best hardware supported representation that support all their needs. That ends up being floating point. For a currency, quite a few of those requirements are no longer needed, so a different trade off is possible.

> That's exactly what I'm advocating. Whether you choose to wrap it into a special Numeric or Money class doesn't change your answer.

> Look, here's C#'s decimal class: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe....

> Must use some fancy arbitrary precision arithmetic right? No, it's just a floating point number—just a particularly wide (128-bit) type.

That's not all it is. It's decimal floating point, not binary floating point, meaning it can exactly represent all the base 10 values in its range. That is fundamentally different from binary floating point, and the fact that they chose this representation because it is, in their words, "appropriate for financial and monetary calculations" illustrates my point.
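A decimal float in miniature (a sketch, not the actual C# layout): an integer significand plus a base-10 exponent, which is why values like 0.1 and 0.2 are exact in it, at least while the significands stay below 2^53.

```javascript
// value = sig * 10^exp, with sig an exact integer
function decAdd(a, b) {
  const exp = Math.min(a.exp, b.exp);
  // Align to the smaller exponent; the scaling factors are exact
  // powers of ten, so the integer significands stay exact:
  const sig = a.sig * 10 ** (a.exp - exp) + b.sig * 10 ** (b.exp - exp);
  return { sig, exp };
}

const tenth = { sig: 1, exp: -1 }; // 0.1
const fifth = { sig: 2, exp: -1 }; // 0.2
console.log(decAdd(tenth, fifth)); // { sig: 3, exp: -1 }, i.e. exactly 0.3
```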


> At the point it's stored again, it should be converted to an exact representation again. Partial cents are not valid amounts of currency to have, so it makes sense to deal with the difference before it is represented as a currency again.

Exact decimal representation doesn't prevent rounding errors from accumulating over a chain of computations because oftentimes the correct value cannot be represented as an exact decimal. Forcing it by repeatedly rounding to the nearest currency unit, which is what you're proposing above, compounds the problem by needlessly throwing away available precision.

Simple example is paying interest on a bank account. If you insist on only storing account balances as an exact 2-digit decimal number, then any compound interest computation will quickly accumulate large errors, e.g.:

  // $100 in 'exact' integer form
  let float_balance = 10000
  let rounded_balance = 10000

  const daily_interest = 0.10 / 365 // 10% rate, compounded daily

  // Pay interest for 5 years
  for (let i = 0; i < 365 * 5; i++) {
    float_balance *= (1 + daily_interest)
    // This number goes into my DB, so I always round to the nearest penny!!!
    rounded_balance = Math.round(rounded_balance * (1 + daily_interest))
  }

  console.log(Math.round(rounded_balance) / 100) // 163.74
  console.log(Math.round(float_balance) / 100) // 164.86

  console.log('missing interest: $' + Math.round(float_balance - rounded_balance) / 100)
  // missing interest: $1.12
'Exact representation' does not ensure that your computation remains free from roundoff error, and sometimes makes it much worse. If you really insist on 'only storing as many digits as there exist in the currency' in your DB, then you will systematically underpay interest to all your customers... in this example, you'll actually start shortchanging your customer in just a few weeks.

It's your extra forced rounding to pennies that introduces the error: in reality, you really do owe fractional cents before they tick over into the next penny, even if you don't show it in the bank statement. The easiest way to solve the issue here is to store the account balances as floats, which keep track of the 'fractional pennies' which you claim are meaningless. Or forget about floats -- just use fixnums, but keep the lower digits rather than discarding them upon storage.

By the way, this is the whole point of numerical analysis — you cannot blindly assert that 'if my numerical representation is exact, then my computation will always be correct / minimize total errors'. Yes, the logic is simple and seductive. The conclusion is also completely, utterly, stupendously, insidiously wrong. I hope you don't actually work in a financial context, because this will bite you in the ass someday if you lean hard on it.

I don't really like to appeal to authority, but it was literally my day job for a few years to make sure the numbers were right. Half my team were physics PhDs, and the rest were usually math PhDs or MFEs. We didn't use floats because we were sloppy, we used them because they typically were the most reliable way to get the most accurate result, in a domain where most numbers represented currency or money.

Only when you are actually sending or receiving money from the outside world (e.g. on an invoice, wire transfer, or transaction) should you round to the nearest decimal, and at that point, it might be appropriate to use a decimal or fixnum type to exactly represent the monetary value that was actually moved, but again this is typically overkill since you can always just store the rounded amount as a float. This is safe because rounding is idempotent: round(round(float)) = round(float)

But internal monetary calculations are best kept in the same type with enough extra precision to do computations, and floats are often the correct choice here, with no forced intermediate rounding. If you need 2 accurate decimal digits and you insist on using fixnums rather than floats, then you should be passing around at least micros ($1e-6).
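A sketch of that convention (the fee rate and amounts here are made up for illustration): internal amounts kept in micros, a single rounding at the boundary, and a check that re-rounding is a no-op.

```javascript
const toMicros = (dollars) => Math.round(dollars * 1e6);
const microsToCents = (micros) => Math.round(micros / 1e4);

// Apply a fractional fee rate internally, in micros:
const feeMicros = Math.round(toMicros(19.99) * 0.07125);

// Round once, at the boundary where money actually moves:
const billedCents = microsToCents(feeMicros);
console.log(billedCents / 100); // 1.42

// Re-rounding an already-rounded amount is a no-op:
console.log(Math.round(billedCents) === billedCents); // true
```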


I think you are right. I also worked on bill-tracking software, and one of the first things we noticed is that each company has its own way of rounding totals. There is no one correct way. For example, one company rounds each line to three digits internally for the total but prints only two digits on each line.

For my software I started using decimal.js for the highest precision, but in the end it never made any difference cent-wise, even when the payments go into the millions.


I've worked with a bank at a fund management application and money is annoying in surprising ways.

Some background: funds can handle fractional quotes, and each fund can have its own rounding rules; fund mixes can handle fractional percentages; and when money enters or leaves the system it gets rounded to cents, because you can't move fractions of a cent (unless digital).

So, you get issues even using arbitrary precision math.

Operation order matters. Rounding has an impact everywhere, so you need to be consistent across the business logic; reversing an operation needs to have rounding applied in a specific order for the final data to match the initial data.

Order is also important when splitting money, because the last share gets all the leftover cents that come from rounding the previous shares, so that the total gets allocated fully. I.e., moving the last $1 from one fund mix to another mix that has 3 equally divided funds would round as a .33 movement toward each, but you can't just leave that cent on the previous fund mix and you can't just drop it hoping nobody notices, so one of them gets .34.
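A sketch of that allocation rule in integer cents (equal shares, with the leftover pushed onto the last share):

```javascript
function splitCents(totalCents, shares) {
  const base = Math.floor(totalCents / shares);
  const result = Array(shares).fill(base);
  // The last share absorbs whatever rounding left over, so the
  // shares always sum back to the original total:
  result[shares - 1] += totalCents - base * shares;
  return result;
}

console.log(splitCents(100, 3)); // [ 33, 33, 34 ]
```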


Collecting the rounding leftovers into another bank account was the plot of Office Space.


Also Superman III, 15 years earlier...


I don't know why people wouldn't just make an array for monetary values, or use two int values for the whole-number and decimal parts.


Finance is a different culture than Hacker News, and the people speak different languages, with different philosophies and assumptions about the world.

Maybe this is not a technical issue; maybe it's a subculture interaction issue that involves a lot of technical details.

I don't think it's easy to 'understand' unless one group spends time with another, and I don't mean a half-hour meeting. I mean something like shadowing someone for a week.


I really like the approach Perl 6 takes with FatRat[1]. You basically have an object which holds a numerator and denominator, so arithmetic calculations do not lose precision.

[1] https://docs.perl6.org/type/FatRat


Rational numbers don't lose precision, but the compromise is that they can get arbitrarily large after a series of calculations, even if the calculations themselves involve small numbers.

Having access to rational numbers is great and there are times they are incredibly useful, but I don't think it's a good default.


For monetary values, though, the denominators should be the same.

This means that the denominator won't get arbitrarily large.

For serious uses of rationals you need to have control over when the top and bottom are divided by the GCD. For currency you want to keep the denominator at 100. 50/100 wants to remain as that and not get simplified to 1/2.
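A sketch of such a fixed-denominator rational in JavaScript; addition never triggers GCD normalization, so 50/100 stays 50/100:

```javascript
class Money {
  constructor(numerator) {
    this.num = numerator; // integer count of hundredths
    this.den = 100;       // pinned: never reduced to lowest terms
  }
  add(other) { return new Money(this.num + other.num); }
  toString() { return `${this.num}/${this.den}`; }
}

console.log(new Money(50).add(new Money(25)).toString()); // "75/100"
```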


If you have a constant denominator, you don't need it. It's just a unit.


Ever seen the numeric tower of lisp? https://en.wikipedia.org/wiki/Numerical_tower

The link to the scheme documentation that goes over this is quite good.


That's fantastic! Do you have any more resources on this? I'd like to look at that concept in more depth.


Nothing specific. The scheme manual linked is a good one. As are the docs for Common Lisp. https://www.cs.cmu.edu/Groups/AI/html/cltl/clm/node16.html

Honestly, most all older manuals and documents that were not related to systems programming seem to cover numeric programming decently enough. I get the impression that languages used to worry more about that sort of thing, but then the truly heavy lifting of numeric programming moved into the specialized domains of Matlab and friends, and then programmers slowly unlearned most of what was learned regarding numbers.


Clojure does this out of the box: `(/ 1 3)` is 1/3. Removes a lot of annoying floating point errors.


Adding to the languages that do this, Factor does as well. "1 3 /" produces the rational value "1/3". https://docs.factorcode.org/content/article-rationals.html


Floating point arithmetic is not the only problem. See banker's rounding.


Yes, but banker's rounding should be the last step in a calculation, not an intermediate one.
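For reference, a minimal sketch of round-half-to-even (banker's rounding) in JavaScript, my own illustration rather than any library's code, applied to a value already scaled to the target precision (e.g. cents), where halves like 2.5 are exactly representable:

```javascript
// Round half to even: halves go to the even neighbor, which avoids
// the systematic upward bias of always rounding halves up.
function bankersRound(x) {
  const floor = Math.floor(x);
  const diff = x - floor;
  if (diff > 0.5) return floor + 1;
  if (diff < 0.5) return floor;
  // exactly halfway: pick the even neighbor
  return floor % 2 === 0 ? floor : floor + 1;
}
```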


Banking apps, e-commerce websites, stock exchange platforms, we interact with money daily. We also increasingly rely on technology to handle ours.

Yet, there’s no consensus around how to programmatically handle monetary values.

I wonder if the average person would find this frightening.


I don't think it's true that there's no consensus around how to programmatically handle monetary values. Maybe there's no consensus around how to do it in JavaScript.

In the Java world, there's JSR-354: https://jcp.org/en/jsr/detail?id=354

Cobol has handled money well for decades.

.NET has Decimal, which should handle money correctly as long as you're working with values less than 79.2 octillion.

In my part of the world, at least, big banks tend to be mostly Java and Cobol on the back end. You'll definitely see Node, Go, and others being used for various tasks throughout the company, but when it comes to keeping track of money and moving it around, it's primarily Java and Cobol doing the heavy lifting.

That's what I've heard from other developers, anyway. I haven't worked at a bank myself.


Exactly correct. More stuff than most people expect is still handled by COBOL. I haven't worked with a bank in a few years, but back in 2015 the one I was at was attempting to migrate a lot of the COBOL tasks to Java.


IBM has had a solution for a long time: http://speleotrove.com/decimal/decarith.html

Python uses it: https://docs.python.org/3.5/library/decimal.html

Just because JS is stupid doesn't mean other languages are.


> Just because JS is stupid doesn't mean other languages are.

This bewilders me. They keep cramming so much new stuff into JS these days - arrow functions, promises, async/await, God knows what, but not ints? I'm not asking for the whole numeric tower of Lisp languages; I can live without rationals and bignums and complex numbers. But God's own fixed-width integer? You'd think they'd have added it in the years since JS started being used for serious things.


Douglas Crockford has talked about JavaScript's number handling, and it makes for a good listen. It's in https://www.youtube.com/watch?v=DxnYQRuLX7Q and there's a follow-up in https://www.youtube.com/watch?v=oYFU0gkAYdc


My goto for any kind of arbitrary precision math in JavaScript is https://github.com/MikeMcl/big.js

As others have noted, if you're doing any kind of math that involves floating point numbers and you will be displaying them to a user, you'll almost certainly want something like this.

While it's not specifically a "Money" class it can form an excellent base for one.


Really excited to see what possibilities native BigInt in JavaScript will bring for number handling

https://developers.google.com/web/updates/2018/05/bigint
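For instance, one place BigInt could help is keeping integer cent arithmetic exact past the double-precision safe-integer limit. A small sketch (my own illustration):

```javascript
// Plain numbers lose integer exactness past 2^53; BigInt does not.
const limit = 2 ** 53;                       // 9007199254740992
const floatBroken = limit + 1 === limit;     // doubles can't tell them apart

const bigCents = 9007199254740992n;
const bigIntExact = bigCents + 1n !== bigCents; // BigInt stays exact
```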


Probably none. To handle currency correctly you have to handle rounding to the nearest penny correctly. Computers absolutely SUCK at this, in general. So you need a Decimal library that can handle decimal numbers. Integers ARE NOT DECIMAL.

IBM has had a solution for a long time: http://speleotrove.com/decimal/

Python uses it: https://docs.python.org/3.5/library/decimal.html

Get a proper decimal library, and then you can devote brain cells to actual problems. The IBM link above includes links to libraries that do it right.


You can also just use the smallest unit and work in cents. The Stripe API does this instead of dealing with decimals.
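A minimal sketch of that approach (the helper name is mine, not Stripe's): parse the display string straight into integer cents, so no float multiplication is involved, and only convert back at the display boundary.

```javascript
// Parse "19.99" into 1999 integer cents without float math.
// Sketch only: ignores negatives, currencies, and input validation.
function toCents(dollarsString) {
  const [dollars, cents = "0"] = dollarsString.split(".");
  return parseInt(dollars, 10) * 100 + parseInt(cents.padEnd(2, "0"), 10);
}

const subtotal = toCents("19.99") + toCents("0.05"); // 2004 cents
```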


I looked into using Dinero.js before but it was overkill for my use case of calculating cart totals in USD only. I went with Big.js because it was better suited for what I wanted: a small library for decimal arithmetic.


I think the advantage behind using something like Big.js is to future proof the code against hyper-inflation. Though a dedicated currency engine would be better, with the ability to set and conveniently update the currency denomination of stored values to cope with this.

With float64's maximum 'safe' integer at almost ten thousand trillion, I believe this should be enough to compute most commercial transactions securely, as long as the amount's base denomination is up to date (e.g. cents or dollars or megadollars).

The quantization noise mentioned in this article is possible to round off when appropriate.

  function round_tail_digits(c){
    // drop float noise beyond ~15 significant decimal digits
    return Math.round(c*1e15)/1e15
  }

  function round_at_hundreds(c){
    // round to two decimal places (cents)
    return Math.round(c*100)/100
  }


> I think the advantage behind using something like Big.js is to future proof the code against hyper-inflation.

This is probably over-engineering.

Currencies that go through hyperinflation typically end up being officially re-denominated many times, there's no 'zero maintenance' path that a library will give you.


This approach, at least in languages like C and Java, would effectively smush the effective MAX_INT down to MAX_INT/1e15 or whatever. No bueno.


It doesn't because basically - the point floats. The max_int doesn't mean the largest whole number that can be represented, it effectively means how much resolution you have on the number line in every one of a very wide range of scales. These languages all give exactly the same standard IEEE754 float math results for basic operations.


Can you illustrate a case where you can't use regular FP arithmetic and round to 2 decimal places at the end?


Bring up a Javascript interpreter and try this:

    $ node
    > 0.1 + 0.2 - 0.3
    5.551115123125783e-17
Close enough for trivial cases but errors can accumulate.

Ruby, Perl5, Python also fail this. Perl6 uses Rat numbers for things like this and doesn't have this problem.


You'd need to accumulate 10^14 such errors to be off by an entire penny, which doesn't seem relevant to a web shopping cart.


Addition and subtraction are usually fine as you point out. Multiplication and division can cause problems depending on the values and the number of times you do it. If you are doing more complicated financial calculations, you could get in trouble more quickly. If you are careful and know exactly what you are doing, then you can certainly get away with normal FP for many applications.

My experience has been, though, that many developers have a bit of a laissez-faire attitude towards these kinds of details (time is, of course, another area where precision is incredibly important in corner cases). Rather than expecting all future programmers to be as diligent as they need to be, it's probably easier to simply use tools that make it difficult to make mistakes. Every time I'm working on financial systems I build currency classes (usually I'm doing OOP). Usually people grumble and there will be people who ignore the currency classes. I've never had to wait long before someone trips over an edge case and everybody understands the benefits.


> Ruby, Perl5, Python also fail this.

Ruby and Python have arbitrary precision decimal and rational support in the standard library. While there are arguments to be had over whether using binary floats by default for simple decimal literals is a good choice, that's a different issue than not having any other kind of numbers available by default.


Sure: split an amount into 3 payments. Take one dollar, pay out 0.33333333 dollars once, 0.33333333 dollars the second time, and the remainder (1 - 0.33333333 - 0.33333333) as the outstanding balance. That doesn't match what you should have: you should have had 0.34, but you have 0.33 if you round only at the end.

Do the same with a proper decimal datatype, and the same operations give the correct result - pay out 1/3=0.33; pay out 1/3=0.33; the last payment is 1-0.33-0.33 = 0.34 exactly.

The same goes with all kinds of other issues e.g. totalling up tax or discount amounts (calculated as a percentage) where you'll get mismatches if rounding happens at the wrong time, etc.

E.g. ten separate receipts each selling a 0.10 item with a 9% tax or discount will result in each receipt having 0.01 tax or discount; so the proper total in this case is 1.00 sales with 0.10 tax or discount; different than the case if you sell ten such items in a single transaction. If you do all the calculations in floats, you get this wrong.
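The receipt example above can be reproduced with integer cents; rounding per receipt and rounding once on the aggregate legitimately disagree, which is why control over *when* rounding happens matters:

```javascript
// Tax in integer cents, rounded at the receipt boundary.
function taxCents(amountCents, ratePercent) {
  return Math.round((amountCents * ratePercent) / 100);
}

const perReceipt = taxCents(10, 9);  // 0.9 cents rounds up to 1 cent
const tenReceipts = 10 * perReceipt; // 10 cents of tax across ten receipts
const oneReceipt = taxCents(100, 9); // 9 cents of tax on one big receipt
```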


  (1.005).toFixed(2) // Expected rounding to "1.01"
  "1.00"


Cases like that are everywhere.

2.875.toFixed(2) = "2.88" while 2.675.toFixed(2) = "2.67"

7205759403792794 + 1.1 = 7205759403792795

7205759403792794 + 1.9 = 7205759403792796


Sure. Divide by two with any kind of predictability.

for (let n = 0; n < 1; n += .01) { console.log(`n: ${n}, half: ${(n / 2).toFixed(2)}`); }


You can't model 1.0 correctly but it will be something like 1.000006667. If you add enough of these numbers you will be off.


Your details are off. 1.0 is precisely representable in double precision floating point with no error.


Sorry, picked the wrong number. But I think the general idea is correct.


I am pretty sure you were thinking of 0.1 which, indeed, requires an infinite number of digits to be represented in binary (like 1/3 in decimal).


Probably. I just remember that this tripped me up big time a long time ago.


Integers can be represented exactly in IEEE754.


Not all of them, e.g. 9007199254740993 can't be represented in a double precision format (what javascript uses)

For single precision floats you start having issues representing integers as low as 16777217
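Both of those boundaries can be checked from JavaScript itself:

```javascript
// Number.isSafeInteger flags the double-precision boundary (2^53 - 1),
// and Math.fround exposes the single-precision limit mentioned above.
const ok = Number.isSafeInteger(9007199254740991);    // 2^53 - 1: safe
const notOk = Number.isSafeInteger(9007199254740992); // 2^53: not safe
const f32 = Math.fround(16777217);                    // rounds to 16777216
```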


Sure, you can’t represent Graham’s number either...

I know you knew what I meant.


No not really. You have to define the bounds you were talking about.

People believing that a float type can represent all integers that the int type in their language can represent is not exactly uncommon.


How about 2.1?

According to this https://www.h-schmidt.net/FloatConverter/IEEE754.html

2.1 will be represented as 2.099999904632568359375


well, 2.1 isn't an integer.


I wish you could add different currencies together:

    Dinero({amount: 5000, currency: 'USD'})
      .add(Dinero({ amount: 1000, currency: 'EUR'}))
...and then convert that sum to a concrete currency later.

This feature request inspired by http://blog.ploeh.dk/2017/10/16/money-monoid/


How would you deal with exchange rate, which changes in real time?


In the lazy version I have suggested, the exchange rates are deferred until you “need to know” and the sum is converted into a concrete currency.

If you’re keeping these values around in memory for an extended period of time and you care about rapid changes to exchange rates, then you have a complex domain and the deferred exchanges may or may not be a good fit, but at that point you need a larger system design, not a JavaScript library.

Also in my dream money library, division and multiplication would be done with whole number fractions like

    new Dinero({amount: 1000, currency: 'usd'})
    .multiply({numerator: 1, denominator: 3 })
which can similarly be deferred to prevent rounding errors in partial calculations.
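A rough sketch of what that deferred multiplication might look like (a hypothetical API, not Dinero.js):

```javascript
// Keep multiplication as an unreduced fraction and round only once,
// when the value is finally settled.
function multiply(money, frac) {
  return {
    amount: money.amount * frac.numerator,
    divisor: (money.divisor || 1) * frac.denominator,
    currency: money.currency,
  };
}

function settle(money) {
  // a single rounding step, deferred to the end
  return Math.round(money.amount / (money.divisor || 1));
}

const third = multiply({ amount: 1000, currency: "usd" },
                       { numerator: 1, denominator: 3 });
```

Chaining further multiplies before calling `settle` means intermediate results never accumulate rounding error.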


But say the euro goes down 50% against the USD, then you are probably going to want differing amounts of those currencies right?

I think generally you 'need to know' immediately.


You need a timestamp and a timeseries database of exchange rates so that you can go back in time and reconstruct your calculations at a later date as well.

Source: worked at a bank that implemented pretty much this.


Now do that with multiple different currencies at multiple valuation points, with optional exchange rate costs. It's a pain in the arse ...

Source: worked on multi-currency fund tracking software.


Agreed with both of you, and using an exchange rate at a specific point in time actually could make deferring that calculation even more of a sound idea, depending on the exact semantics of what you’re computing.

At larger scales even exchange rates are a leaky abstraction over a forex market, so there you go. No solution will work everywhere.

But if you’re entering the prices of last week’s coffee and hotel room into a web app to see how much you’ll be reimbursed, you don’t want to use a new exchange rate every time a form input changes.

Source: worked on multi-currency expense reporting software where everything is done in JavaScript floats.


We should all start a club defined by our forex market induced alcoholism.


The only rule is no splitting the bill


I’d rather keep the IO call to determine exchange rate far away from my arithmetic, personally.

This technique is fantastic for units that don’t change relations though, like mass and length.


From the docs

> You must provide your own API to retrieve exchange rates.

https://sarahdayan.github.io/dinero.js/module-Dinero.html#~c...


While reading, I thought "hang on, what about non-decimal currencies?" but it turns out that is no longer a concern for modern usages: https://en.wikipedia.org/wiki/Non-decimal_currency


You can handle non-decimal currencies by treating money as a fraction.

You can implement a correct set of fraction manipulation functions in well under 100 lines of code with a bit of algebra. It seems likely there are several well-tested high-performance fraction libraries for JavaScript. They should be tiny.
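For illustration, a hypothetical minimal unreduced-fraction type along those lines, using pre-decimal UK currency (240 pence to the pound) as the non-decimal example:

```javascript
// Minimal fraction type; deliberately no GCD reduction, so the
// currency's natural denominator is preserved.
function frac(num, den) {
  return { num, den };
}

function addFrac(a, b) {
  // put both over a common denominator
  return frac(a.num * b.den + b.num * a.den, a.den * b.den);
}

function mulFrac(a, b) {
  return frac(a.num * b.num, a.den * b.den);
}

// two shillings and sixpence = 30/240 of a pound (a half-crown)
const halfCrown = frac(30, 240);
```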

Some people in the UK apparently voted to leave the EU so that they could have a blue passport [0], perhaps there are some who also hope we return to imperial measurements and pounds, shillings and pence. I'm quite sure that even in that unlikely event, some programmers will still insist on using floating point numbers for currency.

[0] Although Malta has a blue passport and is in the EU. And the only reason the passports were blue in the first place was to meet an international standard.


> Floats: 0.1 + 0.2 // returns 0.30000000000000004

The question is how much of an issue is this really? How often is 4 parts in 10 quadrillion a meaningful error when dealing with money? Especially since most of the time JS is dealing with money, it's for presentation.


Because value !== 0.3 and that can cause unexpected problems on the edges (which are the worst kind of issues).


I'm far from a wizard on the topic, but in my experience it's not that simple, since the relative magnitudes of the operands and operations affect the size of the error in non-obvious ways.

I spent 13 years maintaining a 2mloc reservation and accounting system that used floats for everything. By the time I got there and understood enough it was simply too late to do anything about it.

Just say no, really; that road leads to madness.

If you need precision; and many do; consider fixnums, rationals or bignums, in order of increasing complexity and flexibility.


I might suggest reading https://doc.lagout.org/science/0_Computer%20Science/3_Theory...

This is in general how floating point math works.


This is actually the plot of a few movies, people have been convicted of it in real life. [1]

[1] https://en.wikipedia.org/wiki/Salami_slicing


I ran into this once because a price was slightly too low, and got truncated. A $19.99 item got displayed as $19.98. You can work around this by rounding everywhere but that’s error-prone.


How much of an issue is it?

The kind of issue you do not want to have: issues with money can go very wrong when calculating discounts, tax percentages, interest, or whatever.


After doing a lot of calculations? This is a big problem.

The firm I worked at used built in Java libraries and always deferred rounding until the very last possible moment.


Yeah seriously. You'll have some front-end lib func converting that to 2 significant digits right of the decimal anyway. Weak argument imo haha.


Yet, there’s no consensus around how to programmatically handle monetary values.

Do what COBOL does and use it to verify your monetary library. Your code will agree with the majority of financial institutions.


I usually use cents as the stored amount, so no decimals. Is there something obviously wrong with this approach?

Ie. if the price is 150.95 I store 15095 in the db and do the calculations with cents, and then divide by 100 when presenting the values.

I have never come across fractional cents, maybe that is an issue in some scenarios (would be interested to know what those are... Even tax percentage calculations are rounded to the cent)


Handling currency in JavaScript just sounds like a bad idea. It's better to be using a language with a type system that has your back on this kind of thing, and preferably to deal with monetary values as rational numbers.

There's a good article on this subject here:

https://ren.zone/articles/safe-money


Equally important to choosing a valid numeric type to store the data is solid business logic to handle it.

In the contrived example of splitting a value of 999.99 among more than one payment, the correct solution is:

    * Determine the next payment based on the current balance.
    * Invoice/Apply/Etc that payment
    * Repeat until there is a balance of 0 (zero).


Recording an incoming payment and applying that payment value to specific accounts is its own thing to be aware of. People want to see that they paid $200, even if it was $150 to one account and $50 to another.


This advice doesn't have anything to do with JavaScript until the author starts plugging their own library.


This seems like the basis for Stripe handling amounts in cents to maintain integer values.


Many cryptocurrency libs like Web3 opt to use bignumber.js for all money manipulation


Nice and simple writeup. Straight to the point without being ceremonious.


Don't.

For the love of God, just don't.


A while back I did some exhaustive testing of several ways to calculate sales tax on a sale in Python3 and JavaScript, testing the various "obvious" ways against all tax rates in increments of 0.0001 from 0.0000 to 1.0000 and all sale amounts from 0.00 to something like 25.00.

It was interesting. Here are some simple cases to try. One or more of these break most of the seemingly obvious approaches:

   1% of $21.50
   3% of $21.50
   6% of $21.50
  10% of $21.15
Here are some examples of obvious-looking but wrong approaches. These all return the tax in pennies, so the desired results are 22, 65, 129, and 212. Inputs are floats, such as 0.01 for 1% and 21.50 for $21.50.

  # 1%, 10% wrong
  def tax_f1(amt, rate):
    tax = round(amt * rate,2)
    return round(tax * 100)

  # 3%, 10% wrong
  def tax_f2(amt, rate):
    return round(amt*rate*100)

  # 6% wrong
  def tax_f3(amt, rate):
    return round(amt*rate*100+.5)
Here are two that do work:

  def tax_f4(amt, rate):
    amt = round(amt * 100)
    rate = round(rate * 10000)
    tax = (amt * rate + 5000)//10000
    return tax

  def tax_f5(amt, rate):
    amt = int(amt * 100 + .5)
    rate = int(rate * 10000 + .5)
    tax = (amt * rate + 5000)//10000
    return tax
If instead of floating point input, you work with numbers already scaled up to integers (scaling the rates by 10^4 and the money by 10^2), this works:

  def tax(amt, rate):
    tax = (amt * rate + 5000)//10000
    return tax
The last three approaches (tax, tax_f5, and tax_f4) also work well in PHP, Perl, and JavaScript.
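For example, a JavaScript port of tax_f5 (my translation, not the original author's code) might look like:

```javascript
// Scale float inputs to integer cents and basis points, then do the
// half-up rounding divide entirely in integers.
function taxCents(amt, rate) {
  const cents = Math.floor(amt * 100 + 0.5);   // e.g. 21.50 -> 2150
  const bps = Math.floor(rate * 10000 + 0.5);  // e.g. 0.06  -> 600
  return Math.floor((cents * bps + 5000) / 10000);
}
```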

In my code, I attach a suffix to rates and moneys that are scaled this way telling the scale factor, so I might have tax4 and cost2, meaning the tax rate is x10^4 and the cost is x10^2. I was disappointed to find that neither Perl nor Python would let me use Unicode subscript numbers in my variable names. :-(

I also did a series of exhaustive tests where I'd take every string of the form digits-dot-digits with up to N digits after the dot, parse them with the language's most natural string-to-float converter, multiply that by 10^N and round to an integer, and verify that this was the "right" integer. I also verified that doing a floating point divide of that integer by 10^N and then using the most natural way in the language to turn that into a string with N digits after the dot gave the right result.

The idea here was to convince myself that I would not run into problems at the "convert to integer" or the "convert from integer" part. I did these tests in C, Perl, Python, and JavaScript I think. I'm having trouble finding my code now so I'm not sure if I tested in all of them.

Anyway, I concluded that it was safe. It's only when you start computing in float point that you have to worry.

Finally, in most of what I was doing at the time, I actually didn't ever need to convert back to a floating point format. All I was going to do with the final price, computed from cost + tax, was just display it to the user...and so if I converted to floating point I'd end up just sprintf'ing it (or equivalent) back to a string. All sprintf would really be being used for was to make sure it had the right padding and leading zeros and stuff like that.

So instead of going through floating point, I just did this (JavaScript):

  function cents_to_dollars(cents)
  {
    cents = cents.toString();
    while (cents.length < 3)
        cents = '0' + cents;
    return cents.substr(0,cents.length-2) + '.' + cents.slice(-2);
  }
(I assume that if I knew JavaScript I could do something much nicer than that while loop for the padding).
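For what it's worth, String.prototype.padStart (ES2017) does make the padding a one-liner; a sketch assuming non-negative cents:

```javascript
// Same idea as above: pad to at least 3 digits, then split off the
// last two as the cents.
function centsToDollars(cents) {
  const s = String(cents).padStart(3, "0");
  return s.slice(0, -2) + "." + s.slice(-2);
}
```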


why didn't you just use the decimal module[0]. It's been in Python since 2.4 and just handles penny problems for you, basically zero brain power required.

0: https://docs.python.org/2/library/decimal.html


> Using floats to store monetary values is a bad idea

This is way overstating the case; it very much depends on whether getting the exact answer down to the penny matters to you.

Worked for a few years at a large investment bank, everything was done in floats because the modeling error of your derivatives pricers would be much larger than the roundoff error, but floats were much more convenient to develop with and faster since you avoided a bunch of casts.

Most of the time people think they need things down to the penny, they actually don't. An accountant does not care if you rounded off a few thousand dollars in your $10bn a year income statement.

A customer does not really care that you paid them $0.33 three times, rather than $0.33 twice and $0.34 on the third transaction (and fixnums don't even solve this 'extra precision' case without a bunch of extra work).


I really want to be generous here: I think your experience is very isolated. I have family that is executive level at a Wall Street bank, they would have you fired over the superfluous loss of a dollar, much less a few thousand dollars because that type of rounding error can end up in the millions over the course of a year if you deal 100+ million dollar transactions 5-6 times a day.

Here's how it would go:

"Why are we losing a couple of thousands of dollars?"

"Well it was easier to just work in floats"

"Pack your shit and get the f--- out of here, now."


I'm totally with you. We write/run financial software for government. We MUST be correct to the penny. Our auditors absolutely check this stuff.

We've never done $ math in JS, that just seems like a stupid place to do it. If it's in the browser, you can't trust anything that happens there anyways, as it runs client side, with almost zero security. We happily take $ input via JS on the client, but we just pass it through, and let the server(s) handle the actual math.


> because that type of rounding error can end up in the millions over the course of a year if you deal 100+ million dollar transactions 5-6 times a day.

I think we're talking past each other a bit.

Financial products pricers themselves only have as much precision as the inputs, which in many cases might only have 3-5 significant figures. Using arbitrary precision math doesn't buy you any more precision than what your inputs have.

Remember that a 64-bit float gives you a 52-bit mantissa, roughly 15-16 decimal digits of precision, so unless you're doing something that has terrible numerical stability, the error of your model assumptions and inputs will always dominate floating point error. It doesn't matter if your model spits out $XXXXXX.08 or $XXXXXX.15 - both answers are essentially the same.

Just to provide context, we were pricing deals with notional amounts in the billions and PVs up to tens of millions.

Note that we're comfortable doing nuclear bomb simulations or climate models using FP64 floating point calculations, for the same reasons — the error in your model is bigger anyway.

> they would have you fired over the superfluous loss of a dollar

The handling of customer accounts is different at a bank since you're not supposed to be creating or losing money out of thin air—that's the Fed's job.

But that's not what the article claims—it claims that any monetary amount, anywhere, should never use a float, which IMO overstates things from both a business and maths standpoint.


It really depends upon the context. If they're talking about reporting and forecasts, floats are almost certainly fine. The back end actual storage is generally always numerics, however.


Depends on the reporting - if it's legally mandated reporting or tax reporting, then it's going to get audited and any rounding mismatches are definitely not okay.


And I had the misfortune for working for a few years with bond modeling software that used floats internally.

Turns out that the first thing a financial analyst does to compare two implementations of a bond securitization model is to look at the pennies. If those tie out, then you're good. But if your software predicts money off by a dime in a billion dollar model deal, the analyst WILL notice and WILL try to get to the bottom of it.

Yeah, floating point roundoff error seems like it should be immaterial. However it can matter a whole lot.


They need some way to validate things, and it turns out if the number isn't exactly the same when they run their samples they don't trust any of them. For at least some value of them.

The founder at the last place I worked at was famously difficult to satisfy because of this. You couldn't even begin to talk about how you were presenting data to him unless you were absolutely sure the numbers in your mockup were correct. He just couldn't see past an "incorrect" value when dealing with faked data ...



