
feat(assert): `assertAlmostEquals()` sets useful tolerance automatically

Open bradenmacdonald opened this issue 11 months ago • 3 comments

I was using assertAlmostEquals in my project and found that its default behavior with small numbers is not very useful. For example, neither of the following throws, even though the compared values are very different:

assertAlmostEquals(5e-7, 6e-7); // values differ by roughly 20%
assertAlmostEquals(1e-8, 1e-9); // values differ by an order of magnitude

This definition of "almost equal" for numbers that have a different order of magnitude doesn't match my intuition about how such an assert should work.

If the goal of the assertion is to "take into account IEEE-754 double-precision floating-point representation limitations", then it is not doing that by default - you have to manually specify a precision based on the expected order of magnitude of the values that you're comparing. For projects that have to deal with a wide range of values, this is not ideal. Why don't we make the default "auto-detect" a useful precision range?

This proposal changes the behavior of assertAlmostEquals so that its default tolerance is not an absolute numeric value but one hundred-thousandth of a percent of the expected value, i.e. a relative tolerance of 1e-7. To keep the change largely backwards compatible, you can still specify an absolute tolerance explicitly. I think this makes the default much more useful and avoids some footguns.
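
A minimal sketch of the proposed behavior (an illustration under the assumptions stated above, not the actual deno_std implementation):

function assertAlmostEquals(
  actual: number,
  expected: number,
  tolerance?: number,
): void {
  if (Object.is(actual, expected)) return;
  const delta = Math.abs(actual - expected);
  // Default: scale the tolerance to the magnitude of `expected`,
  // rather than using a fixed absolute value.
  const t = tolerance ?? Math.abs(expected) * 1e-7;
  if (delta <= t) return;
  throw new Error(
    `Expected ${actual} to be almost equal to ${expected} (tolerance: ${t})`,
  );
}

// With this default, both examples from above now throw:
// assertAlmostEquals(5e-7, 6e-7); // delta 1e-7 > tolerance 6e-14
// assertAlmostEquals(1e-8, 1e-9); // delta 9e-9 > tolerance 1e-16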

All of the existing tests for this assert continue to pass unchanged, but the newly added tests only pass with this feature/fix in place. I'm not sure if this is considered a breaking change or not.

bradenmacdonald, Mar 10 '24 20:03


This change makes sense to me. However, because this is a breaking change, we're planning to land it in v1.0.

kt3k, Mar 25 '24 04:03

Codecov Report

All modified and coverable lines are covered by tests.

Project coverage is 92.12%. Comparing base (b9374d3) to head (8fdc153).

@@           Coverage Diff           @@
##             main    #4460   +/-   ##
=======================================
  Coverage   92.12%   92.12%           
=======================================
  Files         487      487           
  Lines       38771    38774    +3     
  Branches     5388     5391    +3     
=======================================
+ Hits        35718    35721    +3     
  Misses       2997     2997           
  Partials       56       56           


codecov[bot], Apr 29 '24 11:04

one hundred-thousandth of a percent of the expected value

I like the idea of improved ergonomics, but can you explain the motivation behind this default value? I think it's important to have documented justification for defaults instead of offering opaque and seemingly arbitrary values for consumers.

jsejcksn, Jun 06 '24 11:06

Sounds like a good and interesting point. I looked at a few other standard libraries: Python has math.isclose in its standard library, and it appears to default to a relative tolerance of 1e-9 (rather than 1e-7): https://docs.python.org/3/library/math.html#math.isclose
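
For reference, a TypeScript transcription of the comparison documented for Python's math.isclose (the name isClose here is just illustrative):

// Python: math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0)
// Note the symmetry: the relative tolerance scales with the larger
// magnitude of the two inputs, whereas the proposal above scales
// with `expected` only.
function isClose(a: number, b: number, relTol = 1e-9, absTol = 0): boolean {
  return Math.abs(a - b) <=
    Math.max(relTol * Math.max(Math.abs(a), Math.abs(b)), absTol);
}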

kt3k, Jun 06 '24 14:06

can you explain the motivation behind this default value?

1e-7 "felt" like a value that would hide most tiny rounding errors while catching most math/computation/formula issues. I don't really have any justification for what's ultimately an arbitrary default. Using 1e-9 for similarity to python would be fine too, but I think theirs is also just an arbitrary value - they don't provide any justification for it that I can see.

bradenmacdonald, Jun 06 '24 17:06

I agree - we should document the justification somewhere.

1e-7 "felt" like a value that would hide most tiny rounding errors while catching most math/computation/formula issues.

Even something similar to this might suffice as a justification.

iuioiua, Jun 06 '24 20:06

On the research front: I found a quite insightful article that examines the complexities of comparing floating-point numbers of varying magnitudes: Comparing Floating Point Numbers, 2012 Edition.

jsejcksn, Jun 07 '24 17:06

This change seems to cause a failure when comparing two negative numbers.

See #4994
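
A plausible cause (an assumption on my part, not confirmed in this thread): if the landed implementation derives the default tolerance as expected * 1e-7 without taking the absolute value, a negative expected yields a negative tolerance, and the delta <= tolerance check can never pass:

// Hypothetical illustration of the suspected bug: deriving the
// default tolerance without Math.abs() makes it negative whenever
// `expected` is negative, so the check always fails.
const expected = -2;
const actual = -2.0000000001;              // well within any sane tolerance
const delta = Math.abs(actual - expected); // 1e-10, always non-negative
const badTolerance = expected * 1e-7;      // -2e-7: negative!
console.log(delta <= badTolerance);        // false: assertion would throw
// Math.abs(expected) * 1e-7 (= 2e-7) would correctly accept this pair.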

babiabeo, Jun 09 '24 02:06