ByteFormatter
"Precision" is the total number of significant digits in a number (e.g. for 1.234, precision = 4, not 3). What you're using it for (decimal places) is usually called the "scale"
As pointed out on reddit: https://www.reddit.com/r/PHP/comments/3yq3yx/byteformatter_is_a_psr2_compliant_library_that/cygfps8
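To make the two terms concrete, here is a minimal PHP sketch (illustration only, not ByteFormatter code; the string-based counting is naive and ignores leading zeros in the fraction):

```php
<?php
// For the value 1.234: "precision" is the total count of significant
// digits (4), while "scale" is the count of digits after the decimal
// point (3).
$value = '1.234';

[$intPart, $fracPart] = explode('.', $value) + [1 => ''];

$precision = strlen(ltrim($intPart, '0')) + strlen($fracPart); // 4
$scale     = strlen($fracPart);                                // 3

echo "precision = $precision, scale = $scale\n";
```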
Is there a good source to verify this?
Compare http://www.thefreedictionary.com/precision and http://www.thefreedictionary.com/scale
I'm not sure if a dictionary is appropriate here since we're talking about mathematical usage of terms rather than English usage.
@drrcknlsn Thank you for the large number of sources. In the database world this represents a clear consensus of terminology.
Furthermore, I found an article on Numerical Precision from Wolfram that uses the terms precision and accuracy; these are equivalent to precision and scale as used in the database world. More to the point, it is clear that precision is never used to refer to digits to the right of the decimal point, so this bug is now confirmed.
FYI, this kind of relates to the `number_format()` suggestion I had. Presuming that something to that effect will go in, this change seems like it'd be intertwined with it.
btw, one can probably be forgiven for using 'precision' here, since PHP itself uses it the same way
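For reference, a couple of calls showing how PHP's own `round()` uses that parameter (the signature in the comment is PHP 8's; only the annotations are mine):

```php
<?php
// PHP names this parameter $precision, even though it counts decimal
// places (what the database world calls "scale"):
//   round(int|float $num, int $precision = 0): float
var_dump(round(1.2345, 2));   // float(1.23) — two digits after the radix
var_dump(round(1234.5, -2));  // float(1200) — negative values round left of the radix
```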
@okdana That's interesting. Also, Wolfram uses the term accuracy, but accuracy is just a synonym for precision anyway. I think this places the terminology under dispute once more until more sources can authoritatively declare the correct naming.
I personally use 'precision' in conversation to describe the number of decimal places, but I have a very poor grasp of mathematics, so idk. I did some Google research and found the following:
I could only find a few specs that use 'precision' unambiguously to refer to decimal places:

- PHP's `round()`, as previously mentioned
- In `dc`, the 'precision value' sets the 'number of fraction digits'
Several are ambiguous, inconsistent, or irrelevant:

- The POSIX C `printf()` specification uses 'precision' for both digits before the radix (for `%i`) and digits after the radix (for `%f`)
- Almost all other languages with `printf()` or something similar (PHP, Perl, Python, Ruby) use the POSIX terminology for those features
- Perl's `Math::BigFloat` uses 'precision' similarly to POSIX
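To illustrate that ambiguity, a small PHP sketch; the `%d`/`%i` behaviour is only described in a comment, since it comes from the POSIX C spec rather than being demonstrated in PHP:

```php
<?php
// With %f, the printf-family "precision" is the number of digits after
// the radix:
printf("%.3f\n", 1.5);    // 1.500
printf("%.10f\n", 0.1);   // 0.1000000000

// Under the POSIX C printf() spec, the same precision field applied to
// %d / %i instead means a *minimum number of digits*, zero-padded —
// e.g. printf("%.3d", 5) prints "005" in C. That dual use is why the
// terminology is ambiguous across conversions.
```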
These side-step the issue entirely:

- PHP's `number_format()` uses `$decimals`
- Python's `round()` uses `ndigits`
- Ruby's `Float.round()` uses `ndigits`
- JavaScript's `Number.prototype.toFixed()` uses `digits`
- Java's `NumberFormat` and friends use 'fraction digits' to refer to digits after the radix
- Microsoft languages (e.g. `Math.Round()` in C#) use `decimals`
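For example, with PHP's `number_format()` from the list above:

```php
<?php
// number_format() side-steps the terminology: its second parameter is
// simply called $decimals.
//   number_format(float $num, int $decimals = 0, ...): string
echo number_format(1234.5678, 2), "\n"; // 1,234.57
echo number_format(1234.5678), "\n";    // 1,235
```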
These use 'scale' for digits after the radix:

- As @drrcknlsn mentioned, seemingly all RDBMSes
- Java's `BigDecimal` (cf. `setScale()`)
- `bc`, as well as its PHP bindings (cf. `bcscale()`)
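And a small bcmath sketch showing the 'scale' terminology in practice (requires ext-bcmath; note that bcmath truncates rather than rounds to the requested scale):

```php
<?php
// bcmath uses "scale" for digits after the radix, matching the RDBMS and
// BigDecimal usage.
bcscale(4);                      // default scale for subsequent bc* calls
echo bcdiv('10', '3'), "\n";     // 3.3333

echo bcdiv('10', '3', 2), "\n";  // 3.33 — a per-call scale overrides the default
```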
I think it's probably valid outside of a formal arithmetic context to use 'precision' to refer to the number of decimal places, but now that I know the 'proper' term, I would probably have used 'scale' instead, personally.