Resource limits for BigDecimal?
Should we specify resource limits, such as a maximum number of digits or a maximum exponent? My intuition is "no": this should follow how Strings and BigInts work, leaving these limits implementation-defined. Fabrice Bellard said,
> If infinite precision is kept, the standard should say something about the maximum and minimum allowed exponents. Unbounded exponents should not be allowed as it complicates the implementation for little benefit. One option is to tell that the exponent range is implementation defined with a minimum range. Another option is to force a range so that there is no implementation specific case.
>
> For BigInt it is like a memory limit so I see no problem with it. It is like limiting the maximum size of strings. For BigDecimal it is different because large exponents do not use more memory hence the limitation is arbitrary. Having it implementation defined is the simplest of course.
I also have the intuition that BigDecimal should follow Strings and BigInts here and leave these limits implementation-defined. Suppose we did specify limits and support BigDecimal("0.121..."): how would we handle a string that falls outside those limits?
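To make the question concrete, here is a sketch using Python's `decimal` module as a stand-in (it is not BigDecimal, but its contexts model exactly this kind of bounded decimal arithmetic). A bounded context has to pick one of the two plausible answers for an out-of-range literal: signal an error, or saturate to Infinity. The specific limits below are illustrative, not proposed values:

```python
from decimal import Context, Overflow

# An illustrative context with a bounded exponent range.
ctx = Context(prec=34, Emin=-6143, Emax=6144)

# An in-range string parses fine.
ok = ctx.create_decimal("0.121")
print(ok)  # 0.121

# An out-of-range string: with the Overflow trap enabled (Python's
# default), parsing raises decimal.Overflow; with the trap disabled,
# the value saturates to Infinity instead.
ctx.traps[Overflow] = False
too_big = ctx.create_decimal("1e999999")
print(too_big)  # Infinity
```

Either behavior (throwing vs. returning Infinity) is something a spec with explicit limits would have to choose; leaving limits implementation-defined pushes that choice onto implementations.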
Chiming in on this very old issue with a note on how the current state of affairs relates to it:
The issue of resource limits is mainly tied to the BigDecimal data model for decimals. We are currently pursuing Decimal128. Since this is a fixed bit-width approach (every value occupies 128 bits, giving 34 significant digits and exponents from -6143 to +6144), we get well-defined resource limits from the start.
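The Decimal128 bounds can again be sketched with Python's `decimal` module, whose context parameters match IEEE 754 decimal128 (34 digits of precision, Emin = -6143, Emax = 6144; these parameters are spec facts, though the code itself is only an illustration, not the proposal's implementation):

```python
from decimal import Context, Decimal, Overflow

# Context mirroring IEEE 754 decimal128.
d128 = Context(prec=34, Emin=-6143, Emax=6144)

# The largest finite decimal128 value: 34 nines times 10^6111,
# i.e. 9.999...999E+6144.
max_finite = Decimal("9." + "9" * 33 + "E+6144")
print(d128.plus(max_finite))  # representable as-is

# One step beyond overflows the fixed 128-bit format.
d128.traps[Overflow] = False
print(d128.plus(Decimal("1E+6145")))  # Infinity
```

So with Decimal128 the limits are no longer an implementation choice at all: they fall out of the format itself.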