Floating point vs real integers
As of master, this crate interprets `i64` as `f64` in the `Value::into_float()` method.
This has edge cases: an `i64` can hold values that cannot be represented exactly in an `f64`. Thus, we should change the code so that it does not interpret `i64`s as `f64`s, IMO.
Testcase: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=83c099f02bc9776ac79ec77aa14d476a
If someone wants to tackle this, feel free to take this patch as a starting point:
https://github.com/matthiasbeyer/config-rs/commit/d07f8a608851476cdcccf034c4602ec3fece30de
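For reference, the precision loss can be reproduced without the crate at all (a minimal sketch; the value `(1 << 53) + 1` is the smallest positive integer an `f64` cannot represent exactly, since `f64` has a 53-bit significand):

```rust
fn main() {
    // (1 << 53) + 1 cannot be represented in an f64; the cast rounds
    // it to the nearest even-significand value, which is 2^53.
    let n: i64 = (1i64 << 53) + 1; // 9007199254740993
    let f = n as f64;

    // The round-trip lost the trailing +1:
    assert_ne!(f as i64, n);
    println!("{} -> {}", n, f as i64); // prints 9007199254740993 -> 9007199254740992
}
```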
Hello! I'm just passing by, this repo piqued my interest :smile:
- Is this issue still relevant?
- I'm not sure I totally understand the issue here @matthiasbeyer. The case you showed in your test is equivalent to doing this:

```rust
println!("{}", 9223372036854775807_f64); // prints 9223372036854776000
```

This is on the user to take into consideration, IMO. If I put a large number in my config and decide to read it as a float, it's my fault that I get the wrong value. The same test you've written would pass if you change `c.get_float` to `c.get_int`. Wdyt?
Well, the original issue is that `i64` is interpreted as `f64`, which, as far as I understand, is not correct. I'm not really sure what we can do, how relevant it is, or even whether we can do anything sensible. Most of the time this might be irrelevant, of course. I wanted to have it documented, though (especially if someone feels they can improve the current situation in any way); that's why the issue exists.
mmm I see, thanks for the reply! So I don't think I will do anything about it, but I do have another idea in mind, I guess I'll open another issue for it :)
So currently there doesn't seem to be an easy way to check whether an integer can be represented exactly as a float. Even num's `from_i64`, which returns an `Option`, still returns `Some(...)`. The best option I can think of would be casting it back to an `i64` and seeing if that is different.
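The round-trip idea above could be sketched like this (a sketch, not crate code; the function name is mine). One caveat I'm aware of: since Rust 1.45, `as` casts from `f64` to `i64` saturate, so `i64::MAX` would round-trip "successfully" even though it is not exactly representable, and needs to be excluded explicitly:

```rust
/// Returns true if `n` survives an i64 -> f64 -> i64 round trip.
/// Caveat: f64-to-i64 `as` casts saturate, so i64::MAX (which rounds
/// up to 2^63 as an f64, then saturates back to i64::MAX) would pass
/// the naive check; exclude it explicitly.
fn fits_in_f64(n: i64) -> bool {
    n != i64::MAX && (n as f64) as i64 == n
}

fn main() {
    assert!(fits_in_f64(1 << 53));        // 2^53 is exact
    assert!(!fits_in_f64((1 << 53) + 1)); // 2^53 + 1 is not
    assert!(!fits_in_f64(i64::MAX));      // caught by the explicit check
    assert!(fits_in_f64(-123_456));       // small integers are always exact
}
```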
@conradludgate this can actually be checked based on IEEE 754. Quoting the official Rust docs for `f64`:
> A 64-bit floating point type (specifically, the “binary64” type defined in IEEE 754-2008).
TL;DR: `f64` (binary64 in IEEE 754 terminology) has 53 significant bits, plus a sign bit. So any `i64` outside the range ±2^53 may lose precision when converted to `f64`. The exact range would need more detailed calculation, but this is the gist of it.
Looking further, there's an `f64::MANTISSA_DIGITS` associated constant.
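Putting that constant to use, a conservative range check could look like this (a sketch under the ±2^53 reasoning above; the function name is mine, not from the crate):

```rust
/// Conservative check: every integer with |n| <= 2^53 is exactly
/// representable in f64. Some larger integers are also exact (those
/// that are suitable multiples of powers of two), so this rejects a
/// few representable values, but it never accepts an inexact one.
fn is_definitely_exact(n: i64) -> bool {
    // f64::MANTISSA_DIGITS is 53, so this is 2^53.
    let max = 1i64 << f64::MANTISSA_DIGITS;
    (-max..=max).contains(&n)
}

fn main() {
    assert!(is_definitely_exact(1 << 53));
    assert!(!is_definitely_exact((1 << 53) + 1));
    assert!(is_definitely_exact(-(1i64 << 53)));
}
```

Erring on the safe side seems reasonable for a config crate: rejecting the handful of large-but-exact integers is cheaper than silently returning a wrong value.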