Fixnum division error
I ran into this bug while porting some code from 8-bit to 16- and 24-bit precision Fixnum.
Dividing a Fixnum by another Fixnum breaks when using 16- or 24-bit precision (some other precision values work and some break as well; I didn't test all possible values).
This is specific to Fixnum / Fixnum and does not happen with Fixnum / i32.
Here is the test I ran:
use agb::fixnum::{num, Num};

// FixedNum is assumed to be an alias over agb's fixed-point type with an i32 backing integer:
type FixedNum<const N: usize> = Num<i32, N>;

let a: FixedNum<8> = num!(4.0);
let b: FixedNum<8> = num!(2.0);
let c: FixedNum<8> = num!(3.0);
agb::println!("8 bits");
agb::println!("A: {} B: {} C: {}", a, b, c);
agb::println!("A * B = {}", a * b);
agb::println!("A * C = {}", a * c);
agb::println!("A / B = {}", a / b);
agb::println!("A / C = {}", a / c);
agb::println!("A / B.floor() = {}", a / b.floor());
agb::println!("A / C.floor() = {}", a / c.floor());

let a: FixedNum<16> = num!(4.0);
let b: FixedNum<16> = num!(2.0);
let c: FixedNum<16> = num!(3.0);
agb::println!("16 bits");
agb::println!("A: {} B: {} C: {}", a, b, c);
agb::println!("A * B = {}", a * b);
agb::println!("A * C = {}", a * c);
agb::println!("A / B = {}", a / b);
agb::println!("A / C = {}", a / c);
agb::println!("A / B.floor() = {}", a / b.floor());
agb::println!("A / C.floor() = {}", a / c.floor());

let a: FixedNum<24> = num!(4.0);
let b: FixedNum<24> = num!(2.0);
let c: FixedNum<24> = num!(3.0);
agb::println!("24 bits");
agb::println!("A: {} B: {} C: {}", a, b, c);
agb::println!("A * B = {}", a * b);
agb::println!("A * C = {}", a * c);
agb::println!("A / B = {}", a / b);
agb::println!("A / C = {}", a / c);
agb::println!("A / B.floor() = {}", a / b.floor());
agb::println!("A / C.floor() = {}", a / c.floor());
This is the output:
8 bits
A: 4 B: 2 C: 3
A * B = 8
A * C = 12
A / B = 2
A / C = 1.33203125
A / B.floor() = 2
A / C.floor() = 1.33203125
16 bits
A: 4 B: 2 C: 3
A * B = 8
A * C = 12
A / B = 0
A / C = 0
A / B.floor() = 2
A / C.floor() = 1.3333282470703125
24 bits
A: 4 B: 2 C: 3
A * B = 8
A * C = 12
A / B = 0
A / C = 0
A / B.floor() = 2
A / C.floor() = 1.333333313465118408203125
It isn't documented anywhere, nor does the code prevent you from doing it, but using >= 16 bits of precision with a 32-bit backing integer isn't going to work well for either multiplication or division: both operations need an intermediate value with up to 2N fractional bits, and once N >= 16 that intermediate no longer fits in the i32.
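Here is a minimal sketch of the suspected failure mode; it assumes division is implemented roughly as (raw_a << N) / raw_b on the backing integer, which is my reading of the behaviour rather than a quote of agb's source:

fn div_raw(raw_a: i32, raw_b: i32, n: u32) -> Option<i32> {
    // The numerator is pre-shifted by N bits before the integer divide;
    // this is the step that overflows once N >= 16.
    raw_a.checked_mul(1i32 << n)?.checked_div(raw_b)
}

fn main() {
    // 8-bit precision: 4.0 / 2.0 gives raw 512, i.e. 2.0. Fits comfortably.
    assert_eq!(div_raw(4 << 8, 2 << 8, 8), Some(2 << 8));
    // 16-bit precision: the numerator would be 4 * 2^32, which no longer fits
    // in an i32. With wrapping arithmetic the low 32 bits come out as 0, which
    // matches the "A / B = 0" output above.
    assert_eq!(div_raw(4 << 16, 2 << 16, 16), None);
}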
Alright, I'll use 15 bits for high precision then, thanks!
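For reference, one possible workaround sketch (my own code, not agb API): widen the intermediate to i64 so the pre-division shift can't overflow, then narrow the quotient back.

fn div_fixed_raw(raw_a: i32, raw_b: i32, n: u32) -> i32 {
    // 64-bit intermediate: (raw_a << n) fits even when n is 16 or 24.
    (((raw_a as i64) << n) / raw_b as i64) as i32
}

fn main() {
    // 4.0 / 3.0 at 16-bit precision: 87381 / 65536 = 1.3333282470703125,
    // matching the "A / C.floor()" value in the 16-bit output above.
    assert_eq!(div_fixed_raw(4 << 16, 3 << 16, 16), 87381);
}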
I'm going to leave this open as a doc issue and to track adding run-time or compile-time checks to avoid this.
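A compile-time check could look something like this sketch (the names are hypothetical and it assumes an i32 backing integer):

struct AssertPrecision<const N: usize>;

impl<const N: usize> AssertPrecision<N> {
    // Evaluated during const evaluation; a failing assert becomes a build error.
    const OK: () = assert!(N < 16, "precision must be below 16 bits for an i32 backing integer");
}

fn make_fixed<const N: usize>(value: i32) -> i32 {
    // Referencing the associated const forces the check for this N.
    let () = AssertPrecision::<N>::OK;
    value << N
}

fn main() {
    let a = make_fixed::<8>(4); // fine: 8 < 16
    // let b = make_fixed::<16>(4); // fails the build: assertion N < 16
    assert_eq!(a, 4 << 8);
}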