Discrepancy/Bug in the comparison operation
We suspect a possible bug in the recoding path for double-precision inputs. The bug appears to be in the fNToRecFN module. The two inputs a and b being compared in the attached JasperGold snapshot are not equal, and our testbench also computes that they are not equal. However, the recoding module appears to make them identical after recoding. We expected a.out and b.out to mirror a and b by remaining unequal.
What configuration was used to instantiate the DUT? Can you share the source of the testbench? What is strange is that both conversion results look false in your trace.
$ python3 script_hardfloat_ieee.py --input-format f64 ieeetorecfn 0x0000_0000_0000_0003
0x3cf8000000000000
$ python3 script_hardfloat_ieee.py --input-format f64 ieeetorecfn 0x0000_0002_0000_0003
0x3ef0000000180000
(using https://github.com/nibrunieAtSi5/floatConverter)
It looks like the conversion of 0x0000_0000_0000_0003 is the incorrect one.
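For reference, the recoded values above can be cross-checked against a small Python model of the f64-to-recoded conversion. This is a sketch based on the recoded format described in the HardFloat documentation (1 sign bit, an (expWidth+1)-bit exponent biased by 2^expWidth, and the normalized low sigWidth-1 fraction bits); the special-value encodings in particular are an assumption on my part, not taken from fNToRecFN.v. It reproduces the two floatConverter results above.

```python
MASK52 = (1 << 52) - 1

def f64_to_recf64(bits):
    """Sketch of f64 -> recoded-f64 for expWidth=11, sigWidth=53.

    Assumed recoded layout (65 bits): sign | 12-bit exponent | 52-bit fraction,
    with the recoded exponent biased by 2**11 = 2048 instead of 1023.
    """
    sign = bits >> 63
    exp = (bits >> 52) & 0x7FF
    frac = bits & MASK52
    if exp == 0:
        if frac == 0:
            rec_exp, rec_frac = 0, 0  # zero: high exponent bits are 000
        else:
            # Subnormal: normalize so the leading 1 becomes the implicit bit.
            # True exponent of frac * 2**-1074 is (frac.bit_length() - 1) - 1074.
            rec_exp = frac.bit_length() - 1 - 1074 + 2048
            rec_frac = (frac << (53 - frac.bit_length())) & MASK52
    elif exp == 0x7FF:
        # Assumed special encodings: top 3 exponent bits 110 = inf, 111 = NaN.
        rec_exp = 0xE00 if frac else 0xC00
        rec_frac = frac
    else:
        # Normal: re-bias the exponent from 1023 to 2048.
        rec_exp = exp - 1023 + 2048
        rec_frac = frac
    return (sign << 64) | (rec_exp << 52) | rec_frac

print(hex(f64_to_recf64(0x0000000000000003)))  # 0x3cf8000000000000
print(hex(f64_to_recf64(0x0000000200000003)))  # 0x3ef0000000180000
```

Since the two recoded values differ, these two subnormal inputs should stay distinct after recoding, which is consistent with the suspicion that one of the conversions in the trace is wrong.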
The configuration is 64-bit (with 8 expWidth and 24 sigWidth).
The testbench is a formal one: we are comparing the hardfloat model against our own formal model. The two models ended up disagreeing for certain values of the primary inputs a and b, and the root cause of the disagreement is the fNToRecFN recoding module in hardfloat. The recoded versions of the two inputs are the same, which does not make sense. From your explanation, it sounds like there is a bug in the recoding module. Are we correct in that understanding?
I did not say that there was a bug; I am not aware of any known issues. 64-bit with an 8-bit exponent and 24-bit significand sounds strange. Why not an 11-bit exponent and 53-bit significand?
Yes, you are right. I meant to write 11-bit exponent and 53-bit significand.
Is the formal testbench written in Chisel? Can you share the source code? (Just the parts that wrap the modules and call the conversion functions.)
The testbench uses formal assertions. The mismatch between the design and the formal model is exposed when the exponent field is all zeros (subnormals). When we constrain the assertion to cover all cases except subnormals, the assertions pass. Happy to share the formal testbench in a conference call tomorrow.
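For concreteness, the "exponent all zeros" condition being excluded corresponds to the standard subnormal classification below (a sketch; the function name is illustrative, not taken from the testbench). Note that both values converted by the script earlier in this thread fall into this class:

```python
def is_subnormal_f64(bits):
    """True when the 11-bit exponent field is all zeros and the
    52-bit fraction is nonzero (zero itself is not subnormal)."""
    exp = (bits >> 52) & 0x7FF
    frac = bits & ((1 << 52) - 1)
    return exp == 0 and frac != 0

print(is_subnormal_f64(0x0000000000000003))  # True
print(is_subnormal_f64(0x0000000200000003))  # True
print(is_subnormal_f64(0x3FF0000000000000))  # False (1.0 is normal)
```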
I can't find the module fNToRecFN in the source code; could you point me to it?
Before doing a conference call, I think it would be good to share the setup so people can inspect it; given the possible timezone difference, a call may not be very practical. (Sorry to press about sharing code, but I would really like to see the testbench, in particular how and where the conversions are instantiated and configured. The assertions can be shared later.)
Our testbench checks the equivalence of hardfloat against an SVA model. The hardfloat comparison operation works on recoded values, not on the IEEE 754 encoding, so we put together a wrapper that accepts IEEE 754 input values and then tests the FP operations, including comparison. The module we suspect has an issue is the hardfloat RTL module fNToRecFN, which does not produce the correct recoded output for subnormals in the 64-bit case.
fNToRecFN.v is a file in the source rtl folder.
The wrapper looks something like this:

    module compare #(
        parameter expWidth = 11,
        parameter sigWidth = 53
    ) (
        input  [(expWidth + sigWidth - 1):0] a,
        input  [(expWidth + sigWidth - 1):0] b,
        input  signaling,
        output lt,
        output eq,
        output gt,
        output unordered,
        output [4:0] exceptionFlags
    );
        wire [(expWidth + sigWidth):0] a_raw;
        wire [(expWidth + sigWidth):0] b_raw;

        fNToRecFN #(expWidth, sigWidth) fNToRecFN_a (.in(a), .out(a_raw));
        fNToRecFN #(expWidth, sigWidth) fNToRecFN_b (.in(b), .out(b_raw));

        compareRecFN #(expWidth, sigWidth) compareRecFN_called (
            .a(a_raw), .b(b_raw), .signaling(signaling),
            .lt(lt), .eq(eq), .gt(gt),
            .unordered(unordered), .exceptionFlags(exceptionFlags)
        );
    endmodule
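As a cross-check for the eq output of such a wrapper, a minimal IEEE-level reference (a sketch, assuming the host's doubles follow IEEE 754 and considering quiet equality only) reinterprets the 64-bit patterns as doubles and compares them directly:

```python
import struct

def ieee_eq_f64(a_bits, b_bits):
    """Reference for a quiet eq comparison: reinterpret the 64-bit
    patterns as doubles. NaN compares unequal; +0.0 == -0.0."""
    a = struct.unpack('<d', struct.pack('<Q', a_bits))[0]
    b = struct.unpack('<d', struct.pack('<Q', b_bits))[0]
    return a == b

# The two subnormal patterns from this thread are unequal at the
# IEEE level, so the wrapper's eq output should be 0 for them.
print(ieee_eq_f64(0x0000000000000003, 0x0000000200000003))  # False
print(ieee_eq_f64(0x0000000000000000, 0x8000000000000000))  # True (+0 == -0)
```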
OK, I think I am starting to understand: you are referring to the Verilog version of HardFloat (Release 1, from https://www.jhauser.us/arithmetic/HardFloat.html), not the Chisel version in this repository.
Is this correct? (I was able to find fNToRecFN.v in https://www.jhauser.us/arithmetic/HardFloat-1.zip, but not in this repository.)
Thank you for your reply. I will add it to the other repository.
Any update on this? I use the Verilog HardFloat, and if there is a discrepancy, I would like to know more about it.