hls4ml
GarNet import of hls_math not working
Hi @vloncar @yiiyama @jmduarte! I'm trying to synthesize a simple GarNet model (one GarNet layer) using this branch. I get an error related to an import of hls_math.h:
```
Writing HLS project
Done
In file included from firmware/parameters.h:16:0,
                 from firmware/myproject.cpp:22:
firmware/nnet_utils/nnet_garnet.h:25:22: fatal error: hls_math.h: No such file or directory
 #include "hls_math.h"
```
The hls_math.h header is included by the GarNet layer template here and is used to compute an exponential.
Could this header file be added to the project or the exponential replaced by a look-up table?
Do you see this when trying to hls_model.compile() or hls_model.build()?
In hls_model.compile()
Could you try hls_model.build()? I think it's because the hls_math.h header file is not copied over. It should work when you use build, i.e. compile it with Vivado. You'll then need to write the test data to a file and use csim to test, but you can still do it all from Python.
As a quick fix suggested by @vloncar, I am currently pointing the compiler at the Vivado headers, which works. However, @vloncar noted there might be licensing issues with shipping the header file as a permanent solution.
Sorry @thaarres somehow GitHub didn't send me a notification when you mentioned me above or I accidentally deleted the email - anyway I only saw this issue today.
How about adding a compiler flag like -DNO_VIVADO in build_lib.sh? Then I can switch the include in nnet_garnet between hls_math.h and <cmath>. What do people think about this?
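For illustration, a minimal sketch of what that switch might look like (the macro name and placement are assumptions for this sketch; the real change would live in nnet_garnet.h and build_lib.sh):

```cpp
// Hypothetical sketch of the proposed include switch in nnet_garnet.h.
// build_lib.sh (used by hls_model.compile()) would add -DNO_VIVADO to the
// g++ flags; Vivado HLS builds would leave it undefined.
#ifdef NO_VIVADO
#include <cmath>      // standalone g++ build: standard C math library
#define GARNET_EXP(x) std::exp(x)
#else
#include "hls_math.h" // Vivado HLS build: bit-accurate hls::exp()
#define GARNET_EXP(x) hls::exp(x)
#endif
```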
I included this -DNO_VIVADO fix in #344
With -DNO_VIVADO we will get different results from the Python simulation (the compile() feature) and the C simulation/synthesis from Vivado. In other layers we use lookup tables for expensive math functions; is there a reason that won't work here?
hls_math.h is used for hls::exp() in this code, and the function is called to fill a lookup table, with the filling controlled by a static boolean flag. So I think the usage pattern is exactly the same as in other layers. The reason I didn't use std::exp() was that I saw this in the Vivado reference:

> If the HLS math library is used in the C source code, there will be no difference between the C simulation and the C/RTL co-simulation. A C simulation using the HLS math library may, however, differ from a C simulation using the standard C math library.
But now that I think about it:
- Because the function call is not at run time, maybe we do get identical results between C sim and co-sim even with std::exp()?
- Even if we don't, I'm OK with using std::exp() if that's preferable compared to -DNO_VIVADO.
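The fill-once pattern described above can be sketched roughly like this (table size, range, and function names are made up for illustration; the actual GarNet template differs). The point is that the expensive exp() call happens only during the one-time table fill, never on the inference path:

```cpp
#include <cmath>

// Minimal sketch of a lookup table for exp(-x), filled on first use.
constexpr int N_TABLE = 1024;
constexpr float RANGE = 8.0f; // table covers x in [0, RANGE)

inline float exp_table_lookup(float x) {
    static float table[N_TABLE];
    static bool initialized = false; // the "static boolean flag"
    if (!initialized) {
        // One-time fill; could equally be hls::exp() in an HLS build.
        for (int i = 0; i < N_TABLE; i++) {
            float xi = RANGE * i / N_TABLE;
            table[i] = std::exp(-xi);
        }
        initialized = true;
    }
    // Inference path: index into the table, clamping out-of-range inputs.
    int idx = static_cast<int>(x / RANGE * N_TABLE);
    if (idx < 0) idx = 0;
    if (idx >= N_TABLE) idx = N_TABLE - 1;
    return table[idx];
}
```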
Re: https://github.com/fastmachinelearning/hls4ml/issues/365#issuecomment-898314380 See https://github.com/fastmachinelearning/hls4ml/issues/344#issuecomment-898813561 for a follow-up on the Python API vs csim differences with the current approach.
I agree it would be nice to guarantee these two agree.
@yiiyama it'd be interesting to see if we can just use std::exp() to fill the lookup table and preserve python/csim/cosim matching. Alternatively, maybe the issue is just coming from the cast to double before std::exp(): https://github.com/fastmachinelearning/hls4ml/pull/344/files#r688860255
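To make the cast question concrete, here is a small sketch (function names are mine, for illustration only) of the two evaluation paths being compared. Rounding the double result back to float can, in principle, differ from the float overload in the last ulp, which would be enough to break exact agreement between the two simulations:

```cpp
#include <cmath>

// Path 1: argument is cast to double, exp evaluated in double precision,
// result rounded back to float.
inline float exp_via_double(float x) {
    return static_cast<float>(std::exp(static_cast<double>(x)));
}

// Path 2: float overload of std::exp, evaluated in single precision.
inline float exp_direct(float x) {
    return std::exp(x);
}
```

Comparing the outputs of the two paths over the relevant input range would show whether the cast alone explains the mismatch.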
Fixed by #344