
HLS configuration causes weights of the resulting model to be all zeros, reducing the accuracy of the HLS model

Open wilfredkisku opened this issue 2 years ago • 2 comments

Creating an HLS configuration of the baseline CNN model results in all weights being reported as zero in the profiling output below; I suspect this is what causes the accuracy to drop steeply from 75% to 10%.

Is there anything that I am doing wrong or missing?

Creating HLS model
Profiling weights (before optimization)
Weights for conv2d are only zeros, ignoring.
Weights for batch_normalization are only zeros, ignoring.
Weights for conv2d_1 are only zeros, ignoring.
Weights for batch_normalization_1 are only zeros, ignoring.
Weights for conv2d_2 are only zeros, ignoring.
Weights for batch_normalization_2 are only zeros, ignoring.
Weights for conv2d_3 are only zeros, ignoring.
Weights for batch_normalization_3 are only zeros, ignoring.
Weights for conv2d_4 are only zeros, ignoring.
Weights for batch_normalization_4 are only zeros, ignoring.
Weights for conv2d_5 are only zeros, ignoring.
Weights for batch_normalization_5 are only zeros, ignoring.
Weights for conv2d_6 are only zeros, ignoring.
Weights for batch_normalization_6 are only zeros, ignoring.
Weights for conv2d_7 are only zeros, ignoring.
Weights for batch_normalization_7 are only zeros, ignoring.
Weights for output_dense are only zeros, ignoring.
Profiling weights (final / after optimization)
Weights for conv2d are only zeros, ignoring.
Weights for batch_normalization are only zeros, ignoring.
Weights for conv2d_1 are only zeros, ignoring.
Weights for batch_normalization_1 are only zeros, ignoring.
Weights for conv2d_2 are only zeros, ignoring.
Weights for batch_normalization_2 are only zeros, ignoring.
Weights for conv2d_3 are only zeros, ignoring.
Weights for batch_normalization_3 are only zeros, ignoring.
Weights for conv2d_4 are only zeros, ignoring.
Weights for batch_normalization_4 are only zeros, ignoring.
Weights for conv2d_5 are only zeros, ignoring.
Weights for batch_normalization_5 are only zeros, ignoring.
Weights for conv2d_6 are only zeros, ignoring.
Weights for batch_normalization_6 are only zeros, ignoring.
Weights for conv2d_7 are only zeros, ignoring.
Weights for batch_normalization_7 are only zeros, ignoring.
Weights for output_dense are only zeros, ignoring.
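
For reference, here is a minimal sketch of the kind of conversion and profiling flow that produces the log above (the model file, FPGA part, and test data are illustrative placeholders, not my exact setup):

```python
import numpy as np
import hls4ml
from tensorflow import keras

# Load the trained baseline CNN (file name is a placeholder)
model = keras.models.load_model('baseline_cnn.h5')

# Per-layer hls4ml configuration; precision/reuse values are left at their defaults
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Convert the Keras model to an hls4ml model (FPGA part is just an example)
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    output_dir='hls4ml_prj',
    part='xcu250-figd2104-2L-e',
)

# Weight profiling -- this is the step that emits the
# "Weights for <layer> are only zeros, ignoring." messages shown above
hls4ml.model.profiling.numerical(model=model, hls_model=hls_model)

# Compile the C simulation and compare against Keras on some test data
hls_model.compile()
X_test = np.random.rand(100, 32, 32, 3).astype(np.float32)  # placeholder input
y_keras = model.predict(X_test)
y_hls = hls_model.predict(np.ascontiguousarray(X_test))
```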

wilfredkisku avatar Jun 08 '22 13:06 wilfredkisku

Hi, have you solved this problem?

liuhao-97 avatar Jul 03 '22 06:07 liuhao-97

Hi @liuhao-97, I have not solved it yet. The accuracy drop occurs for a skip-connection-based architecture. The last thing I found, in one of the hls4ml community threads, is that merge-related operations (Concatenate and Add) are now supported in hls4ml.
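
For context, the skip-connection blocks look roughly like this (a simplified Keras sketch with placeholder shapes and filter counts, not the exact model):

```python
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters):
    # Simplified skip-connection block: two conv/BN stages merged with Add
    shortcut = x
    y = layers.Conv2D(filters, 3, padding='same')(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation('relu')(y)
    y = layers.Conv2D(filters, 3, padding='same')(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])  # merge op that hls4ml needs to support
    return layers.Activation('relu')(y)

inputs = keras.Input(shape=(32, 32, 3))  # placeholder input shape
x = layers.Conv2D(16, 3, padding='same')(inputs)
x = layers.BatchNormalization()(x)
x = layers.Activation('relu')(x)
x = residual_block(x, 16)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, name='output_dense')(x)
model = keras.Model(inputs, outputs)
```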

Please do share any insights you may have on the issue I am facing.

wilfredkisku avatar Jul 03 '22 06:07 wilfredkisku