lm_fpga.elf of LMBiSeNetQuantize doesn't work
lm_fpga.elf does not work with LMBiSeNetQuantize. The program gets stuck (freezes) when we run it on the FPGA.
It hangs at the following point in the generated code:
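// Generated parameters for context_merge_attention_32_conv_conv2d_Conv2D:
// a quantized 1x1 convolution with 512 input and 512 output channels,
// applied to a 1x1 spatial input.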
Conv2D_struct.input_height = 1;
Conv2D_struct.input_width = 1;
Conv2D_struct.kernel_height = 1;
Conv2D_struct.kernel_width = 1;
Conv2D_struct.kernel_depth = 512;
Conv2D_struct.kernel_elements = 512;
Conv2D_struct.output_channels = 512;
Conv2D_struct.output_height = 1;
Conv2D_struct.output_width = 1;
Conv2D_struct.padding = 0;
Conv2D_struct.stride_along_height = 1;
Conv2D_struct.stride_along_width = 1;
Conv2D_struct.temporary_buf = qconv_tmp_buffer.get();
binConv2D_struct.normal_conv_params = Conv2D_struct;
binConv2D_struct.bin_input_extra_bits = 0;
binConv2D_struct.bin_input_bitwidth = 2;
binConv2D_struct.bin_kernel_ndata = 8192;
binConv2D_struct.bin_input_nwords = 8192;
binConv2D_struct.bin_input_ndata = 8192*2;
binConv2D_struct.device_input_buf = device_input_buf;
binConv2D_struct.device_output_buf = device_output_buf;
binConv2D_struct.thresholds = nullptr;
binConv2D_struct.n_bit = 2;
binConv2D_struct.max_value = 2.0;
binConv2D_struct.debug_name = "context_merge_attention_32_conv_conv2d_Conv2D";
#ifdef RUN_ON_FPGA
binConv2D_struct.device_kernel_phys_addr = KERNEL_ADDR + context_merge_attention_32_conv_conv2d_Conv2D_kernel_offset;
binConv2D_struct.device_thresholds_phys_addr = 0;
#endif
func_QuantizedConv2D(
    context_merge_attention_32_QTZ_linear_mid_tread_half_output,
    context_merge_attention_32_conv_conv2d_kernel_1_BinaryMeanScalingQuantizer_new_output,
    context_merge_attention_32_conv_conv2d_Conv2D_Y,
    scaling_factors::context_merge_attention_32_conv_conv2d_Conv2D,
    binConv2D_struct);
From this we can see that the error occurs in the attention block of LMBiSeNet.
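For context, the parameters in the dump (1x1 spatial input, 1x1 kernel, 512 input and output channels) are consistent with the convolution that follows global average pooling in a BiSeNet-style attention refinement module. Below is a minimal sketch of such a module, assuming the standard BiSeNet design; `attention_refinement` is a hypothetical helper, and blueoil's actual implementation may differ:

```python
import tensorflow as tf

def attention_refinement(x):
    # Global average pooling collapses H and W to 1x1; this matches
    # input_height = input_width = 1 in the generated Conv2D_struct.
    pooled = tf.reduce_mean(x, axis=[1, 2], keepdims=True)
    # Pointwise (1x1) convolution over the 512-channel pooled vector,
    # matching kernel_height = kernel_width = 1 and kernel_depth = 512.
    attn = tf.keras.layers.Conv2D(512, kernel_size=1)(pooled)
    attn = tf.keras.layers.BatchNormalization()(attn)
    attn = tf.math.sigmoid(attn)
    # Re-weight the input feature map channel-wise.
    return x * attn
```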
The error disappears if NETWORK.USE_ATTENTION_REFINEMENT = False is set as an option in the configuration.
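For reference, this is roughly what the workaround looks like in a blueoil Python configuration file; the EasyDict-based layout is an assumption, and all surrounding keys are omitted:

```python
from easydict import EasyDict

NETWORK = EasyDict()
# Workaround: disable the attention refinement block so the generated
# lm_fpga.elf never reaches the hanging quantized 1x1 convolution.
NETWORK.USE_ATTENTION_REFINEMENT = False
```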
Should we improve this?