
BatchNormalization fails conversion if it is untrainable

ChiRuiChen opened this issue 3 years ago • 0 comments

According to Keras, gamma and beta are learnable parameters of the BatchNormalization layer. As a result, they are set to None when the BatchNormalization layer is untrainable.
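For context, a minimal sketch of the kind of model that hits this (the layer names, shapes, and config call below are illustrative, not taken from an actual reproduction):

import hls4ml
from tensorflow.keras.layers import Input, Dense, BatchNormalization
from tensorflow.keras.models import Model

inputs = Input(shape=(16,))
x = Dense(8)(inputs)
# Untrainable BatchNormalization: gamma/beta come back as None during conversion
x = BatchNormalization(trainable=False, name='bn')(x)
model = Model(inputs, x)

config = hls4ml.utils.config_from_keras_model(model, granularity='model')
# Fails in BatchNormalization.initialize() without the fallback suggested below
hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='hls4ml_prj'
)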

Currently hls4ml doesn't handle this situation in model/layers.py, so I suggest adding these lines:

if gamma is None:
    gamma = 1
if beta is None:
    beta = 0

Like this:

class BatchNormalization(Layer):
    _expected_attributes = [
        Attribute('n_in'),
        Attribute('n_filt', default=0),

        WeightAttribute('scale'),
        WeightAttribute('bias'),

        TypeAttribute('scale'),
        TypeAttribute('bias'),
    ]

    def initialize(self):
        inp = self.get_input_variable()
        shape = inp.shape
        dims = inp.dim_names
        self.add_output_variable(shape, dims)

        gamma = self.model.get_weights_data(self.name, 'gamma')
        beta = self.model.get_weights_data(self.name, 'beta')
        mean = self.model.get_weights_data(self.name, 'moving_mean')
        var = self.model.get_weights_data(self.name, 'moving_variance')
        
        # If the layer is not trainable, gamma and beta are None initially
        if gamma is None:
            gamma = 1
        if beta is None:
            beta = 0

        scale = gamma / np.sqrt(var + self.get_attr('epsilon'))
        bias = beta - gamma * mean / np.sqrt(var + self.get_attr('epsilon'))

        self.add_weights_variable(name='scale', var_name='s{index}', data=scale)
        self.add_weights_variable(name='bias', var_name='b{index}', data=bias)
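
With these defaults the folded parameters reduce to scale = 1 / sqrt(var + epsilon) and bias = -mean / sqrt(var + epsilon), i.e. plain normalization with the moving statistics. A quick numpy check of that equivalence (the values below are made up for illustration):

import numpy as np

mean = np.array([0.1, -0.2, 0.0, 0.5])  # hypothetical moving_mean
var = np.array([1.0, 0.5, 2.0, 0.25])   # hypothetical moving_variance
eps = 1e-3
gamma, beta = 1, 0                      # the proposed fallback defaults

scale = gamma / np.sqrt(var + eps)
bias = beta - gamma * mean / np.sqrt(var + eps)

x = np.random.randn(8, 4)
# hls4ml's folded form equals the textbook batch norm with gamma=1, beta=0
assert np.allclose(x * scale + bias, (x - mean) / np.sqrt(var + eps))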

ChiRuiChen · Aug 04 '22