
Bad Accuracy when converting QKeras Model

Open · LordScarface opened this issue on Sep 16, 2021 · 2 comments

Hello again,

I'm trying to convert a QKeras (and later an AutoQKeras) model using hls4ml. The unquantized model converts just fine and the accuracy of the converted HLS model matches the input Keras model. When I convert the QKeras model, however, the accuracy of the HLS model drops to ~10% (vs. ~99% for the quantized QKeras model evaluated in software).

The models I used for testing (MNIST dataset):

- Keras model: ~99% accuracy when converted with hls4ml
- QKeras model: ~10% accuracy when converted with hls4ml

My code for loading and evaluating the models:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import *
from tensorflow.keras.utils import to_categorical
import numpy as np
import tensorflow as tf  # needed for tf.keras.models.load_model below

from qkeras.autoqkeras import *
from qkeras import *
from qkeras.utils import model_quantize
from qkeras.qtools import run_qtools
from qkeras.qtools import settings as qtools_settings

from sklearn.metrics import accuracy_score

import hls4ml

from qkeras.utils import _add_supported_quantized_objects

def print_dict(d, indent=0):
    align = 20
    for key, value in d.items():
        print('  ' * indent + str(key), end='')
        if isinstance(value, dict):
            print()
            print_dict(value, indent + 1)
        else:
            print(':' + ' ' * (align - len(key) - 2 * indent) + str(value))

def get_dataset():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    
    x_train = (x_train/255.0).astype(np.float32)
    x_test = (x_test/255.0).astype(np.float32)
    
    x_train = x_train.reshape(x_train.shape[0], 28,28,1)
    x_test = x_test.reshape(x_test.shape[0], 28,28,1)
        
    y_train = to_categorical(y_train, 10)
    y_test = to_categorical(y_test, 10)
    
    return (x_train, y_train), (x_test, y_test)

(x_train, y_train), (x_test, y_test) = get_dataset()

co = {}
_add_supported_quantized_objects(co)

# hls4ml_input_model = tf.keras.models.load_model('keras_mnist.h5', custom_objects=co)
hls4ml_input_model = tf.keras.models.load_model('qkeras_mnist_quant.h5', custom_objects=co)

cfg = hls4ml.utils.config_from_keras_model(hls4ml_input_model, granularity='name')

for layer in cfg['LayerName'].keys():
    cfg['LayerName'][layer]['Trace'] = True

print_dict(cfg)

hls_model = hls4ml.converters.convert_from_keras_model(hls4ml_input_model,
                                                       hls_config=cfg,
                                                       io_type='io_stream',
                                                       output_dir='mnist_qkeras', # or mnist_keras
                                                       fpga_part='xczu7ev-ffvc1156-2-e')
hls_model.compile()

from sklearn.metrics import accuracy_score

y_hls = hls_model.predict(np.ascontiguousarray(x_test))

print("hls4ml Accuracy: {}".format(accuracy_score(np.argmax(y_test, axis=1), np.argmax(y_hls, axis=1))))

LordScarface commented on Sep 16, 2021

Hi, have you tried using the profiling & tracing tools in hls4ml to pinpoint the first layer that has an issue? Perhaps you can include some of the plots that are generated. If you haven't used them before, they are introduced in part 2 of the hls4ml tutorial and covered in the hls4ml documentation. The compare method in the profiling module may also be helpful.
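
Something like this, roughly (a minimal sketch, assuming the numerical profiling API from the tutorial and the variable names from your script):

import matplotlib.pyplot as plt
import hls4ml

# Profile the weights and activations of the Keras model against the
# fixed-point types chosen in the hls4ml config. The boxes in the plots
# should sit inside the grey bands; layers that fall outside are the ones
# to look at first. (The number and type of returned figures differs
# between hls4ml releases, so the return value is kept generic here.)
figs = hls4ml.model.profiling.numerical(model=hls4ml_input_model,
                                        hls_model=hls_model,
                                        X=x_test[:1000])
plt.show()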

thesps commented on Sep 27, 2021

Hi, and thank you for the reply!

I tried using the profiling tools; I think in particular you are referring to the hls4ml.model.profiling.compare(...) method? Since my last post I built a new, smaller model that shows the same problem, so I'll use that here. Both models are summarized below.

I'm using release version 0.5.0 of hls4ml.

Model Summary of Keras Model:

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 26, 26, 16)        160       
_________________________________________________________________
activation (Activation)      (None, 26, 26, 16)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 24, 24, 2)         290       
_________________________________________________________________
activation_1 (Activation)    (None, 24, 24, 2)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1152)              0         
_________________________________________________________________
dense (Dense)                (None, 10)                11530     
_________________________________________________________________
activation_2 (Activation)    (None, 10)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 10)                110       
_________________________________________________________________
activation_3 (Activation)    (None, 10)                0         
=================================================================
Total params: 12,090
Trainable params: 12,090
Non-trainable params: 0
_________________________________________________________________

Model Summary of AutoQKeras Model:

Model: "functional_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 28, 28, 1)]       0         
_________________________________________________________________
conv2d (QConv2D)             (None, 26, 26, 16)        160       
_________________________________________________________________
activation (QActivation)     (None, 26, 26, 16)        0         
_________________________________________________________________
conv2d_1 (QConv2D)           (None, 24, 24, 2)         290       
_________________________________________________________________
activation_1 (QActivation)   (None, 24, 24, 2)         0         
_________________________________________________________________
flatten (Flatten)            (None, 1152)              0         
_________________________________________________________________
dense (QDense)               (None, 10)                11530     
_________________________________________________________________
activation_2 (QActivation)   (None, 10)                0         
_________________________________________________________________
dense_1 (QDense)             (None, 10)                110       
_________________________________________________________________
activation_3 (Activation)    (None, 10)                0         
=================================================================
Total params: 12,090
Trainable params: 12,090
Non-trainable params: 0
_________________________________________________________________

For both models the accuracy degrades from the input model to the converted HLS model, and apparently by the same amount(?). Here are the results of the compare profiling:

Keras model (96.03%) vs. Keras HLS model (80.8%): [plots attached: keras_dist-diff, keras_norm-diff]

AutoQKeras model (96.16%) vs. AutoQKeras HLS model (80.8%): [plots attached: keras_autoQ_dist-diff, keras_autoQ_norm-diff]
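
If the plots end up pointing at fixed-point types that are too narrow, one follow-up I can try is widening the per-layer precision in the config before converting. A minimal sketch (layer names taken from the model summaries above, the precision values are just placeholders to experiment with):

# Start from the auto-generated config and widen the types of a suspect layer.
# With granularity='name' the per-layer 'Precision' entry is a dict whose exact
# keys (e.g. 'weight', 'bias', 'result') can be checked with print_dict(cfg).
cfg = hls4ml.utils.config_from_keras_model(hls4ml_input_model, granularity='name')
cfg['Model']['Precision'] = 'ap_fixed<16,6>'                         # global default
cfg['LayerName']['conv2d']['Precision']['weight'] = 'ap_fixed<16,6>'
cfg['LayerName']['conv2d']['Precision']['result'] = 'ap_fixed<18,8>'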

I'll also attach the code I used for testing:

from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import *
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import *
from tensorflow.keras.utils import to_categorical
import numpy as np
import tensorflow as tf  # needed for tf.keras.models.load_model below

from qkeras.autoqkeras import *
from qkeras import *
from qkeras.utils import model_quantize
from qkeras.qtools import run_qtools
from qkeras.qtools import settings as qtools_settings

from sklearn.metrics import accuracy_score

import hls4ml

from qkeras.utils import _add_supported_quantized_objects

def print_dict(d, indent=0):
    align = 20
    for key, value in d.items():
        print('  ' * indent + str(key), end='')
        if isinstance(value, dict):
            print()
            print_dict(value, indent + 1)
        else:
            print(':' + ' ' * (align - len(key) - 2 * indent) + str(value))

def get_dataset():
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    
    x_train = (x_train/255.0).astype(np.float32)
    x_test = (x_test/255.0).astype(np.float32)
    
    x_train = x_train.reshape(x_train.shape[0], 28,28,1)
    x_test = x_test.reshape(x_test.shape[0], 28,28,1)
        
    y_train = to_categorical(y_train, 10)
    y_test = to_categorical(y_test, 10)
    
    return (x_train, y_train), (x_test, y_test)

(x_train, y_train), (x_test, y_test) = get_dataset()

co = {}
_add_supported_quantized_objects(co)

model_prefix = 'keras_autoQ_' # or keras_

hls4ml_input_model = tf.keras.models.load_model(model_prefix+'mnist.h5', custom_objects=co)

cfg = hls4ml.utils.config_from_keras_model(hls4ml_input_model, granularity='name')

for layer in cfg['LayerName'].keys():
    cfg['LayerName'][layer]['Trace'] = True

print_dict(cfg)

hls_model = hls4ml.converters.convert_from_keras_model(hls4ml_input_model,
                                                       hls_config=cfg,
                                                       io_type='io_stream',
                                                       output_dir='mnist_qkeras', # or mnist_keras
                                                       fpga_part='xczu7ev-ffvc1156-2-e')
#hls_model.compile()

from sklearn.metrics import accuracy_score
from hls4ml.model.profiling import numerical
import matplotlib.pyplot as plt
import yaml

hls4ml_input_model.summary()

hls4ml.model.profiling.compare(keras_model=hls4ml_input_model, hls_model=hls_model, X=x_test, plot_type='norm_diff')
plt.savefig(model_prefix+'norm-diff.png')

hls4ml.model.profiling.compare(keras_model=hls4ml_input_model, hls_model=hls_model, X=x_test, plot_type='dist_diff')
plt.savefig(model_prefix+'dist-diff.png')

hls_model.compile()

x_test_reduced = x_test[:3000]
y_test_reduced = y_test[:3000]

y_predict_aq        = hls4ml_input_model.predict(x_test_reduced)
y_predict_hls4ml_aq = hls_model.predict(np.ascontiguousarray(x_test_reduced))

accuracy_keras  = float(accuracy_score (np.argmax(y_test_reduced,axis=1), np.argmax(y_predict_aq,axis=1)))
accuracy_hls4ml = float(accuracy_score (np.argmax(y_test_reduced,axis=1), np.argmax(y_predict_hls4ml_aq,axis=1)))

print("Accuracy Model input:  {}".format(accuracy_keras))
print("Accuracy hls4ml: {}".format(accuracy_hls4ml))

LordScarface commented on Sep 27, 2021