hls4ml
keras tf layer Dot operation not supported?
I am trying to make a GCN model that is compatible with this package. I am trying to use tf.keras.layers.Dot(axes=-1) as the adjacency-matrix multiplication for my GCN model, and I have added it in my custom Keras Model. To be fair, I have done some funky things with it to match the to_json() requirements so that hls4ml.converters.convert_from_keras_model() would work. Just like in the tutorial, I ran:
```python
config = hls4ml.utils.config_from_keras_model(customModel, granularity='model')
print(config)
print("-----------------------------------")
print("Configuration")
plotting.print_dict(config)
print("-----------------------------------")
hls_model = hls4ml.converters.convert_from_keras_model(customModel,
                                                       hls_config=config,
                                                       output_dir='model_test/hls4ml_prj',
                                                       fpga_part='xcu250-figd2104-2L-e')
```
But the notebook raises an error on the line ending in `fpga_part='xcu250-figd2104-2L-e')`, with this traceback:

```
~/anaconda3/envs/hls4ml/lib/python3.7/site-packages/hls4ml/converters/keras/merge.py in parse_merge_layer(keras_layer, input_names, input_shapes, data_reader, config)
     21     rank = len(input_shapes[0][1:])
     22     if rank > 1:
---> 23         raise Exception('ERROR: Dot of tensors with rank > 1 is not yet supported.')
     24     layer['op'] = layer['class_name'].lower() + '{}d'.format(rank)
     25     else:

Exception: ERROR: Dot of tensors with rank > 1 is not yet supported.
```
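For context, the check that raises this exception (lines 21–23 of the traceback above) counts the tensor axes after dropping the batch dimension. A stand-alone sketch of that logic, using a hypothetical `(None, 3, 3)` shape like the one a 3×3 adjacency input would produce:

```python
# Sketch of the rank check from hls4ml/converters/keras/merge.py, as quoted
# in the traceback above. The shapes below are hypothetical stand-ins.
def check_dot_rank(input_shape):
    # Drop the batch dimension (first entry), then count the remaining axes.
    rank = len(input_shape[1:])
    if rank > 1:
        raise Exception('ERROR: Dot of tensors with rank > 1 is not yet supported.')
    return rank

print(check_dot_rank((None, 3)))   # rank-1 inputs pass the check
try:
    check_dot_rank((None, 3, 3))   # a (batch, 3, 3) input trips the exception
except Exception as e:
    print(e)
```

So any Dot whose inputs keep two or more non-batch axes, as a matrix-valued adjacency input does, currently hits this code path.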
Is this an inherent limitation of the hls4ml package with respect to the MatMul operation, or is there something I am overlooking, and it's actually possible to make a GCN that's compatible with hls4ml?
Here's my custom tf.keras model if anyone's curious:
```python
import copy
import tensorflow as tf

class Model(tf.keras.Model):
    def __init__(self, feat_n):
        super(Model, self).__init__(name="Model")
        self.bn_ = tf.keras.layers.BatchNormalization()
        self.node_embed_ = tf.keras.layers.Dense(feat_n)
        self.message_passing_ = tf.keras.layers.Dot(axes=-1)
        self.input_ = tf.keras.layers.InputLayer(input_shape=(3, 3))

    def call(self, input_tensor, node_adj, training=False):
        # message passing
        input_tensor = self.input_(input_tensor)
        x = self.message_passing_([node_adj, input_tensor])
        # embedding
        x = self.node_embed_(x)
        x = self.bn_(x, training=training)
        return x

    def get_config(self):
        layer_configs = []
        config = {"class_name": "InputLayer"}
        config["config"] = self.input_.get_config()
        config["inbound_nodes"] = []
        layer_configs.append(config)
        config = {"class_name": "Dot"}
        config["config"] = self.message_passing_.get_config()
        config["inbound_nodes"] = [[[self.input_.get_config()["name"], 0, 0, {}]]]
        layer_configs.append(config)
        config = {"class_name": "Dense"}
        config["config"] = self.node_embed_.get_config()
        config["inbound_nodes"] = [[[self.message_passing_.get_config()["name"], 0, 0, {}]]]
        layer_configs.append(config)
        config = {"class_name": "BatchNormalization"}
        config["config"] = self.bn_.get_config()
        # the inbound node here should be the preceding Dense layer, not the
        # BatchNormalization layer itself
        config["inbound_nodes"] = [[[self.node_embed_.get_config()["name"], 0, 0, {}]]]
        layer_configs.append(config)
        config = {
            'name': self.name,
            'layers': copy.deepcopy(layer_configs),
            'input_layers': [[self.input_.get_config()["name"], 0, 0]],
            'output_layers': [[self.bn_.get_config()["name"], 0, 0]]
        }
        return config
```
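For readers unfamiliar with the layout: the Dot layer here plays the GCN message-passing role, i.e. a matrix product of the adjacency matrix A with the node-feature matrix X. A minimal, framework-free sketch of that step (pure Python, with a hypothetical 3-node graph; not part of the model above):

```python
# Pure-Python sketch of GCN message passing: X' = A @ X, where A is a
# (hypothetical) 3x3 adjacency matrix and X a 3x2 node-feature matrix.
# This is the product tf.keras.layers.Dot is standing in for above.
def matmul(A, X):
    # Plain triple-loop matrix multiply: rows of A against columns of X.
    return [[sum(A[i][k] * X[k][j] for k in range(len(X)))
             for j in range(len(X[0]))]
            for i in range(len(A))]

# Adjacency with self-loops for a 3-node path graph 0-1-2.
A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
# Two features per node.
X = [[1.0, 0.0],
     [0.0, 1.0],
     [2.0, 3.0]]

# Each node's new features are the sum of features over its neighbourhood.
print(matmul(A, X))  # -> [[1.0, 1.0], [3.0, 4.0], [2.0, 4.0]]
```

Since A has two non-batch axes, this is exactly the rank-2 Dot that the converter rejects.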
Hi @green-cabbage, we don't currently support the Dot operation, but this would be nice to add. Could you contribute a PR? We could provide some guidance.
@jmduarte that would be great! However, I have since moved to supporting PyTorch Geometric models, like pyg_to_hls. I have already talked with Mr. Abd Elabd, but if you know anyone else who can provide me some guidance, I would greatly appreciate it.
Hi @green-cabbage, have you been able to successfully deploy your GCN model? I am also trying to implement a GCN model using hls4ml, but no success so far :/