hls4ml
`TypeError` occurs when calling `hls_model.compile()`
Quick summary
I'm currently trying to create an HLS project for the PYNQ-ZU starting from the DeepTrack package. Specifically, I'm attempting to implement the single particle tracking example within the package as a starting point. After defining the configuration and the HLS model, however, `compile()` raises a `TypeError`.
Details
- I replicate in a separate notebook the entirety of the single particle tracking example model created with DeepTrack.
- I define the configuration and the HLS model, using the part name of the Zynq UltraScale+ chip on the PYNQ-ZU board as `part`.
Using `hls4ml.utils.plot_model(hls_model, show_shapes=True, show_precision=True, to_file=None)`, the plotted model is shown as follows:
Steps to Reproduce
- `pip install deeptrack`
- `pip install hls4ml`
- Create a Jupyter notebook starting from the single particle example (link in the quick summary)
- Add the following code as next steps:

```python
import plotting
from hls4ml.converters import convert_from_keras_model
from hls4ml.utils import config_from_keras_model
from pprint import pprint

config = config_from_keras_model(model=model, granularity="model")
hls_model = convert_from_keras_model(
    model=model,
    output_dir="./deeptrack_model/hls4ml_prj",
    project_name="deep_track_spt",
    part="XCZU5EG-SFVC784"
)
hls_model.compile()
```
Expected behavior
The model should compile without problems.
Actual behavior
The current traceback is generated:
Writing HLS project
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/home/jacopo/workspace/hls4ml/deeptrack_with_hls4ml.ipynb Cell 23 in <cell line: 1>()
----> 1 hls_model.compile()
File ~/.local/lib/python3.8/site-packages/hls4ml/model/hls_model.py:526, in HLSModel.compile(self)
525 def compile(self):
--> 526 self.write()
528 curr_dir = os.getcwd()
529 os.chdir(self.config.get_output_dir())
File ~/.local/lib/python3.8/site-packages/hls4ml/model/hls_model.py:523, in HLSModel.write(self)
519 return ''.join(choice(hexdigits) for m in range(length))
521 self.config.config['Stamp'] = make_stamp()
--> 523 self.config.writer.write_hls(self)
File ~/.local/lib/python3.8/site-packages/hls4ml/writer/vivado_writer.py:682, in VivadoWriter.write_hls(self, model)
680 self.write_build_script(model)
681 self.write_nnet_utils(model)
--> 682 self.write_yml(model)
683 self.write_tar(model)
684 print('Done')
File ~/.local/lib/python3.8/site-packages/hls4ml/writer/vivado_writer.py:660, in VivadoWriter.write_yml(self, model)
657 pass
659 with open(model.config.get_output_dir() + '/' + config_filename, 'w') as file:
--> 660 yaml.dump(model.config.config, file)
File /usr/lib/python3/dist-packages/yaml/__init__.py:200, in dump(data, stream, Dumper, **kwds)
195 def dump(data, stream=None, Dumper=Dumper, **kwds):
196 """
197 Serialize a Python object into a YAML stream.
198 If stream is None, return the produced string instead.
199 """
--> 200 return dump_all([data], stream, Dumper=Dumper, **kwds)
File /usr/lib/python3/dist-packages/yaml/__init__.py:188, in dump_all(documents, stream, Dumper, default_style, default_flow_style, canonical, indent, width, allow_unicode, line_break, encoding, explicit_start, explicit_end, version, tags)
186 dumper.open()
187 for data in documents:
--> 188 dumper.represent(data)
189 dumper.close()
190 finally:
File /usr/lib/python3/dist-packages/yaml/representer.py:26, in BaseRepresenter.represent(self, data)
25 def represent(self, data):
---> 26 node = self.represent_data(data)
27 self.serialize(node)
28 self.represented_objects = {}
File /usr/lib/python3/dist-packages/yaml/representer.py:47, in BaseRepresenter.represent_data(self, data)
45 data_types = type(data).__mro__
46 if data_types[0] in self.yaml_representers:
---> 47 node = self.yaml_representers[data_types[0]](self, data)
48 else:
49 for data_type in data_types:
File /usr/lib/python3/dist-packages/yaml/representer.py:205, in SafeRepresenter.represent_dict(self, data)
204 def represent_dict(self, data):
--> 205 return self.represent_mapping('tag:yaml.org,2002:map', data)
File /usr/lib/python3/dist-packages/yaml/representer.py:116, in BaseRepresenter.represent_mapping(self, tag, mapping, flow_style)
114 for item_key, item_value in mapping:
115 node_key = self.represent_data(item_key)
--> 116 node_value = self.represent_data(item_value)
117 if not (isinstance(node_key, ScalarNode) and not node_key.style):
118 best_style = False
File /usr/lib/python3/dist-packages/yaml/representer.py:51, in BaseRepresenter.represent_data(self, data)
49 for data_type in data_types:
50 if data_type in self.yaml_multi_representers:
---> 51 node = self.yaml_multi_representers[data_type](self, data)
52 break
53 else:
File /usr/lib/python3/dist-packages/yaml/representer.py:340, in Representer.represent_object(self, data)
337 function_name = '%s.%s' % (function.__module__, function.__name__)
338 if not args and not listitems and not dictitems \
339 and isinstance(state, dict) and newobj:
--> 340 return self.represent_mapping(
341 'tag:yaml.org,2002:python/object:'+function_name, state)
342 if not listitems and not dictitems \
343 and isinstance(state, dict) and not state:
344 return self.represent_sequence(tag+function_name, args)
File /usr/lib/python3/dist-packages/yaml/representer.py:116, in BaseRepresenter.represent_mapping(self, tag, mapping, flow_style)
114 for item_key, item_value in mapping:
115 node_key = self.represent_data(item_key)
--> 116 node_value = self.represent_data(item_value)
117 if not (isinstance(node_key, ScalarNode) and not node_key.style):
118 best_style = False
File /usr/lib/python3/dist-packages/yaml/representer.py:51, in BaseRepresenter.represent_data(self, data)
49 for data_type in data_types:
50 if data_type in self.yaml_multi_representers:
---> 51 node = self.yaml_multi_representers[data_type](self, data)
...
--> 315 reduce = data.__reduce_ex__(2)
316 elif hasattr(data, '__reduce__'):
317 reduce = data.__reduce__()
TypeError: cannot pickle 'weakref' object
System and version information
- OS: Ubuntu 18.04 (WSL2 on Windows 10)
- Python: 3.8.13
- hls4ml: 0.6.0
Hi, I don't have the answer to your issue unfortunately, but others might.
However, since you mention Pynq-ZU, would you be willing to help develop and test the support for that board with VivadoAccelerator backend? The main thing would be to make a version of this tcl script for that pynq-zu board. I think probably only 2-3 lines need to change. I could have a guess at it and make a branch for you to test, but if you would already know what to do you could try it yourself. Some testing help would be appreciated since I don't have one of these boards to test on.
And then of course we need to solve your issue so you have a model to test...
> Hi, I don't have the answer to your issue unfortunately, but others might.
Hi @thesps, thanks for the reply. Actually, I may have found the reason why it generated that error: I skimmed a bit more thoroughly through the example notebooks and realized that the model I was trying to infer includes convolutional layers, while the `IOType` was not set to `io_stream` but left at the default `io_parallel`. I'm not sure if that's the reason, but after rearranging the workflow a bit I was able to generate a project without errors.
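For reference, the knob in question is the `IOType` setting of the project configuration. A minimal sketch of the relevant fragment (key names taken from the `create_config` dictionary used later in this thread; the part string is from my setup):

```yaml
# Sketch of an hls4ml project configuration fragment.
# With convolutional layers, IOType must be io_stream;
# the default io_parallel fully unrolls the convolutions.
ProjectName: deep_track_spt
XilinxPart: xczu5eg-sfvc784-1-e
IOType: io_stream   # default is io_parallel
```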
> However, since you mention Pynq-ZU, would you be willing to help develop and test the support for that board with VivadoAccelerator backend? The main thing would be to make a version of this tcl script for that pynq-zu board. I think probably only 2-3 lines need to change. I could have a guess at it and make a branch for you to test, but if you would already know what to do you could try it yourself. Some testing help would be appreciated since I don't have one of these boards to test on.
As for this point I would gladly help, but I would be grateful if you could show me how to correctly perform the testing, since I'm just getting started with hls4ml.
While I'm at it: the Vivado version I have is 2022.1, and as I understand it, in this version Vivado HLS is now called Vitis HLS and is an entirely different program. I understand that version 2022.1 is not officially supported, but do you think I could still try to synthesize my model with it if I make some changes?
You'll need to make quite some changes to get it working with Vitis HLS. Good news is that the Vitis support is nearing completion, and may come as early as next week if you want to test it :wink:
@vloncar thanks for the update, good to know! Then I'll try to get hold of a Vivado HLS version compatible with hls4ml. Like I said, I would be more than happy to contribute to the project (which I find awesome); I just need some pointers on where to begin with testing :)
I've pushed the prototype Pynq ZU support to the pynq-zu branch. You can install it directly with `pip install git+https://github.com/fastmachinelearning/hls4ml@pynq-zu`. In order to test it, you should be able to work through the hls4ml tutorial part 7, "deployment". You would need to change the board in the convert step from pynq-z2 to pynq-zu:
```python
hls_model = hls4ml.converters.convert_from_keras_model(model,
                                                       hls_config=config,
                                                       output_dir='model_3/hls4ml_prj_pynq',
                                                       backend='VivadoAccelerator',
                                                       board='pynq-zu')
```
The API to make a bitfile also changed a little compared to the hls4ml release, so the line `hls4ml.templates.VivadoAcceleratorBackend.make_bitfile(hls_model)` can be removed, and the preceding line changed to `hls_model.build(csim=False, export=True, bitfile=True)` (add `bitfile=True`). All of what follows about how to run it on the board should be the same, but I don't have one of those to test on.
@thesps thanks, I've already started looking into that. Meanwhile, I managed to download Vivado HLS and started the building of my model. Now the synthesis fails due to this error:
```
INFO: [HLS 200-489] Unrolling loop 'Product1' (firmware/nnet_utils/nnet_dense_latency.h:85) in function 'nnet::conv_2d_cl<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 32u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 64u>, config10>' completely with a factor of 288.
ERROR: [XFORM 203-504] Stop unrolling loop 'Product1' (firmware/nnet_utils/nnet_dense_latency.h:85) in function 'nnet::conv_2d_cl<nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 32u>, nnet::array<ap_fixed<16, 6, (ap_q_mode)5, (ap_o_mode)3, 0>, 64u>, config10>' because it may cause large runtime and excessive memory usage due to increase in code size. Please avoid unrolling the loop or form sub-functions for code in the loop body.
```
I'm not exactly sure what to make of this.
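Edit: at least the number itself seems decodable. The factor 288 appears to follow from the conv layer's input size: with `io_parallel` and the `Latency` strategy, the inner product loop is fully unrolled, one multiplier per input of the im2col'd dense operation. A quick sanity check (the 3x3 kernel size is my assumption; the 32 input channels come from the `nnet::array<..., 32u>` template argument in the error):

```python
# Why HLS tries to unroll 'Product1' with a factor of 288
# (back-of-the-envelope sketch, not hls4ml internals).
filt_height, filt_width = 3, 3  # assumed kernel size
n_chan = 32                     # input channels, from the error message

# Each output pixel's dot product runs over kernel_area * input_channels
# inputs; fully unrolling it means one multiplier per input.
unroll_factor = filt_height * filt_width * n_chan
print(unroll_factor)  # 288
```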
Add `io_type='io_stream'` to `convert_from_keras_model`.
> Add `io_type='io_stream'` to `convert_from_keras_model`.
I created the configuration as follows:
```python
cfg = hls4ml.converters.create_config(backend='Vivado', project_name="deeptrack_hls")
cfg['IOType'] = 'io_stream'  # Must set this if using CNNs!
cfg['HLSConfig'] = hls_config
cfg['KerasModel'] = pre_hls_model
cfg['OutputDir'] = 'deeptrack_model_inference/hls4ml_prj/'
cfg['XilinxPart'] = 'xczu5eg-sfvc784-1-e'

hls_model = hls4ml.converters.keras_to_hls(cfg)
hls_model.compile()
```
Wouldn't this be the same?
Then you'll also need to use the `Resource` strategy and increase the `ReuseFactor` significantly.
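Roughly speaking (this is a simplified model of the resource trade-off, not the exact hls4ml implementation): with a `ReuseFactor` of R, each DSP multiplier is time-shared R times, so a dense layer needs on the order of `n_in * n_out / R` multipliers instead of `n_in * n_out`:

```python
# Simplified resource model: ReuseFactor trades multipliers for latency.
def approx_multipliers(n_in: int, n_out: int, reuse_factor: int) -> int:
    # Sketch: hls4ml constrains ReuseFactor so the division is exact.
    assert (n_in * n_out) % reuse_factor == 0
    return (n_in * n_out) // reuse_factor

print(approx_multipliers(288, 64, 1))   # fully parallel: 18432 multipliers
print(approx_multipliers(288, 64, 16))  # each multiplier reused 16x: 1152
```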
Hi again @vloncar , I now got the following error:
```
ERROR: [XFORM 203-133] Bitwidth of reshaped elements (73728 bits) exceeds the maximum bitwidth (65536 bits) for array 'w15.V' .
```
As you suggested I changed the configuration and compilation as follows:
```python
hls_config = hls4ml.utils.config_from_keras_model(
    pre_hls_model,
    granularity='name',
    default_reuse_factor=16
)
hls_config["Model"]["Strategy"] = "Resource"

hls_model = hls4ml.converters.convert_from_keras_model(pre_hls_model,
                                                       hls_config=hls_config,
                                                       project_name="deeptrack_hls",
                                                       output_dir='deeptrack_model_inference/hls4ml_prj/',
                                                       part="xczu5eg-sfvc784-1-e",
                                                       io_type="io_stream",
                                                       )
hls_model.compile()
```
This one is harder to get around by tweaking the configuration, and it will require changes to the model. You'll have to compress your model somewhat (fewer filters, fewer neurons), and/or reduce the precision of the weights (though that's best done while training with QKeras). This error is probably from the first dense layer after flattening, but you can check the generated HLS to be sure.
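To make the arithmetic concrete (assuming the 73728 bits come from 16-bit weights, which matches the default fixed-point width): the reshaped array packs all of the layer's weights side by side, so its bitwidth is `n_weights * weight_bits`, and that product has to stay under the 65536-bit cap from the error message:

```python
# Back-of-the-envelope for the XFORM 203-133 error (my reading of it).
MAX_BITS = 65536             # Vivado HLS reshaped-array limit, per the error
weight_bits = 16             # assumed: default 16-bit fixed-point weights

n_weights = 73728 // weight_bits
print(n_weights)             # 4608 weights packed into 'w15'

# Two ways to fit under the cap:
max_weights_at_16b = MAX_BITS // weight_bits  # 4096 -> shrink the layer
max_bits_per_weight = MAX_BITS // n_weights   # 14   -> narrower weights
print(max_weights_at_16b, max_bits_per_weight)
```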
Yes, I looked around previous issues and found a similar problem. I changed the default precision to `ap_fixed<16,8>`. If it works, hurray! If it doesn't, I'll try rebuilding my model using QKeras. It's not a big deal, since I'm just starting to play around to get familiar with it.
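As a side note on what `ap_fixed<16,8>` means: 8 integer bits (sign included) and 8 fractional bits, i.e. a range of [-128, 128) with a step of 1/256. A small Python model of that quantization (using round-to-nearest and saturation for illustration; plain `ap_fixed` actually truncates and wraps by default, which is what the `AP_RND`/`AP_SAT` modes change):

```python
def ap_fixed_model(x: float, width: int = 16, integer: int = 8) -> float:
    """Toy model of ap_fixed<width, integer> with AP_RND / AP_SAT modes."""
    frac_bits = width - integer
    step = 2.0 ** -frac_bits              # 1/256 for ap_fixed<16,8>
    lo = -(2.0 ** (integer - 1))          # -128.0
    hi = (2.0 ** (integer - 1)) - step    # 127.99609375
    q = round(x / step) * step            # round to the nearest step
    return min(max(q, lo), hi)            # saturate instead of wrapping

print(ap_fixed_model(3.14159))   # 3.140625
print(ap_fixed_model(200.0))     # saturates to 127.99609375
```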
Hello,
so I scrapped my original idea of using a pre-built model and instead switched to a custom model, which I then converted with QKeras. The training goes well and the accuracy is good. I then try to compile the model with the following settings:
```python
import hls4ml
from hls4ml.model.optimizer.passes.qkeras import OutputRoundingSaturationMode

OutputRoundingSaturationMode.layers = ['Activation']
OutputRoundingSaturationMode.rounding_mode = 'AP_RND'
OutputRoundingSaturationMode.saturation_mode = 'AP_SAT'

config = hls4ml.utils.config_from_keras_model(
    qmodel, granularity='name',
    default_precision='ap_fixed<16,12>',
    default_reuse_factor=8
)
config["Model"]["Strategy"] = "Accuracy"

hls_model = hls4ml.converters.convert_from_keras_model(qmodel,
                                                       io_type="io_stream",
                                                       hls_config=config,
                                                       output_dir='hls_model/hls4ml_prj_pynq',
                                                       project_name="SPTModel_hls",
                                                       backend='VivadoAccelerator',
                                                       board='pynq-z2')
hls_model.compile()
```
This is the traceback I get afterwards:
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
/home/jacopo/workspace/hls4ml/custom_deeptrack_qkeras.ipynb Cell 21 in <cell line: 21>()
     13 config["Model"]["Strategy"] = "Accuracy"
     14 hls_model = hls4ml.converters.convert_from_keras_model(qmodel,
     15                                                        io_type="io_stream",
     16                                                        hls_config=config,
    (...)
     19                                                        backend='VivadoAccelerator',
     20                                                        board='pynq-z2')
---> 21 hls_model.compile()
File ~/.local/lib/python3.8/site-packages/hls4ml/model/hls_model.py:526, in HLSModel.compile(self)
525 def compile(self):
--> 526 self.write()
528 curr_dir = os.getcwd()
529 os.chdir(self.config.get_output_dir())
File ~/.local/lib/python3.8/site-packages/hls4ml/model/hls_model.py:523, in HLSModel.write(self)
519 return ''.join(choice(hexdigits) for m in range(length))
521 self.config.config['Stamp'] = make_stamp()
--> 523 self.config.writer.write_hls(self)
File ~/.local/lib/python3.8/site-packages/hls4ml/writer/vivado_accelerator_writer.py:344, in VivadoAcceleratorWriter.write_hls(self, model)
339 """
340 Write the HLS project. Calls the VivadoBackend writer, and extra steps for VivadoAccelerator/AXI interface
341 """
342 self.vivado_accelerator_config = VivadoAcceleratorConfig(model.config, model.get_input_variables(),
343 model.get_output_variables())
--> 344 super(VivadoAcceleratorWriter, self).write_hls(model)
345 self.write_board_script(model)
346 self.write_driver(model)
File ~/.local/lib/python3.8/site-packages/hls4ml/writer/vivado_writer.py:682, in VivadoWriter.write_hls(self, model)
680 self.write_build_script(model)
681 self.write_nnet_utils(model)
--> 682 self.write_yml(model)
683 self.write_tar(model)
684 print('Done')
File ~/.local/lib/python3.8/site-packages/hls4ml/writer/vivado_writer.py:660, in VivadoWriter.write_yml(self, model)
657 pass
659 with open(model.config.get_output_dir() + '/' + config_filename, 'w') as file:
--> 660 yaml.dump(model.config.config, file)
File ~/.local/lib/python3.8/site-packages/yaml/__init__.py:253, in dump(data, stream, Dumper, **kwds)
248 def dump(data, stream=None, Dumper=Dumper, **kwds):
249 """
250 Serialize a Python object into a YAML stream.
251 If stream is None, return the produced string instead.
252 """
--> 253 return dump_all([data], stream, Dumper=Dumper, **kwds)
File ~/.local/lib/python3.8/site-packages/yaml/__init__.py:241, in dump_all(documents, stream, Dumper, default_style, default_flow_style, canonical, indent, width, allow_unicode, line_break, encoding, explicit_start, explicit_end, version, tags, sort_keys)
239 dumper.open()
240 for data in documents:
--> 241 dumper.represent(data)
242 dumper.close()
243 finally:
File ~/.local/lib/python3.8/site-packages/yaml/representer.py:27, in BaseRepresenter.represent(self, data)
26 def represent(self, data):
---> 27 node = self.represent_data(data)
28 self.serialize(node)
29 self.represented_objects = {}
File ~/.local/lib/python3.8/site-packages/yaml/representer.py:48, in BaseRepresenter.represent_data(self, data)
46 data_types = type(data).__mro__
47 if data_types[0] in self.yaml_representers:
---> 48 node = self.yaml_representers[data_types[0]](self, data)
49 else:
50 for data_type in data_types:
File ~/.local/lib/python3.8/site-packages/yaml/representer.py:207, in SafeRepresenter.represent_dict(self, data)
206 def represent_dict(self, data):
--> 207 return self.represent_mapping('tag:yaml.org,2002:map', data)
File ~/.local/lib/python3.8/site-packages/yaml/representer.py:118, in BaseRepresenter.represent_mapping(self, tag, mapping, flow_style)
116 for item_key, item_value in mapping:
117 node_key = self.represent_data(item_key)
--> 118 node_value = self.represent_data(item_value)
119 if not (isinstance(node_key, ScalarNode) and not node_key.style):
120 best_style = False
File ~/.local/lib/python3.8/site-packages/yaml/representer.py:52, in BaseRepresenter.represent_data(self, data)
50 for data_type in data_types:
51 if data_type in self.yaml_multi_representers:
---> 52 node = self.yaml_multi_representers[data_type](self, data)
53 break
54 else:
File ~/.local/lib/python3.8/site-packages/hls4ml/writer/vivado_writer.py:650, in VivadoWriter.write_yml.<locals>.keras_model_representer(dumper, keras_model)
648 def keras_model_representer(dumper, keras_model):
649 model_path = model.config.get_output_dir() + '/keras_model.h5'
--> 650 keras_model.save(model_path)
...
File h5py/_objects.pyx:55, in h5py._objects.with_phil.wrapper()
File h5py/h5d.pyx:87, in h5py.h5d.create()
ValueError: Unable to create dataset (name already exists)
I'm not exactly sure I understand what's going on. Apparently it has some problems when trying to save the network as an h5 file and this affects the writing of the yaml configuration file.
Hello, I was hoping to receive some feedback on this as I'm still stuck on this problem.
This comes from TensorFlow's `model.save()` call, and as the message implies, there's a duplicate name in the dataset (i.e., the weights). I would go through the model and check that the layer names are unique. Are you able to save this model before feeding it to hls4ml by calling `model.save('mymodel.h5')`?
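A quick way to spot the duplicates (a generic sketch; `model.layers` is the standard Keras attribute, and the layer names below are made up for illustration):

```python
from collections import Counter

def duplicate_names(names):
    """Return, sorted, the names that occur more than once."""
    return sorted(n for n, c in Counter(names).items() if c > 1)

# In practice: layer_names = [layer.name for layer in model.layers]
layer_names = ["conv2d", "conv2d_1", "dense", "conv2d", "dense_1"]
print(duplicate_names(layer_names))  # ['conv2d']
```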
Hello,
I finally managed, after a while, to build the model and test it on my PYNQ-ZU. The synthesis went fine; the model is still not accurate when switching from QKeras to the HLS model, but that is another problem which I'm investigating.
Apparently, what I mentioned in the previous message was due to the fact that I was training the model twice, before and after pruning, hence causing the naming issue. Lesson learned: always train after setting layers to be pruned.
Closing the issue now.