hls4ml-tutorial

PyTorch converter doesn't work.

Open areeb-agha opened this issue 2 years ago • 7 comments

I trained a VGG16 model on the CIFAR-100 dataset in PyTorch. When I run:

import hls4ml
import plotting

config = hls4ml.utils.config_from_pytorch_model(model, granularity='layer')
print("-----------------------------------")
print("Configuration")
plotting.print_dict(config)
print("-----------------------------------")
hls_model = hls4ml.converters.convert_from_pytorch_model(
    model, hls_config=config, output_dir='model_3/hls4ml_prj', part='xcu250-figd2104-2L-e'
)

I get an error on the last line: TypeError: cannot unpack non-iterable NoneType object

On the other hand, when I ran a pre-trained Keras VGG16 model through hls4ml, it worked smoothly without any error. As far as I can tell, the cause of the error is the config generated by config = hls4ml.utils.config_from_pytorch_model(model, granularity='layer'). Printing this config variable shows {'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1, 'Strategy': 'Latency'}}, which contains no information about the layers. In the Keras case, config = hls4ml.utils.config_from_keras_model(model, granularity='layer') generates the following output:


Interpreting Model
Topology:
Layer name: input_1, layer type: InputLayer, input shapes: [[None, 224, 224, 3]], output shape: [None, 224, 224, 3]
Layer name: block1_conv1, layer type: Conv2D, input shapes: [[None, 224, 224, 3]], output shape: [None, 224, 224, 64]
Layer name: block1_conv2, layer type: Conv2D, input shapes: [[None, 224, 224, 64]], output shape: [None, 224, 224, 64]
Layer name: block1_pool, layer type: MaxPooling2D, input shapes: [[None, 224, 224, 64]], output shape: [None, 112, 112, 64]
Layer name: block2_conv1, layer type: Conv2D, input shapes: [[None, 112, 112, 64]], output shape: [None, 112, 112, 128]
Layer name: block2_conv2, layer type: Conv2D, input shapes: [[None, 112, 112, 128]], output shape: [None, 112, 112, 128]
Layer name: block2_pool, layer type: MaxPooling2D, input shapes: [[None, 112, 112, 128]], output shape: [None, 56, 56, 128]
Layer name: block3_conv1, layer type: Conv2D, input shapes: [[None, 56, 56, 128]], output shape: [None, 56, 56, 256]
Layer name: block3_conv2, layer type: Conv2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 56, 56, 256]
Layer name: block3_conv3, layer type: Conv2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 56, 56, 256]
Layer name: block3_pool, layer type: MaxPooling2D, input shapes: [[None, 56, 56, 256]], output shape: [None, 28, 28, 256]
Layer name: block4_conv1, layer type: Conv2D, input shapes: [[None, 28, 28, 256]], output shape: [None, 28, 28, 512]
Layer name: block4_conv2, layer type: Conv2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 28, 28, 512]
Layer name: block4_conv3, layer type: Conv2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 28, 28, 512]
Layer name: block4_pool, layer type: MaxPooling2D, input shapes: [[None, 28, 28, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv1, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv2, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_conv3, layer type: Conv2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 14, 14, 512]
Layer name: block5_pool, layer type: MaxPooling2D, input shapes: [[None, 14, 14, 512]], output shape: [None, 7, 7, 512]
Layer name: flatten, layer type: Reshape, input shapes: [[None, 7, 7, 512]], output shape: [None, 25088]
Layer name: fc1, layer type: Dense, input shapes: [[None, 25088]], output shape: [None, 4096]
Layer name: fc2, layer type: Dense, input shapes: [[None, 4096]], output shape: [None, 4096]
Layer name: predictions, layer type: Dense, input shapes: [[None, 4096]], output shape: [None, 1000]
{'Model': {'Precision': 'fixed<16,6>', 'ReuseFactor': 1, 'Strategy': 'Latency', 'BramFactor': 1000000000, 'TraceOutput': False}}

Please resolve this issue.

areeb-agha avatar Jun 19 '23 14:06 areeb-agha
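
A likely cause of the TypeError reported above: unlike a Keras model, a PyTorch module does not carry its input shape, and hls4ml's PyTorch converter needs that shape to trace the network; when it is missing, the converter ends up trying to unpack None. In hls4ml versions from around this time, convert_from_pytorch_model accepts an input_shape argument, so a plausible fix looks roughly like the sketch below (the torchvision model is a hypothetical stand-in for the trained CIFAR-100 network, and the exact signature may differ between releases):

import hls4ml
import torchvision.models as models

# Stand-in for the trained CIFAR-100 VGG16 from the report
model = models.vgg16()
model.eval()

config = hls4ml.utils.config_from_pytorch_model(model, granularity='layer')

# PyTorch modules do not record their input shape, so hls4ml must be told
# explicitly; channels-first, with the batch dimension given as None.
hls_model = hls4ml.converters.convert_from_pytorch_model(
    model,
    input_shape=(None, 3, 32, 32),  # CIFAR-100 images are 3x32x32
    hls_config=config,
    output_dir='model_3/hls4ml_prj',
    part='xcu250-figd2104-2L-e',
)

If config still lacks per-layer keys after that, the converter may simply not support some of the layers in the model; checking the installed hls4ml version's list of supported PyTorch layers would be the next step.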

Hello, I also encountered this problem. Have you solved this issue?

zyt1024 avatar Jul 22 '23 13:07 zyt1024

No, I am still waiting for their reply. It seems their PyTorch converter has a bug. I temporarily switched to Keras, which works fine.

areeb-agha avatar Jul 22 '23 15:07 areeb-agha

Hi, did you use Vitis HLS or Vivado HLS? Please reply.

poulamiM25 avatar Aug 27 '23 18:08 poulamiM25

I used Vitis HLS.

areeb-agha avatar Aug 28 '23 09:08 areeb-agha

Can you please explain how you ran the code with Vitis HLS? The GitHub repo is not working for Vitis HLS.

poulamiM25 avatar Aug 28 '23 11:08 poulamiM25
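
Regarding the Vitis HLS question: hls4ml's convert functions take a backend argument, and versions that include the Vitis backend let you target Vitis HLS instead of the default Vivado HLS. A rough sketch using the pre-trained Keras VGG16 mentioned earlier (backend names and defaults depend on the installed version):

import hls4ml
from tensorflow import keras

# Pre-trained Keras VGG16, as used earlier in the thread
model = keras.applications.VGG16(weights='imagenet')

config = hls4ml.utils.config_from_keras_model(model, granularity='layer')

# backend='Vitis' targets Vitis HLS; 'Vivado' (i.e. Vivado HLS) was the
# default in the versions from this era.
hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    backend='Vitis',
    output_dir='model_vitis/hls4ml_prj',
    part='xcu250-figd2104-2L-e',
)

hls_model.compile()          # builds the C simulation model
hls_model.build(synth=True)  # invokes the selected HLS tool

Note that a full ImageNet-scale VGG16 is far larger than what typically synthesizes in practice; the sketch only illustrates how the backend is selected.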

Are you using PyTorch or Keras?

areeb-agha avatar Aug 28 '23 12:08 areeb-agha

I am using Keras only.

poulamiM25 avatar Aug 28 '23 12:08 poulamiM25