PSPNet-Keras-tensorflow
Unicode error
This is on Windows 10 64-bit, Python 3.6.2, TensorFlow 1.3, with the npy weights downloaded from Dropbox.
python pspnet.py -m pspnet50_ade20k -i example_images/ade20k.jpg -o example_results/ade20k.jpg
Using TensorFlow backend.
2017-10-14 16:39:03.003842: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-10-14 16:39:03.003972: W C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-10-14 16:39:03.347437: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:955] Found device 0 with properties:
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate (GHz) 1.645
pciBusID 0000:01:00.0
Total memory: 8.00GiB
Free memory: 6.62GiB
2017-10-14 16:39:03.347582: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:976] DMA: 0
2017-10-14 16:39:03.348951: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:986] 0: Y
2017-10-14 16:39:03.349478: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\36\tensorflow\core\common_runtime\gpu\gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0)
Namespace(flip=False, id='0', input_path='example_images/ade20k.jpg', model='pspnet50_ade20k', multi_scale=False, output_path='example_results/ade20k.jpg', sliding=False)
No Keras model & weights found, import from npy weights.
Building a PSPNet based on ResNet 50 expecting inputs of shape (473, 473) predicting 150 classes
PSP module will interpolate to a final feature map size of (60, 60)
Importing weights from weights\npy\pspnet50_ade20k.npy
Processing input_1 Processing conv1_1_3x3_s2 Processing conv1_1_3x3_s2_bn Processing activation_1 Processing conv1_2_3x3 Processing conv1_2_3x3_bn Processing activation_2 Processing conv1_3_3x3 Processing conv1_3_3x3_bn Processing activation_3 Processing max_pooling2d_1 Processing activation_4 Processing conv2_1_1x1_reduce Processing conv2_1_1x1_reduce_bn Processing activation_5 Processing zero_padding2d_1 Processing conv2_1_3x3 Processing conv2_1_3x3_bn Processing activation_6 Processing conv2_1_1x1_increase Processing conv2_1_1x1_proj Processing conv2_1_1x1_increase_bn Processing conv2_1_1x1_proj_bn Processing add_1 Processing activation_7 Processing conv2_2_1x1_reduce Processing conv2_2_1x1_reduce_bn Processing activation_8 Processing zero_padding2d_2 Processing conv2_2_3x3 Processing conv2_2_3x3_bn Processing activation_9 Processing conv2_2_1x1_increase Processing conv2_2_1x1_increase_bn Processing add_2 Processing activation_10 Processing conv2_3_1x1_reduce Processing conv2_3_1x1_reduce_bn Processing activation_11 Processing zero_padding2d_3 Processing conv2_3_3x3 Processing conv2_3_3x3_bn Processing activation_12 Processing conv2_3_1x1_increase Processing conv2_3_1x1_increase_bn Processing add_3 Processing activation_13 Processing conv3_1_1x1_reduce Processing conv3_1_1x1_reduce_bn Processing activation_14 Processing zero_padding2d_4 Processing conv3_1_3x3 Processing conv3_1_3x3_bn Processing activation_15 Processing conv3_1_1x1_increase Processing conv3_1_1x1_proj Processing conv3_1_1x1_increase_bn Processing conv3_1_1x1_proj_bn Processing add_4
Processing activation_16 Processing conv3_2_1x1_reduce Processing conv3_2_1x1_reduce_bn Processing activation_17 Processing zero_padding2d_5 Processing conv3_2_3x3 Processing conv3_2_3x3_bn Processing activation_18 Processing conv3_2_1x1_increase Processing conv3_2_1x1_increase_bn Processing add_5 Processing activation_19 Processing conv3_3_1x1_reduce Processing conv3_3_1x1_reduce_bn Processing activation_20 Processing zero_padding2d_6 Processing conv3_3_3x3 Processing conv3_3_3x3_bn Processing activation_21 Processing conv3_3_1x1_increase Processing conv3_3_1x1_increase_bn Processing add_6 Processing activation_22 Processing conv3_4_1x1_reduce Processing conv3_4_1x1_reduce_bn Processing activation_23 Processing zero_padding2d_7 Processing conv3_4_3x3 Processing conv3_4_3x3_bn Processing activation_24 Processing conv3_4_1x1_increase Processing conv3_4_1x1_increase_bn Processing add_7 Processing activation_25 Processing conv4_1_1x1_reduce Processing conv4_1_1x1_reduce_bn Processing activation_26 Processing zero_padding2d_8 Processing conv4_1_3x3 Processing conv4_1_3x3_bn Processing activation_27 Processing conv4_1_1x1_increase Processing conv4_1_1x1_proj Processing conv4_1_1x1_increase_bn Processing conv4_1_1x1_proj_bn Processing add_8 Processing activation_28 Processing conv4_2_1x1_reduce Processing conv4_2_1x1_reduce_bn Processing activation_29 Processing zero_padding2d_9 Processing conv4_2_3x3 Processing conv4_2_3x3_bn Processing activation_30 Processing conv4_2_1x1_increase Processing conv4_2_1x1_increase_bn Processing add_9 Processing activation_31 Processing conv4_3_1x1_reduce Processing conv4_3_1x1_reduce_bn Processing activation_32 Processing zero_padding2d_10 Processing conv4_3_3x3 Processing conv4_3_3x3_bn Processing activation_33 Processing conv4_3_1x1_increase Processing conv4_3_1x1_increase_bn Processing add_10 Processing activation_34 Processing conv4_4_1x1_reduce Processing conv4_4_1x1_reduce_bn Processing activation_35 Processing zero_padding2d_11 Processing conv4_4_3x3 Processing conv4_4_3x3_bn Processing activation_36 Processing conv4_4_1x1_increase Processing conv4_4_1x1_increase_bn Processing add_11 Processing activation_37 Processing conv4_5_1x1_reduce Processing conv4_5_1x1_reduce_bn Processing activation_38 Processing zero_padding2d_12 Processing conv4_5_3x3 Processing conv4_5_3x3_bn Processing activation_39 Processing conv4_5_1x1_increase Processing conv4_5_1x1_increase_bn Processing add_12 Processing activation_40 Processing conv4_6_1x1_reduce Processing conv4_6_1x1_reduce_bn Processing activation_41 Processing zero_padding2d_13 Processing conv4_6_3x3 Processing conv4_6_3x3_bn Processing activation_42 Processing conv4_6_1x1_increase Processing conv4_6_1x1_increase_bn Processing add_13 Processing activation_43 Processing conv5_1_1x1_reduce Processing conv5_1_1x1_reduce_bn Processing activation_44 Processing zero_padding2d_14 Processing conv5_1_3x3 Processing conv5_1_3x3_bn Processing activation_45 Processing conv5_1_1x1_increase Processing conv5_1_1x1_proj Processing conv5_1_1x1_increase_bn Processing conv5_1_1x1_proj_bn Processing add_14 Processing activation_46 Processing conv5_2_1x1_reduce Processing conv5_2_1x1_reduce_bn Processing activation_47 Processing zero_padding2d_15 Processing conv5_2_3x3 Processing conv5_2_3x3_bn Processing activation_48 Processing conv5_2_1x1_increase Processing conv5_2_1x1_increase_bn Processing add_15 Processing activation_49 Processing conv5_3_1x1_reduce Processing conv5_3_1x1_reduce_bn Processing activation_50 Processing 
zero_padding2d_16 Processing conv5_3_3x3 Processing conv5_3_3x3_bn Processing activation_51 Processing conv5_3_1x1_increase Processing conv5_3_1x1_increase_bn Processing add_16 Processing activation_52 Processing average_pooling2d_4 Processing average_pooling2d_3 Processing average_pooling2d_2 Processing average_pooling2d_1 Processing conv5_3_pool6_conv Processing conv5_3_pool3_conv Processing conv5_3_pool2_conv Processing conv5_3_pool1_conv Processing conv5_3_pool6_conv_bn Processing conv5_3_pool3_conv_bn Processing conv5_3_pool2_conv_bn Processing conv5_3_pool1_conv_bn Processing activation_56 Processing activation_55 Processing activation_54 Processing activation_53 Processing lambda_4 Processing lambda_3 Processing lambda_2 Processing lambda_1 Processing concatenate_1 Processing conv5_4 Processing conv5_4_bn Processing activation_57 Processing dropout_1 Processing conv6 Processing lambda_5 Processing activation_58
Set a total of 121 weights
Finished importing weights. Writing keras model & weights
Traceback (most recent call last):
  File "pspnet.py", line 265, in <module>
    weights=args.model)
  File "pspnet.py", line 142, in __init__
    input_shape=input_shape, weights=weights)
  File "pspnet.py", line 48, in __init__
    self.set_npy_weights(weights)
  File "pspnet.py", line 129, in set_npy_weights
    json_string = self.model.to_json()
  File "C:\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2665, in to_json
    model_config = self._updated_config()
  File "C:\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2632, in _updated_config
    config = self.get_config()
  File "C:\Anaconda3\lib\site-packages\keras\engine\topology.py", line 2326, in get_config
    layer_config = layer.get_config()
  File "C:\Anaconda3\lib\site-packages\keras\layers\core.py", line 659, in get_config
    function = func_dump(self.function)
  File "C:\Anaconda3\lib\site-packages\keras\utils\generic_utils.py", line 175, in func_dump
    code = marshal.dumps(func.__code__).decode('raw_unicode_escape')
UnicodeDecodeError: 'rawunicodeescape' codec can't decode bytes in position 208-209: truncated \UXXXXXXXX escape
lol. probably this: https://github.com/fchollet/keras/issues/4135
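For context, the crash happens in Keras's func_dump, which serializes the Lambda layers' bytecode with marshal and then decodes the raw bytes as 'raw_unicode_escape'; that codec cannot round-trip arbitrary byte sequences. Here is a standalone sketch of the failure mode (the byte string is just an illustrative example, not bytes from the actual model), and of the base64 round-trip that later Keras releases use instead, which is exactly what the func_load frame in the traceback below is doing:

import codecs

# Bytes containing a backslash-U that is not followed by 8 hex digits,
# which marshalled bytecode can legitimately contain:
raw = b'\\U12'

try:
    raw.decode('raw_unicode_escape')  # what the func_dump in the traceback above does
except UnicodeDecodeError as e:
    print('raw_unicode_escape cannot round-trip arbitrary bytes:', e)

# base64 always round-trips cleanly:
encoded = codecs.encode(raw, 'base64').decode('ascii')
assert codecs.decode(encoded.encode('ascii'), 'base64') == raw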
Have you got this fixed? I am having a similar issue:
File "C:\Miniconda3\lib\site-packages\keras\utils\generic_utils.py", line 140, in deserialize_keras_object list(custom_objects.items()))) File "C:\Miniconda3\lib\site-packages\keras\layers\core.py", line 699, in from _config function = func_load(config['function'], globs=globs) File "C:\Miniconda3\lib\site-packages\keras\utils\generic_utils.py", line 224, in func_load raw_code = codecs.decode(code.encode('ascii'), 'base64') UnicodeEncodeError: 'ascii' codec can't encode character '\xe3' in position 0: o rdinal not in range(128)
I wonder if you have any suggestions?
@ljm355 I solved this problem by using an older version of Keras. My setup is as follows:
python 3.4.3
tensorflow-cpu 1.4.1
keras 2.0.6 (this is what made it work)
Apparently there is a bug in Keras 2.1.2 (the most recent release at the time) where it is unable to deserialize old models, so I just used an older version of Keras and it works well.
Thanks! I tried a number of different Keras and Python versions and finally got it working with the following configuration:
Windows 7
python 3.5.0 (conda install python=3.5.0)
keras-gpu 2.0.8 (conda install -c anaconda keras-gpu)
tensorflow-gpu 1.1.0 (conda install tensorflow-gpu)
One caveat about the Python version: loading the saved model under Python 3.6 will cause 'SystemError: unknown opcode', presumably because the Lambda-layer bytecode embedded in the cached Keras JSON is specific to the Python version that produced it.
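Since the thread keeps coming back to exact version combinations, here is a small convenience snippet (not part of the repo) that prints the versions actually loaded in your environment, so they can be compared against the combinations reported above:

import sys
import keras
import tensorflow as tf

# Report the interpreter and library versions in use.
print('python    :', sys.version.split()[0])
print('keras     :', keras.__version__)
print('tensorflow:', tf.__version__)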
@akshaychawla I think there is an easier way to solve this problem (and I found it works): in layers_builder.py, call the build_pspnet function as
model = build_pspnet(nb_classes=19, resnet_layers=101, input_shape=(713, 713), activation='softmax')
print(model.to_json())
and replace the contents of the cached json file with the printed string. This directly outputs a json string that matches your Keras version.
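For reference, a minimal sketch of that regeneration step. The parameters below are the pspnet101_cityscapes values from the comment above; the output path is an assumption, so adjust the class count, ResNet depth, input shape, and filename for the model and cache location you actually use:

from layers_builder import build_pspnet

# Rebuild the architecture with the locally installed Keras, then overwrite the
# cached JSON so later runs deserialize a graph produced by this Keras version.
model = build_pspnet(nb_classes=19, resnet_layers=101,
                     input_shape=(713, 713), activation='softmax')

json_path = 'weights/keras/pspnet101_cityscapes.json'  # assumed cache location
with open(json_path, 'w') as f:
    f.write(model.to_json())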
@tangsanli5201 Cool! This is a much better solution as it works for all versions of Keras. Could you dump the json for the latest keras (via pip) and raise a PR? It will help others who are facing this problem.
I am having a similar issue on CentOS:
$ python3.6 pspnet.py -m pspnet101_cityscapes -i example_images/cityscapes.png -o example_results/cityscapes.jpg
Using TensorFlow backend.
2018-11-11 19:09:55.178295: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Namespace(flip=True, glob_path=None, id='0', input_path='example_images/cityscapes.png', input_size=500, model='pspnet101_cityscapes', output_path='example_results/cityscapes.jpg', weights=None)
Keras model & weights found, loading...
XXX lineno: 18, opcode: 0
Traceback (most recent call last):
File "pspnet.py", line 188, in
I'm using tensorflow 1.12, keras 2.0.6, and python 3.6
I'm using scipy==1.0.0, tensorflow-gpu==1.13.1, keras==2.0.6, opencv-python==4.1.0.25, and python 3.5.2. It works!