
Where to download the trained model 'weights.26-1.07.h5'

Open YingqianWang opened this issue 6 years ago • 10 comments

In Notebook 5, the model is loaded with: `model.load(r"C:\Users\Mathias Felix Gruber\Documents\GitHub\PConv-Keras\data\logs\imagenet_phase2\weights.26-1.07.h5", train_bn=False)`

But where can I download the trained model? I only want to run inference and do not want to perform training.

YingqianWang avatar Apr 19 '19 01:04 YingqianWang


Have you got the pre-trained model or weights? I used the pconv_imagenet model for testing, but I got a bad prediction.

ttxxr avatar Apr 19 '19 13:04 ttxxr

You can find the link to download the weights in the README:

> **Pre-trained weights**
>
> I've ported the VGG16 weights from PyTorch to Keras; this means the 1/255. pixel scaling can be used for the VGG16 network similarly to PyTorch.
>
> - Ported VGG 16 weights
> - PConv on Imagenet
> - PConv on Places2 [needs training]
> - PConv on CelebaHQ [needs training]
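Since the README mentions 1/255 pixel scaling, a common reason for bad predictions is passing unscaled inputs. Below is a hedged NumPy sketch of the preprocessing this implies, assuming the network expects float images in [0, 1], a 3-channel mask with 1 for valid pixels and 0 for holes, and a leading batch dimension (the function name and conventions here are illustrative; check the repo's notebooks for the exact requirements):

```python
import numpy as np

def preprocess(image_uint8, hole_mask):
    """Prepare one image/mask pair for PConv-style inference.

    image_uint8: (H, W, 3) uint8 image.
    hole_mask:   (H, W) bool array, True where pixels are missing.
    Returns a (1, H, W, 3) float32 image in [0, 1] and a matching
    (1, H, W, 3) float32 mask with 1 = valid pixel, 0 = hole.
    """
    img = image_uint8.astype(np.float32) / 255.0            # the 1/255. scaling
    mask = np.where(hole_mask, 0.0, 1.0).astype(np.float32)  # 1 = valid, 0 = hole
    mask = np.repeat(mask[..., None], 3, axis=-1)            # broadcast to 3 channels
    img = img * mask                                         # zero out the hole pixels
    return img[None], mask[None]                             # add batch dimension

# Example: a 512x512 image with a rectangular hole.
image = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
holes = np.zeros((512, 512), dtype=bool)
holes[100:200, 150:250] = True
img_batch, mask_batch = preprocess(image, holes)
print(img_batch.shape, mask_batch.shape)  # (1, 512, 512, 3) (1, 512, 512, 3)
```

If the weights were trained at a fixed resolution, resizing inputs to match that resolution before prediction is also worth checking.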

ghost avatar Apr 21 '19 15:04 ghost


Thanks, I have tried testing with the pre-trained weights, but I got a bad prediction. Is there any specific requirement for the input?

ttxxr avatar Apr 23 '19 03:04 ttxxr


Hello, when I tried to test with the pre-trained weights (pconv_imagenet.h5), I got an error at this call: `model = PConvUnet()` followed by `model.load('./data/model/pconv_imagenet.h5')`

ValueError: Layer #0 (named "p_conv2d_17" in the current model) was found to correspond to layer p_conv2d_49 in the save file. However the new layer p_conv2d_17 expects 3 weights, but the saved weights have 2 elements.

It seems like the pre-trained model and `PConvUnet()` have different structures, but I am not sure. Can you help me figure it out? Thanks.

leeqiaogithub avatar Dec 20 '19 12:12 leeqiaogithub


The possible cause could be the TF version. What version do you currently use?

ykeremy avatar May 16 '20 12:05 ykeremy


Based on @mrkeremyilmaz's reply, I tried `pip install -r requirements.txt` with all the versions specified by the author, and this problem no longer occurs.

Kirstihly avatar Jan 09 '21 22:01 Kirstihly


Hello, I have tried to train the model starting from the pre-trained weights (pconv_imagenet.h5) but got this error:

ValueError Traceback (most recent call last)
in ()
      1 # Instantiate the model
      2 model = PConvUnet(vgg_weights='./data/logs/pytorch_vgg16.h5')
----> 3 model.load("/content/gdrive/MyDrive/Partial_Conv/pconv_imagenet.h5", train_bn=False)

in load(self, filepath, train_bn, lr)
    238
    239 # Load weights into model
--> 240 epoch = int(os.path.basename(filepath).split('.')[1].split('-')[0])
    241 assert epoch > 0, "Could not parse weight file. Should include the epoch"
    242 self.current_epoch = epoch

ValueError: invalid literal for int() with base 10: 'h5'

Is there any way to correct this error?

sariva03 avatar Mar 23 '21 10:03 sariva03


It seems that the file name of the saved weights must be of the form `<name>.<epoch>-<loss>.h5`, such as `weights.26-1.07.h5`, so that `load` can parse the epoch number out of it.
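The filename requirement comes from the epoch-parsing line in the traceback above. A minimal sketch of that logic in pure Python (the function name `parse_epoch` and the example filenames are hypothetical):

```python
import os

def parse_epoch(filepath):
    # Mirrors the parsing in the load method shown in the traceback:
    # take the text between the first and second dot of the basename,
    # then keep what precedes the first dash.
    # "weights.26-1.07.h5" -> split('.') -> ['weights', '26-1', '07', 'h5']
    #                      -> [1] = '26-1' -> split('-')[0] = '26'
    return int(os.path.basename(filepath).split('.')[1].split('-')[0])

print(parse_epoch("data/logs/imagenet_phase2/weights.26-1.07.h5"))  # 26

# "pconv_imagenet.h5" has no epoch segment: split('.') gives
# ['pconv_imagenet', 'h5'], so int('h5') raises the ValueError seen above.
try:
    parse_epoch("pconv_imagenet.h5")
except ValueError as e:
    print("parse failed:", e)

# One workaround is renaming the file so an epoch (and loss) can be parsed,
# e.g. "pconv_imagenet.50-0.00.h5" -- the 50 here is an arbitrary placeholder,
# not the true training epoch of the downloaded weights.
print(parse_epoch("pconv_imagenet.50-0.00.h5"))  # 50
```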

marza1993 avatar Jun 21 '21 16:06 marza1993


My situation is the same as yours. Have you found a solution?

theWaySoFar-arch avatar Feb 12 '23 12:02 theWaySoFar-arch


Hello, I have the same problem. Did you find a solution? Thanks.

hariouat avatar Aug 18 '23 08:08 hariouat