PaddleSeg
Model performance significantly worse between predict.py and infer.py
Hey, I've trained my MODNet matting model using PaddleSeg and tried both predict.py and infer.py for inference. predict.py gives accurate, very clean outputs. To deploy the model, I ran export.py with my config and created an inference model. Now, when I use infer.py on that exported model, the predicted images look very poor. The val_transforms in my deploy.yaml config were copied over directly from the original config, so the same transforms are passed to both predict.py and infer.py. What is causing this?
Original prediction (predict.py):
infer.py prediction:
When you export the model, it generates a deploy.yaml which can be used in infer.py without any further modification.
@wuyefeilin Yeah I'm aware of that, I used deploy.yaml as my config for infer.py. My problem is there's a significant difference in prediction quality as shown above between using predict.py and infer.py. Can you help me understand why that's happening?
Did you change the code? If yes, please describe the changes in detail. If not, can you send me the model from before and after export?
No I did not change the code anywhere.
The folder contains two zip files, before exporting and after exporting, with their config files.
Hi @wuyefeilin any idea why this is happening?
Please share your export command.
python export.py --config configs/modnet/modnet-hrnet_w18-iter19_1.yml --save_dir deploy/models/iter19_1 --model_path train_runs/iter19_1/iter_60000/model.pdparams
This is my config
batch_size: 8
iters: 100000

train_dataset:
  type: MattingDataset
  dataset_root: <my_dataset>
  train_file: train.txt
  transforms:
    - type: LoadImages
    - type: Resize
      target_size: [512, 512]
    - type: RandomDistort
    - type: RandomBlur
    - type: RandomNoise
    - type: RandomSharpen
    - type: RandomHorizontalFlip
    - type: Normalize
  mode: train

val_dataset:
  type: MattingDataset
  dataset_root: <my_dataset>
  val_file: val.txt
  transforms:
    - type: LoadImages
    - type: ResizeByShort
      short_size: 512
    - type: ResizeToIntMult
      mult_int: 32
    - type: Normalize
  mode: val
  get_trimap: False

model:
  type: MODNet
  backbone:
    type: HRNet_W18
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/hrnet_w18_ssld.tar.gz
  pretrained: Null

optimizer:
  type: sgd
  momentum: 0.9
  weight_decay: 4.0e-5

lr_scheduler:
  type: PiecewiseDecay
  boundaries: [40000, 80000]
  values: [0.01, 0.001, 0.0001]
Could it be because the train transforms use Resize, whereas the val transforms use ResizeByShort?
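If that suspicion is right, the two pipelines would feed the network different input shapes for non-square images. A minimal sketch of what the two resize strategies produce (pure Python, no PaddleSeg dependency; the helper names and the rounding behavior are my assumptions, not PaddleSeg's exact implementation):

```python
def resize_fixed(w, h, target=(512, 512)):
    """Fixed-size resize, as the train-time Resize transform does."""
    return target

def resize_by_short(w, h, short_size=512, mult=32):
    """ResizeByShort followed by ResizeToIntMult, as in the val/deploy pipeline."""
    scale = short_size / min(w, h)          # scale so the short side hits short_size
    w, h = round(w * scale), round(h * scale)
    # snap both sides down to the nearest multiple of `mult`
    return (w // mult * mult, h // mult * mult)

# For a non-square 1920x1080 image the two pipelines disagree:
print(resize_fixed(1920, 1080))     # (512, 512) -- aspect ratio distorted
print(resize_by_short(1920, 1080))  # (896, 512) -- aspect ratio preserved
```

If the exported deploy.yaml or the inference code ends up applying a different resize than predict.py does, the network sees inputs at a scale or aspect ratio it was not evaluated with, which could explain the quality gap.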
Make sure the model defined in the YAML and the model.pdparams file are consistent when exporting.
Did any UserWarning appear while you were exporting?
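One quick sanity check along these lines is to compare the parameter names in the checkpoint against those of the model the config builds; if export.py silently skips mismatched weights, the exported model ends up partly uninitialized. A generic sketch (the parameter names below are illustrative stand-ins; in practice you would obtain the two key lists from the loaded checkpoint and the constructed model's state dict):

```python
def compare_state_dicts(ckpt_keys, model_keys):
    """Report parameters present on one side but not the other."""
    ckpt_keys, model_keys = set(ckpt_keys), set(model_keys)
    return {
        "missing_in_ckpt": sorted(model_keys - ckpt_keys),
        "unexpected_in_ckpt": sorted(ckpt_keys - model_keys),
    }

# Illustrative names only -- any non-empty diff means the YAML model
# definition and model.pdparams were not produced by the same config.
diff = compare_state_dicts(
    ckpt_keys=["backbone.conv1.weight", "head.weight"],
    model_keys=["backbone.conv1.weight", "head.weight", "head.bias"],
)
print(diff["missing_in_ckpt"])  # ['head.bias']
```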
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.