Shakiba Shojaei
> In `scripts` dir, see `run.py` Sorry, I deleted my question just now. Yes, that's right, but `run.py` extracts patches with the same size 224x224, not the DIV2K dataset with...
> Try changing this? `--image_size 256 --step 128` My problem is that my images are 1000x2000, so a lot of patches are extracted for each image, and Colab Pro...
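The patch count the commenter is worried about can be estimated directly. A minimal sketch, assuming `run.py` uses a plain sliding window without edge padding (the function name here is illustrative, not the repo's actual code):

```python
def count_patches(height, width, patch_size, step):
    """Number of patches a sliding window extracts (no edge padding assumed)."""
    n_h = (height - patch_size) // step + 1
    n_w = (width - patch_size) // step + 1
    return n_h * n_w

# A 1000x2000 image with the suggested --image_size 256 --step 128:
count_patches(1000, 2000, 256, 128)  # -> 84 patches per image
```

Increasing `step` toward `patch_size` reduces the overlap and shrinks the patch count quickly, which is the lever the suggestion above is pulling.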
> You can also skip processing these images, but that will put a lot of pressure on data-reading I/O. The file name is just a placeholder and does...
> In the `config.py` file, `--image_size` = GT image size I still get the same error.
> Hello, > > Thank you for your contribution. I am using PieAPP as a loss function together with L1 loss. I think the optimal point that we are trying to...
> @shshojaei Q3: Yes, use the absolute value to avoid negative scores. Q2: Generally yes, but for values close to zero the metric isn't very stable/monotonic, so you can't confidently...
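The advice above (combine L1 with an absolute-valued perceptual score) can be sketched as follows. This is a minimal illustration, not the repo's code: `perceptual_score` is a hypothetical stand-in for a PieAPP-style scorer that maps a (prediction, target) pair to one scalar.

```python
import numpy as np

def combined_loss(pred, target, perceptual_score, weight=0.1):
    """L1 pixel loss plus a weighted |perceptual score|.

    The absolute value guards against the slightly negative scores
    mentioned in the answer above; `weight` balances the two terms
    (the value 0.1 is an arbitrary choice for this sketch).
    """
    l1 = np.abs(pred - target).mean()
    return l1 + weight * abs(perceptual_score(pred, target))

# Dummy scorer standing in for PieAPP, only for the sketch:
mse_scorer = lambda a, b: float(((a - b) ** 2).mean())
pred = np.zeros((3, 8, 8))
target = np.ones((3, 8, 8))
combined_loss(pred, target, mse_scorer, weight=0.1)  # -> 1.0 + 0.1 * 1.0 = 1.1
```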
> I was able to fix it as follows:
>
> ```
> state_dict = torch.load(weights_path, map_location='cpu')
> state_dict['ref_score_subtract.weight'] = state_dict['ref_score_subtract.weight'].view((1, 1))
> self.PieAPP_net.load_state_dict(state_dict)
> ```

Hi, I have a question...
> Sorry for bothering you, > > but when I run the original code, the sizes of `im_pre` and `im_label` differ: `im_pre` is always 6-8 pixels smaller than `im_label`....
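A prediction that comes out a few pixels smaller than the ground truth is typical of networks whose convolutions use no padding. A common workaround, sketched here under that assumption (the helper name is illustrative, not from the repo), is to center-crop the label to the prediction's size before computing the loss or metric:

```python
import numpy as np

def center_crop_to(label, pred_shape):
    """Center-crop `label` (an H x W x C array) to match a smaller prediction.

    Useful when 'valid' convolutions shave a few pixels off the output,
    as with the im_pre / im_label mismatch described above.
    """
    h, w = pred_shape[:2]
    H, W = label.shape[:2]
    top, left = (H - h) // 2, (W - w) // 2
    return label[top:top + h, left:left + w]

label = np.zeros((224, 224, 3))
pred = np.zeros((217, 218, 3))   # e.g. 6-7 pixels smaller, as reported
cropped = center_crop_to(label, pred.shape)
cropped.shape  # -> (217, 218, 3)
```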