unet
Changing target image size
Hi, Thanks a lot for the code! This is really helpful. I want the segmented image to be the same size as the input image. I am wondering if you can point me to the locations where I have to change the target size. I have tried changing it in data.py but always end up with some issues. Thanks, Aashrith
I have implemented this function: https://github.com/wuyang0329/unet/blob/master/data.py
You have to pass it through the parameters at the following lines:

myGen = trainGenerator([here come the params], target_size = (width, height))
model = unet(input_size = (width, height, channel_number))
testGene = testGenerator([here come the params], target_size = (width, height))
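One likely reason "some issues" appear when changing the size in data.py: the standard U-Net architecture halves the spatial dimensions at each of its pooling stages (four in the original paper), so the input size must be divisible by 2^4 = 16 or the decoder's skip-connection concatenations fail with a shape mismatch. The helper below is a hypothetical sketch of that check, not part of the repo:

```python
def valid_unet_size(width, height, depth=4):
    """Check that width and height survive `depth` 2x2 max-pool halvings
    without rounding, so the decoder can concatenate the skip connections.

    `depth=4` assumes the original U-Net with four pooling stages."""
    factor = 2 ** depth
    return width % factor == 0 and height % factor == 0

print(valid_unet_size(256, 256))  # True  -> safe target_size
print(valid_unet_size(512, 512))  # True  -> safe target_size
print(valid_unet_size(300, 300))  # False -> shape mismatch in the decoder
```

This is why round numbers like 256x256 or 512x512 "just work": they stay even through every pooling stage.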
Btw, I'm not sure whether width and height are in the right order, but I use the same value for both (e.g. 256,256 or 512,512) and it works fine.
Hello @iambackit, did you manage to get the size of the predictions right? I have the same problem! Thank you for your help :pray:
@jeromen7 Because you train your network at a given size (e.g. 256x256 or 512x512), it will also predict at that size, so all you have to do is resize the prediction back to the original image size afterwards.
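For that final resize, the usual tool is cv2.resize with nearest-neighbour interpolation (nearest-neighbour, unlike bilinear, keeps the mask's discrete class labels intact). As a dependency-free illustration of the same idea, here is a minimal NumPy sketch; the function name and shapes are made up for the example:

```python
import numpy as np

def resize_mask(mask, out_h, out_w):
    """Nearest-neighbour resize of a 2-D segmentation mask.

    Maps each output pixel to the nearest source pixel via integer
    index arithmetic, so no new (interpolated) label values appear."""
    in_h, in_w = mask.shape
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source col for each output col
    return mask[rows[:, None], cols]

# e.g. upscale a 256x256 prediction back to a 384x512 input image
pred = np.random.randint(0, 2, (256, 256))
full = resize_mask(pred, 384, 512)
print(full.shape)  # (384, 512)
```

In practice `cv2.resize(pred, (orig_w, orig_h), interpolation=cv2.INTER_NEAREST)` does the same job and is the common choice.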