Alexandr Kalinin
Try increasing the number of images ([default is 40](https://github.com/alxndrkalinin/pytorch_fnet/blob/master/examples/download_and_train.py#L16)) and the number of iterations ([default is 50000](https://github.com/alxndrkalinin/pytorch_fnet/blob/master/examples/download_and_train.py#L17)).
You can also try [ad_customization branch](https://github.com/AllenCellModeling/pytorch_fnet/tree/ad_customization) that added TTA and changed the prediction tile size. Maybe using [non-zero overlap](https://github.com/AllenCellModeling/pytorch_fnet/blob/64c53d123df644cebe5e4f7f2ab6efc5c0732f4e/fnet/predict_piecewise.py#L70) between [prediction tiles](https://github.com/AllenCellModeling/pytorch_fnet/blob/64c53d123df644cebe5e4f7f2ab6efc5c0732f4e/fnet/cli/predict.py#L301) can help. Finally, I recommend checking out [Distill...
[Try downgrading Python to 3.11?](https://stackoverflow.com/a/77364602)
Check that your image sizes in X and Y are divisible by 16; crop or pad them if they're not. A basic solution: add the following code before the `return` statement in...
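To illustrate the padding approach, here is a minimal sketch (the function name and the `(Z, Y, X)` axis order are my assumptions, not part of fnet's API) that zero-pads the Y and X dimensions up to the next multiple of 16:

```python
import numpy as np

def pad_yx_to_multiple_of_16(img):
    """Hypothetical helper: zero-pad the last two (Y, X) dimensions of a
    (Z, Y, X) volume so each is divisible by 16."""
    z, y, x = img.shape
    pad_y = (-y) % 16  # extra rows needed to reach the next multiple of 16
    pad_x = (-x) % 16  # extra columns needed
    return np.pad(img, ((0, 0), (0, pad_y), (0, pad_x)), mode="constant")
```

For example, a `(32, 100, 250)` volume would be padded to `(32, 112, 256)`. Cropping instead of padding works too, but padding avoids discarding signal at the image borders.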
What are the sizes of your images at training and at prediction? Fnet also [requires the minimum of 32 slices in Z](https://github.com/AllenCellModeling/pytorch_fnet/issues/105).
It seems there might be [a requirement for a minimum number of z slices](https://github.com/AllenCellModeling/pytorch_fnet/issues/105).
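If your stacks have fewer than 32 z slices, one workaround is to zero-pad them in Z before feeding them to fnet. A minimal sketch (the helper name and `(Z, Y, X)` axis order are my assumptions):

```python
import numpy as np

def ensure_min_z(img, min_z=32):
    """Hypothetical helper: zero-pad a (Z, Y, X) volume at the bottom so it
    has at least `min_z` slices, since fnet reportedly needs >= 32 in Z."""
    z = img.shape[0]
    if z >= min_z:
        return img
    return np.pad(img, ((0, min_z - z), (0, 0), (0, 0)), mode="constant")
```

Note that padded slices carry no real signal, so acquiring deeper stacks is preferable when possible.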