[BUG] custom-trained models produce different results between cellpose-3 and cellpose-4
Describe the bug A custom-trained model produces widely different results between cellpose-4 (0 cells identified) and cellpose-3 (66 cells identified). I'm not sure whether this is expected behaviour given the changes made in cellpose-4; I could not find it in the docs. If it is expected behaviour, and if the version the model was trained with is stored in the model file, perhaps a warning message could be output when an older custom-trained model is used in cellpose-4?
To Reproduce I'm running cellpose in conda environments on an M1 chip with macOS 15.5, using a custom model trained with cellpose-2.
For cellpose 3.1.1.2:
cellpose version: 3.1.1.2
platform: darwin
python version: 3.10.17
torch version: 2.7.0
For cellpose 4.0.4:
cellpose version: 4.0.4
platform: darwin
python version: 3.10.17
torch version: 2.7.0
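The exact invocation isn't shown in the report; below is a minimal sketch of how both runs can be reproduced through the Python API, assuming a single test image (the image filename is a placeholder and the model path is elided as in the log; adjust both to your setup):

```python
from cellpose import models, io

img = io.imread("example_image.tif")  # placeholder test image
model = models.CellposeModel(gpu=True,
                             pretrained_model=".../custom_model")  # path elided as in the log

# same call in both the cellpose-3 and cellpose-4 environments
masks, flows, styles = model.eval(img)
print(f"{masks.max()} cells found")
```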
Run log Using cellpose version 3.1.1.2:
2025-06-01 [INFO] ** TORCH MPS version installed and working. **
2025-06-01 [INFO] >>>> using GPU (MPS)
2025-06-01 [INFO] >>>> loading model .../custom_model
2025-06-01 [INFO] >>>> model diam_mean = 30.000 (ROIs rescaled to this size during training)
2025-06-01 [INFO] >>>> model diam_labels = 42.401 (mean diameter of training ROIs)
{'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0, 'smooth_radius': 0, 'tile_norm_blocksize': 0, 'tile_norm_smooth3D': 1, 'invert': False}
2025-06-01 [INFO] 66 cells found with model in 3.841 sec
Using cellpose version 4.0.4:
2025-06-01 [INFO] ** TORCH MPS version installed and working. **
2025-06-01 [INFO] >>>> using GPU (MPS)
2025-06-01 [INFO] >>>> loading model .../custom_model
{'lowhigh': None, 'percentile': [1.0, 99.0], 'normalize': True, 'norm3D': True, 'sharpen_radius': 0.0, 'smooth_radius': 0.0, 'tile_norm_blocksize': 0.0, 'tile_norm_smooth3D': 0.0, 'invert': False}
2025-06-01 [INFO] 0 cells found with model in 23.261 sec
Screenshots
In cellpose 4, the authors updated the backend model to Cellpose-SAM, so the old model weights are not compatible with the new architecture.
@MinTTT is correct that the networks are incompatible. @Pentabyteman You will have to retrain cp4 on the data that you fine-tuned your cp2/3 models on.
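For anyone landing here: a rough sketch of fine-tuning a new model in cellpose 4 via the Python API, assuming a training directory of images paired with "_masks" label files (the directory path, mask filter, and hyperparameters are placeholders; check the training docs of your release for the exact options):

```python
from cellpose import io, models, train

io.logger_setup()

# load training (and optional test) images/labels; directory and mask_filter are placeholders
output = io.load_train_test_data("path/to/train_dir", mask_filter="_masks")
images, labels, image_names, test_images, test_labels, test_names = output

# start from the built-in Cellpose-SAM weights and fine-tune on your own data
model = models.CellposeModel(gpu=True)
model_path, train_losses, test_losses = train.train_seg(
    model.net,
    train_data=images, train_labels=labels,
    test_data=test_images, test_labels=test_labels,
    weight_decay=0.1, learning_rate=1e-5, n_epochs=100,
    model_name="custom_model_cp4",
)
```

The saved model can then be loaded the same way as the old one, e.g. `models.CellposeModel(gpu=True, pretrained_model=model_path)`.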
closing due to inactivity