Feeding 256x256 images as arrays for high GPU utilization
Feeding a list of 256x256 grayscale images to .eval ignores the batch_size argument and falls back to a fixed default, so I am currently unable to increase GPU memory utilization by raising batch_size. As a workaround I am feeding my images as a single array (512 x 3 x 256 x 256), but that forces me to specify channel_axis (which I set to 1 in this case), z_axis (I don't have a z axis since these aren't z-stacks or confocal slices, but I set it to 1 regardless) and do_3D=True. This turns out to be even slower than feeding a list of images. Is there a better way to feed same-sized 2D images (not z-stacks or confocal volumes) to CellposeSAM as an array and increase GPU utilization?
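For reference, a minimal sketch of the two approaches I described (model construction and the exact eval return values are assumptions based on the standard Cellpose API; only batch_size, channel_axis, z_axis, and do_3D are the parameters mentioned above):

```python
# Sketch of the two ways I am feeding images to CellposeSAM.
# Assumes the standard cellpose model/eval interface; shapes match my data.
import numpy as np
from cellpose import models

model = models.CellposeModel(gpu=True)  # assumed default CellposeSAM model

# 512 grayscale 256x256 images
imgs = [np.random.rand(256, 256).astype(np.float32) for _ in range(512)]

# Approach 1: list of 2D images -- batch_size seems to have no effect here
masks, flows, styles = model.eval(imgs, batch_size=64)

# Approach 2: stack into a (512, 3, 256, 256) array and treat the image
# axis as a fake z axis with do_3D=True -- runs, but is even slower
stack = np.stack([np.repeat(im[None, :, :], 3, axis=0) for im in imgs])
masks3d, flows3d, styles3d = model.eval(
    stack, do_3D=True, channel_axis=1, z_axis=0, batch_size=64
)
```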
Hi @sraut-scellbio, we will implement this shortly. It was previously implemented in CP3 but was lost in the transition to CP4.
Thanks
Hi, when will the new release with improved GPU utilization be available?