coremltools
CUDA out of memory with DKMPalettizer
❓Question
I'm trying to run training-time palettization with n_bits=4. I always get a CUDA out-of-memory error on the first step (even after reducing the batch size to one).
A few details:
- I'm palettizing the Stable Diffusion v1.5 UNet.
- I have 24 GB of GPU memory.
Are there any tips to avoid this?
```python
from coremltools.optimize.torch.palettization import (
    DKMPalettizer,
    DKMPalettizerConfig,
    ModuleDKMPalettizerConfig,
)

# Palettize all supported modules to 4 bits.
config = DKMPalettizerConfig(global_config=ModuleDKMPalettizerConfig(n_bits=4))
palettizer = DKMPalettizer(unet, config)
unet = palettizer.prepare(inplace=True)
unet, optimizer, lr_scheduler = accelerator.prepare(unet, optimizer, lr_scheduler)
```
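For context, one variation I was planning to try is to limit which weights get palettized and to cap DKM's memory use. This is a sketch, assuming the `weight_threshold` and `palett_max_mem` parameters of `ModuleDKMPalettizerConfig` behave as described in the coremltools.optimize.torch docs (verify against your installed version):

```python
from coremltools.optimize.torch.palettization import (
    DKMPalettizer,
    DKMPalettizerConfig,
    ModuleDKMPalettizerConfig,
)

# Sketch (untested): parameter names assumed from the coremltools docs.
config = DKMPalettizerConfig(
    global_config=ModuleDKMPalettizerConfig(
        n_bits=4,
        weight_threshold=2048,  # skip small weight tensors (fewer elements than this)
        palett_max_mem=0.5,     # cap the fraction of GPU memory DKM may use (assumed)
    )
)
palettizer = DKMPalettizer(unet, config)
unet = palettizer.prepare(inplace=True)
```

Even so, I still hit the OOM on the first step with batch size one.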
@pulkital Any thoughts?