sd-scripts
Add Optimi (more fused-backward-pass optimizers and new features)
https://optimi.benjaminwarner.dev/
New optimizers: AdamW, Lion, Ranger, and StableAdamW from Optimi.
New features:
Low-precision training with Kahan summation (used automatically with the above optimizers; see the first sketch below).
Gradient release (the same idea as the fused backward pass), used automatically with the above optimizers; see the second sketch below.
Fully decoupled weight decay (decouples weight decay from the learning rate).
Optimizer accumulation: gradient accumulation reduces training memory by splitting a batch into micro-batches and accumulating micro-batch gradients into the larger batch. Gradient release reduces training memory by limiting gradients to one layer at any given time. Optimizer accumulation unifies these two disparate approaches by accumulating gradients directly into optimizer states while performing gradient release (see the second sketch below).
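A minimal usage sketch of the low-precision/Kahan-summation and fully decoupled weight decay features, assuming the `kahan_sum` and `decouple_lr` constructor parameters as described in the Optimi docs (these belong to the Optimi library itself, not to sd-scripts's command-line options; the toy model is illustrative only):

```python
import torch
from torch import nn
from optimi import StableAdamW

# Model cast to bfloat16: Kahan summation keeps low-precision training
# close to full-precision results. It is enabled automatically for
# low-precision parameters when kahan_sum is left at its default.
model = nn.Linear(20, 1, dtype=torch.bfloat16)

# decouple_lr=True enables fully decoupled weight decay, so weight decay
# is no longer scaled by the learning rate.
opt = StableAdamW(model.parameters(), lr=1e-3,
                  weight_decay=1e-5, decouple_lr=True)

# Standard training step.
loss = model(torch.randn(20, dtype=torch.bfloat16)).sum()
loss.backward()
opt.step()
opt.zero_grad()
```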
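And a sketch of gradient release combined with optimizer accumulation, assuming Optimi's `prepare_for_gradient_release`/`remove_gradient_release` helpers, the `gradient_release=True` constructor flag, and the `optimizer_accumulation` attribute from its docs; the model and stand-in dataloader are illustrative only:

```python
import torch
from torch import nn
from optimi import AdamW, prepare_for_gradient_release, remove_gradient_release

model = nn.Linear(20, 1, dtype=torch.bfloat16)

# gradient_release=True lets backward() apply the optimizer step layer by
# layer, so full-model gradients never need to be stored at once.
opt = AdamW(model.parameters(), lr=1e-3, gradient_release=True)
prepare_for_gradient_release(model, opt)

accumulation_steps = 4
dataloader = [torch.randn(20, dtype=torch.bfloat16) for _ in range(8)]  # stand-in data

for idx, batch in enumerate(dataloader):
    # While True, gradients are accumulated directly into the optimizer
    # states; on every fourth micro-batch the parameters are updated.
    opt.optimizer_accumulation = (idx + 1) % accumulation_steps != 0

    loss = model(batch).sum()
    loss.backward()  # performs the accumulation or the optimizer step

    # step() and zero_grad() become no-ops and can stay in place for
    # compatibility with an existing training loop.
    opt.step()
    opt.zero_grad()

# Optionally remove the gradient release hooks when training is done.
remove_gradient_release(model)
```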