
[Feat]: Implementation of Autoclip

Open DanPli opened this issue 1 year ago • 4 comments

Describe your use-case.

Please consider implementing AutoClip (Adaptive Gradient Clipping) as another option alongside the flat max_grad_norm. You can find the code and more information at https://github.com/pseeth/autoclip
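For reference, a minimal sketch of the AutoClip idea as described in the linked repository: record the gradient norm observed at every step and clip each step to a chosen percentile of that history. The function names and the 10th-percentile default below are illustrative, not the repository's exact code.

import numpy as np
import torch

def total_grad_norm(model):
    # total L2 norm over all parameter gradients
    norms = [p.grad.detach().norm() ** 2 for p in model.parameters() if p.grad is not None]
    return torch.stack(norms).sum().sqrt().item()

grad_history = []

def autoclip_gradients(model, clip_percentile=10):
    # call after loss.backward() and before optimizer.step()
    grad_history.append(total_grad_norm(model))
    clip_value = np.percentile(grad_history, clip_percentile)
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_value)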

What would you like to see as a solution?

Implementation of AutoClip as an option, but pizza would be nice too.

Have you considered alternatives? List them here.

no.

DanPli avatar Dec 12 '24 13:12 DanPli

Hello, I'd like to work on the project by fixing this issue. Can you assign this to me? Thanks

Riccardo231 avatar Dec 28 '24 20:12 Riccardo231

@ricor07 done. Most of the discussion about new features happens on the Discord server, so if you have any questions, feel free to join. https://discord.com/invite/KwgcQd5scF

Nerogar avatar Dec 28 '24 22:12 Nerogar

FWIW, I hacked a quick implementation of this into my local branch, and it works quite nicely.

import torch

# class BaseModelSetup
    def __init__( # ...
        # ...
        # per-module rolling history of observed gradient norms
        self.grad_history = {}

    def autoclip(self, model, clip_percentile=0.1):
        # overridden per setup type; the base implementation is a no-op
        pass

    def _autoclip(self, modules, clip_percentile):
        for name, module in modules.items():
            if name not in self.grad_history:
                self.grad_history[name] = []
            params = [p for p in module.parameters() if p.grad is not None]
            if not params:
                continue
            # total L2 norm of this module's gradients
            grad_obs = torch.stack([p.grad.detach().norm() ** 2.0 for p in params]).sum().sqrt().item()

            # keep only the most recent 200 observations
            self.grad_history[name].append(grad_obs)
            self.grad_history[name] = self.grad_history[name][-200:]
            # clip to the requested percentile of the observed history
            clip_value = torch.tensor(self.grad_history[name]).quantile(clip_percentile).item()
            torch.nn.utils.clip_grad_norm_(params, clip_value)

# class StableDiffusionLoRASetup
    def autoclip(self, model, clip_percentile=0.1):
        # gather the LoRA wrapper modules attached to the model and clip each one
        # against its own gradient-norm history
        modules = {name: module for name, module in model.__dict__.items() if isinstance(module, LoRAModuleWrapper)}
        self._autoclip(modules, clip_percentile)

# class GenericTrainer
    # ...
    # clip accumulated gradients only on optimizer update steps
    if self.__is_update_step(train_progress):
        self.model_setup.autoclip(self.model, clip_percentile=0.1)

It'd need to be implemented per setup type, and the percentile would need to be configurable, but I am quite pleased with the results I got from it. The 200-sample history window was an arbitrary choice as well, but I don't expect it would make much of a difference overall.
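If it were exposed as a setting, something like the following could carry the knobs. The field names and defaults here are purely illustrative and are not part of OneTrainer's actual config classes.

from dataclasses import dataclass

@dataclass
class AutoclipSettings:
    enabled: bool = False
    clip_percentile: float = 0.1  # quantile of the observed grad-norm history to clip to
    history_size: int = 200       # rolling window of gradient-norm observations

_autoclip would then read its percentile and window size from these settings instead of the hard-coded 0.1 and 200.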

I experimented with tracking the percentiles over the entire model, split by module, split by layer, and even split by parameter. I actually think that splitting by layer produces better results overall, though for LoRAs there are some interesting implications: each layer has two params, and the up param's grads start at zero, which dampens the overall effect. However, per-layer calculations substantially slow down the training loop, so for my purposes I settled on per-module clipping. It might be interesting to omit observations under a certain value (i.e., close to zero) to avoid unduly dampening the percentile during the earliest training steps, as sketched below.
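A minimal sketch of that last idea, filtering out near-zero observations before taking the quantile; the helper name and the 1e-6 threshold are assumptions, not tuned values.

import torch

def percentile_clip_value(history, clip_percentile, min_norm=1e-6):
    # Drop observations that are effectively zero (e.g. from LoRA "up" weights
    # early in training) so they don't drag the percentile down.
    filtered = [g for g in history if g > min_norm]
    if not filtered:
        return None  # nothing meaningful observed yet; caller can skip clipping
    return torch.tensor(filtered).quantile(clip_percentile).item()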

cheald avatar Dec 30 '24 18:12 cheald

> Hello, I'd like to work on the project by fixing this issue. Can you assign this to me? Thanks

@ricor07 Following up on this: are there any updates? Otherwise we will have to consider it dropped by you.

O-J1 avatar Feb 03 '25 04:02 O-J1