
Fixed rslora scaling in lora_manager

Open · TheCodeWrangler opened this issue 9 months ago · 1 comment

Addresses the issue reported in https://github.com/NVIDIA/TensorRT-LLM/issues/1668.

Weights trained with rsLoRA need a different scale: rsLoRA normalizes by the square root of the rank (alpha / sqrt(r)) rather than by the rank itself (alpha / r). The code previously always normalized by the rank, regardless of the "use_rslora" flag in the Hugging Face adapter_config.json file.
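As a minimal sketch of the intended behavior (not the exact lora_manager code), the scale can be chosen from the PEFT adapter_config.json fields like this:

```python
import json
import math


def compute_lora_scale(adapter_config_path: str) -> float:
    """Return the LoRA weight scale, honoring the rsLoRA convention.

    Standard LoRA scales the low-rank update by alpha / r; rsLoRA
    (rank-stabilized LoRA) scales by alpha / sqrt(r). Field names follow
    the Hugging Face PEFT adapter_config.json schema.
    """
    with open(adapter_config_path) as f:
        cfg = json.load(f)

    alpha = cfg["lora_alpha"]
    rank = cfg["r"]

    if cfg.get("use_rslora", False):
        # rsLoRA: normalize by sqrt(rank) so the effective scale does not
        # shrink as quickly when larger ranks are used.
        return alpha / math.sqrt(rank)
    # Classic LoRA: normalize by the rank itself.
    return alpha / rank
```

The same branch on "use_rslora" is what this change applies when scaling the adapter weights during conversion.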

Scaling has also been updated in examples/hf_lora_convert.py

TheCodeWrangler · May 24, 2024