
Conversion of "hf_lora_convert.py" does not account for "lora_alpha"

Open TheCodeWrangler opened this issue 10 months ago • 4 comments

I am seeing degraded performance using LoRA in my TensorRT-LLM model, and I suspect that the "lora_alpha" value in my "adapter_config.json" is not being used when the weights are converted for the tensorrt_llm inputs.

I have been looking through the conversion script: https://github.com/NVIDIA/TensorRT-LLM/blob/main/examples/hf_lora_convert.py#L76

Should I apply this alpha scaling myself before loading the lora_weights into my trtllm model?
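For context, in the standard LoRA formulation the adapter's contribution to a layer is scaled by `lora_alpha / r`, i.e. the update is `W + (alpha / r) * B @ A`. If a converter were to ignore `lora_alpha`, one workaround would be to fold that factor into the B matrix before conversion. Below is a minimal sketch of that idea; the function name and shapes are hypothetical, not part of the TensorRT-LLM API:

```python
import numpy as np

def fold_lora_scaling(lora_a, lora_b, lora_alpha, rank):
    """Fold the LoRA scaling factor (alpha / r) into the B matrix.

    The effective update (alpha / r) * B @ A is then reproduced by
    the unscaled product B_scaled @ A, so a converter that ignores
    lora_alpha still yields the correct delta.
    """
    scaling = lora_alpha / rank
    return lora_a, lora_b * scaling

# Hypothetical example: a rank-8 adapter on a 16-dim layer,
# with alpha=16 (so the effective update is doubled).
rank, dim = 8, 16
A = np.ones((rank, dim))
B = np.ones((dim, rank))
A_out, B_scaled = fold_lora_scaling(A, B, lora_alpha=16, rank=rank)

# The folded product equals the explicitly scaled update.
delta_folded = B_scaled @ A_out
delta_explicit = (16 / rank) * (B @ A)
assert np.allclose(delta_folded, delta_explicit)
```

Whether this is needed depends on whether hf_lora_convert.py already reads "lora_alpha" from adapter_config.json; the question above is exactly about confirming that.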

TheCodeWrangler · Apr 19 '24 16:04