[New Feature] finetune local HuggingFace model
In some circumstances, Hugging Face cannot be reached for direct downloads, e.g. behind a firewall. Would it be feasible to add an option to load a local Hugging Face model rather than downloading from the Hub? In "examples/finetune.py", the model is loaded as "model = AutoModel.get_model(model_args)"; if it also supported a local path as well as a model name, that would be highly useful. Thanks in advance!
Hi, LMFlow and Hugging Face Transformers support finetuning local models natively. You can simply replace the model name with your local path. Hope this helps. Thank you!
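To illustrate the point above with plain Transformers (independent of LMFlow's wrapper): `from_pretrained` treats an argument that is an existing directory as a local checkpoint and never contacts the Hub, especially with `local_files_only=True`. The sketch below builds a tiny model from a config (so nothing is downloaded), saves it to a local directory, and reloads it by path. The tiny config values are arbitrary, chosen only to keep the model small.

```python
import tempfile
from transformers import AutoModel, BertConfig, BertModel

# Build a tiny model purely from a config -- no Hub access needed.
# The sizes here are arbitrary placeholders for the sake of the demo.
config = BertConfig(
    vocab_size=100,
    hidden_size=32,
    num_hidden_layers=1,
    num_attention_heads=2,
    intermediate_size=64,
)
model = BertModel(config)

with tempfile.TemporaryDirectory() as local_dir:
    # Save a checkpoint to a local directory, as you would have on disk.
    model.save_pretrained(local_dir)
    # Reload by path; local_files_only=True guarantees no network access.
    reloaded = AutoModel.from_pretrained(local_dir, local_files_only=True)
    print(type(reloaded).__name__)
```

In the same way, passing a directory path (instead of a Hub model ID) to the model argument of "examples/finetune.py" should load the checkpoint from disk.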
This issue has been marked as stale because it has not had recent activity. If you think this still needs to be addressed, please feel free to reopen this issue. Thanks!
thank you!