Luke
I am having the same issue with LLMChain. It seems like it is not running the parser function at all.
Thank you, everyone! This is very much appreciated.
@WeichenXu123 The PEFT library saves two files:
- A JSON config file that stores all of the adapter's settings, including which base model to pull down:
```
{ "auto_mapping": null, "base_model_name_or_path": ...
```
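For context, this is roughly how those files get produced with the `peft` library's LoRA workflow (a minimal sketch; the base model checkpoint and output directory here are illustrative, not the ones from this thread):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative base model; swap in whatever checkpoint you are fine-tuning.
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# Wrap the base model with a LoRA adapter.
lora_config = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
peft_model = get_peft_model(base_model, lora_config)

# ... fine-tune peft_model ...

# save_pretrained writes only the adapter artifacts:
#   adapter_config.json   (the JSON config above, incl. base_model_name_or_path)
#   adapter_model.bin / .safetensors  (the adapter weights)
peft_model.save_pretrained("my-lora-adapter")
```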
Is there a way to have it save just the Python function and the adapter artifacts?
Thank you! @WeichenXu123
@WeichenXu123 The PEFT model does not inherit from the transformers model classes; it's more of a wrapper. This causes issues because it cannot be added to a Hugging Face pipeline and throws errors...
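A quick way to see the wrapper behaviour (a sketch assuming the `peft` and `transformers` APIs at the time; the model name and adapter path are illustrative):

```python
from transformers import AutoModelForCausalLM, PreTrainedModel
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

# Load a previously saved adapter on top of the base model.
model = PeftModel.from_pretrained(base, "my-lora-adapter")

# The wrapper is not a transformers PreTrainedModel subclass...
print(isinstance(model, PreTrainedModel))  # False

# ...which is why handing it straight to transformers.pipeline(...)
# runs into the errors described above.
```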
@WeichenXu123 We ended up leveraging PyFunc Models, as we couldn't find a pattern that was generic enough to cover most base models. Thank you for continuing the thread, though.
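For anyone landing here later, this is roughly the shape of that PyFunc wrapper (a sketch, not the exact implementation; the class name, base model checkpoint, and adapter directory are illustrative):

```python
import mlflow
from mlflow.pyfunc import PythonModel


class PeftAdapterModel(PythonModel):
    """Loads the base model and applies the PEFT adapter at serving time."""

    def load_context(self, context):
        from transformers import AutoModelForCausalLM, AutoTokenizer
        from peft import PeftModel

        # "adapter" is a logged artifact dir containing adapter_config.json + weights.
        adapter_path = context.artifacts["adapter"]
        base = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")
        self.tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
        self.model = PeftModel.from_pretrained(base, adapter_path)

    def predict(self, context, model_input):
        # model_input arrives as a pandas DataFrame with a "prompt" column.
        prompts = model_input["prompt"].tolist()
        inputs = self.tokenizer(prompts, return_tensors="pt", padding=True)
        outputs = self.model.generate(**inputs, max_new_tokens=64)
        return self.tokenizer.batch_decode(outputs, skip_special_tokens=True)


with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="peft_model",
        python_model=PeftAdapterModel(),
        artifacts={"adapter": "my-lora-adapter"},  # local dir with the saved adapter
    )
```

The upside of this pattern is that only the Python wrapper and the adapter artifacts get logged, while the full base model is pulled down at load time from its hub checkpoint.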