santacoder-finetuning
Issue running inference on Huggingface after model upload
Followed the instructions to create a new model repo and add the required files via Git. When I test the uploaded model via the HF sandbox, I get the following error:
```
Loading umm-maybe/StackStar_Santa requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
```
It's unclear which configuration file the error refers to. I did notice that config.json references the parent model (santacoder) instead of mine, so I changed that. I also ran configuration_gpt2_mq.py locally, which does nothing visible. There is no trust_remote_code option in either of those files; from what I understand, it's an argument to AutoModelForCausalLM.from_pretrained when running local inference. It's not clear how to set this option for hosted inference via the Hugging Face Hub.
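For local inference, the flag is indeed a keyword argument to `from_pretrained`, not something set in config.json. A minimal sketch (the repo id is the one from the error message above; the `load_model` helper name is mine, and loading will download the model, so review the repo's custom code first):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def load_model(repo_id: str):
    # trust_remote_code=True tells transformers it may execute the repo's
    # custom configuration_*.py / modeling_*.py files. Only set this after
    # reading that code, since it runs arbitrary Python on your machine.
    tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
    return tokenizer, model


# Example usage (commented out because it triggers a large download):
# tokenizer, model = load_model("umm-maybe/StackStar_Santa")
```

As far as I know, the hosted inference widget does not offer a way to pass this flag from the browser, which is presumably why the sandbox shows the error for models with custom code.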