stanford_alpaca
Any plans for using GPT-4 for self-instruct? Or using larger llama models?
I'm just curious
Hi, I have tried converting the self-instruct code and prompts to run with the ChatCompletion API (supporting turbo and gpt-4), without affecting the original Davinci generation. It seems to work fine. I'm happy to open a pull request if the authors are also looking into this @Tiiiger
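In case it helps, here is a minimal sketch of the kind of conversion I mean: wrapping the existing Davinci-style completion prompt into the message list the ChatCompletion endpoint expects. The function name and the system message are my own illustration, not code from this repo.

```python
# Sketch: adapt a plain completion prompt to the ChatCompletion message format.
# The helper name and system prompt below are illustrative, not from the repo.

def to_chat_messages(prompt: str) -> list:
    """Wrap a Davinci-style completion prompt into chat messages."""
    return [
        {
            "role": "system",
            "content": "You are a helpful assistant that generates "
                       "instruction-following tasks.",
        },
        {"role": "user", "content": prompt},
    ]

# A real call would then pass these messages to the chat endpoint, e.g.
# openai.ChatCompletion.create(model="gpt-4", messages=to_chat_messages(prompt)),
# while the Davinci path keeps using the original completion prompt unchanged.
messages = to_chat_messages("Come up with 20 diverse task instructions.")
print(messages[1]["content"])
```

The point is that the original prompt text can stay as-is; only the request shape changes, so the Davinci generation path is untouched.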
Could you tell me how to use GPT-4o for self-instruct?