Ankit Pal
Thank you for your response! That's a nice idea to include a section on the main README that links to Promptify and similar tools, which would undoubtedly increase their visibility...
I am wondering if there is a solution for this. I am using an API provider other than OpenAI, with the OpenAI schema, but I am getting the same error 'No support...
@daniel-furman Awesome, is there a way I can define a custom template? I am using your forked lm_eval repo.
> For me, adding `--use_deepspeed` to the `accelerate` command avoids this error on a single GPU.
>
> ```
> accelerate launch --use_deepspeed -m axolotl.cli.train ...
> ```
>
> The...
> I am a bit opposed to the auto install being at the root; while it covers the "common" use case, there is a growing variety of setups (ROCm, MPS,...
> The Gemma config change and the auto setup should be two different PRs, so they don't go in together.
>
> Also, please add some documentation that this autoconfig is only...
It's not working for the 70B model :/
> Thanks!
>
> Can you clarify the commands you are running again, and also what transformers version is being used? I did not see `parallelize=True` in the command you...
Yes, because changing the key, domain, etc. would re-initialize the prompter, model, etc. For a single model you can initialize things once and then call `pipe.fit()` multiple...
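The initialize-once pattern described above can be sketched roughly as follows. Note that `Pipeline` and its constructor arguments here are simplified stand-ins for illustration, not the exact Promptify API:

```python
# Hypothetical sketch of the "initialize once, call fit() repeatedly" pattern.
# The class below is a stand-in, not the real Promptify Pipeline.

class Pipeline:
    def __init__(self, api_key: str, domain: str):
        # Expensive setup (prompter, model client, templates) happens once here.
        self.api_key = api_key
        self.domain = domain
        self.calls = 0

    def fit(self, text: str) -> dict:
        # Each fit() call reuses the already-initialized state instead of
        # re-creating the prompter and model every time.
        self.calls += 1
        return {"domain": self.domain, "input": text, "call": self.calls}

# Initialize once for a given key/domain...
pipe = Pipeline(api_key="sk-...", domain="medical")

# ...then call fit() as many times as needed with different inputs.
first = pipe.fit("Patient reports a headache.")
second = pipe.fit("No known allergies.")
```

Changing the key or domain would require constructing a new `Pipeline`, which is why reuse only makes sense while those stay fixed.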
> Hi, many times while running this model in PyCharm the model just freezes and doesn't return the output. It works sometimes after restarting the kernel twice or thrice. Sometimes...