PromptEngineer
@imjwang Can you please check #180? It is going to be the foundation of the new codebase. Do you think it will be possible to combine this PR with that one?
@imjwang I tested the PR and I think we will go ahead and merge it. Can you please update the README as well? We will have to add this functionality...
@imjwang thanks for adding this. I was going to do it. I will test your PR.
@imjwang @r0103 @Boguslaw-D this has been merged. @imjwang thank you for the PR.
Since Llama 2 is probably not going to be used much anymore, I will make the Llama 3 prompt template the default.
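For reference, a minimal sketch of what a Llama 3 style template could look like. The special tokens follow Meta's published chat format; the variable names and system prompt here are placeholders, not necessarily what will end up in the template utilities:

```python
# Sketch of a Llama 3 chat-format prompt template (placeholder names, not the final code).
from langchain.prompts import PromptTemplate

system_prompt = "Use the provided context to answer the question."

# Llama 3 uses header/eot tokens instead of Llama 2's [INST] ... [/INST] markers.
llama3_template = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    + system_prompt
    + "<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
    "Context: {context}\n\nQuestion: {question}<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

prompt = PromptTemplate(input_variables=["context", "question"], template=llama3_template)
```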
@ChrisBNECBT I haven't encountered it before. Are you creating a new virtual env every time, or is it the same virtual env?
Should be fixed now. I think there was a bug in one of the recent PRs that I somehow missed.
@N1h1lv5 I hope the issue is solved with this new update. Can you please confirm?
This will really depend on the hardware you run it on. Fine-tuning is not going to be very helpful if you are looking to retrieve specific information.
Change it to a model that supports 8k or 16k tokens, such as Zephyr or the Yi series. You will also need to change the max tokens [here](https://github.com/PromtEngineer/localGPT/blob/d4df6d06dfffdb84cad9eea97ee0a0b4ede99ae8/constants.py#L32).
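Roughly, the change in `constants.py` would look like the sketch below. The model ID and variable names are illustrative (check the file in your checkout for the actual names and pick a context size the model actually supports):

```python
# constants.py (sketch) -- switch to a longer-context model and raise the token limits.
# Variable names and values are examples only; verify against your copy of constants.py.
MODEL_ID = "HuggingFaceH4/zephyr-7b-beta"  # example of a model with a larger context window
MODEL_BASENAME = None                      # set this if you use a quantized GGUF/GPTQ file

CONTEXT_WINDOW_SIZE = 8192                 # match the model's supported context length
MAX_NEW_TOKENS = int(CONTEXT_WINDOW_SIZE / 4)  # leave room in the window for the prompt/context
```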