Aswanth C Manoj
> Check out the Dolphin-llama3 version that just dropped; it fixes many of the token-stop issues I was seeing in VS Code, and they probably fixed other things as well. >...
> > I deployed the llama3-8b-instruct model behind vLLM's OpenAI-compatible API. Once the service is up, every request keeps generating until it reaches the maximum context length before it stops. Do I need to configure stop words somewhere in tokenizer_config.json? The request is as follows: { "model": "Meta-Llama-3-8B-Instruct", "messages": [ {"role": "user", "content": "Please tell me a 100 words children's story. Please reply in Chinese."} ], "temperature": 0.1 } The API response is: {...
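One common cause of this symptom is that Llama-3-Instruct ends each turn with the `<|eot_id|>` token rather than the tokenizer's default eos token, so the server never sees a stop condition. A minimal sketch of a workaround, assuming a vLLM OpenAI-compatible server at `localhost:8000` (the address and the network call are illustrative, not from the thread), is to declare the token as a stop string in the request itself:

```python
import json

# Sketch: pass "<|eot_id|>" (Llama-3's end-of-turn token) as an explicit stop
# string, so generation halts instead of running to the maximum context length.
payload = {
    "model": "Meta-Llama-3-8B-Instruct",
    "messages": [
        {"role": "user", "content": "Please tell me a 100 words children's story."}
    ],
    "temperature": 0.1,
    # Standard OpenAI-style "stop" parameter; vLLM's server accepts it as well.
    "stop": ["<|eot_id|>"],
}

body = json.dumps(payload)
# A POST of this body to http://localhost:8000/v1/chat/completions (assumed
# server address; not executed here) should then stop at the end-of-turn token.
```

Alternatively, fixing `eos_token` in the served model's tokenizer/generation config addresses the same issue server-side for every request.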
> You would be better off training in the standard Alpaca format from the LLaMA-3 pretrained weights, using the new LLaMA-3 bos/eos tokens; that should work. Thank you, I will check it out.
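As a rough sketch of what "Alpaca format with the LLaMA-3 bos/eos tokens" might look like: the token strings below (`<|begin_of_text|>` / `<|end_of_text|>`) are the base Llama-3 bos/eos, and the template wording is the commonly used Alpaca one; the function name is illustrative, not from the thread.

```python
# Sketch: wrap a standard Alpaca-style sample with Llama-3's special tokens.
BOS = "<|begin_of_text|>"  # Llama-3 bos token string
EOS = "<|end_of_text|>"    # Llama-3 eos token string (base model)

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{response}"
)

def format_sample(instruction: str, response: str) -> str:
    """Return one training example framed with Llama-3 bos/eos tokens."""
    return BOS + ALPACA_TEMPLATE.format(
        instruction=instruction, response=response
    ) + EOS

sample = format_sample("Name the capital of France.", "Paris.")
```

In practice, most training frameworks add the bos token automatically during tokenization, so check whether your pipeline already does before prepending it in the text.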
> What's the difference between llama3-8b and llama3-8b-instruct? If I want to handle general text-generation tasks, which one is better? llama3-8B is the base model, which...
> I want to labelize JSON objects; which is better for my task, the 8b or the 8b-instruct? Could you please clarify what you mean by "labelize"? Are you trying...
Does vectorizing the Python code make it easier for the LLM to understand the syntax and semantics of the code, and potentially provide better suggestions for error fixes?
@unclecode Thanks for your update. I've checked it out and it's working great, but I've identified two main areas for improvement in the code: 1. Currently the code successfully extracts...
@unclecode I'd be happy to help implement these language-detection and multi-language code-extraction features. To ensure I maintain consistency with the project, could you guide me on things like -...
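The "language detection + multi-language code extraction" idea above could be sketched as follows: pull fenced code blocks out of markdown and group them by the language tag on the opening fence. The function name, regex, and overall approach are assumptions for illustration, not the project's actual code.

```python
import re

# Sketch: find fenced blocks and key them by the fence's language tag;
# blocks with no tag fall into an "unknown" bucket for later detection.
FENCE_RE = re.compile(r"```(\w*)\n(.*?)```", re.DOTALL)

def extract_code_blocks(markdown: str) -> dict[str, list[str]]:
    """Map each fence language tag (or 'unknown') to its code bodies."""
    blocks: dict[str, list[str]] = {}
    for lang, body in FENCE_RE.findall(markdown):
        blocks.setdefault(lang or "unknown", []).append(body.strip())
    return blocks

F = "`" * 3  # build fences without embedding literal ``` in this block
doc = f"{F}python\nprint('hi')\n{F}\n\n{F}js\nconsole.log('hi')\n{F}\n\n{F}\nplain\n{F}"
result = extract_code_blocks(doc)
# result groups the three blocks under "python", "js", and "unknown"
```

A proper implementation would likely use a markdown parser rather than a regex (regexes miss nested or indented fences), and run a heuristic language detector over the "unknown" bucket.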