Lian Junhong

Results: 39 comments by Lian Junhong

> Yes, and sorry about that. To support more frameworks, the new fastNLP underwent a major architectural overhaul, so there is no simple mapping between the old and new interfaces. If you have programs written against the old API, we suggest running them with fastNLP versions before 0.7.1; for newly written code, we recommend the newer fastNLP interface. With the new fastNLP you can switch from single-GPU to multi-GPU without modifying any code, or migrate between frameworks with only small code changes.

In that case, if I want to use fastSUM, is it recommended to use the 0.6.0 API, or is there a plan to migrate and refactor it?

Try this: https://github.com/tloen/alpaca-lora/issues/14#issuecomment-1471263165

> I couldn't reproduce this on my end, but after sleeping on it I think it might have to do with Huggingface Accelerate. Will investigate today.
>
> What hardware...

> If you are using a V100 this might be of interest: [huggingface/transformers#21955 (comment)](https://github.com/huggingface/transformers/pull/21955#issuecomment-1458250372) tweaking the `llm_int8_threshold` should maybe help. Also make sure you are using one of the latest...
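
For context, `llm_int8_threshold` is exposed through transformers' `BitsAndBytesConfig` when a model is loaded in 8-bit. A minimal sketch, with a placeholder checkpoint and the library's default threshold value:

```python
# Minimal sketch, not the poster's exact setup: loading a model in 8-bit
# and tweaking llm_int8_threshold. Checkpoint and value are placeholders.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,  # library default; the linked comment discusses adjusting it
)
model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # placeholder checkpoint
    quantization_config=quant_config,
    device_map="auto",
)
```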

Unfortunately, I have forgotten the parameter settings from when my problem occurred, because I went on to try some alternatives, such as modifying `num_beams`. I'm sure your solution works because it's...
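
As a hedged illustration of the `num_beams` knob mentioned above (the checkpoint and prompt below are placeholders, not the original setup):

```python
# Illustrative only: demonstrating num_beams on a small checkpoint so the
# snippet runs anywhere; not the configuration from the thread.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The quick brown fox", return_tensors="pt")
output = model.generate(**inputs, num_beams=4, max_new_tokens=20)  # beam search with 4 beams
print(tok.decode(output[0], skip_special_tokens=True))
```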

> @benob You could do something like below
>
> ```python
> def evaluate(instructions, input=None):
>     prompts = [generate_prompt(instruction) for instruction in instructions]
>     encodings = tokenizer(prompts, return_tensors="pt", padding=True).to('cuda')
>     ...
> ```
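
Filling in the truncated part, a minimal sketch of how such a batched `evaluate` could finish; `generate_prompt`, `tokenizer`, and `model` are assumed to exist as in the alpaca-lora script, and the generation settings are illustrative:

```python
# Hedged sketch completing the batched-evaluation idea quoted above.
import torch

def evaluate(instructions, input=None):
    prompts = [generate_prompt(instruction) for instruction in instructions]
    # Batch padding needs a pad token; LLaMA's tokenizer does not set one by
    # default, and left padding is usually preferred for decoder-only generation.
    tokenizer.padding_side = "left"
    encodings = tokenizer(prompts, return_tensors="pt", padding=True).to("cuda")
    with torch.no_grad():
        outputs = model.generate(**encodings, max_new_tokens=256)
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```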

> Beyond the system message, the temperature and max tokens are [two of many options](https://platform.openai.com/docs/api-reference/chat) developers have to influence the output of the chat models. For temperature, higher values like...
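
A hedged sketch of passing those two options to the chat endpoint; the model choice and messages are illustrative, and it uses the current `openai` Python client rather than whatever the quoted thread used:

```python
# Sketch: temperature and max_tokens on the chat completions endpoint.
# Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Summarize what temperature controls."},
    ],
    temperature=0.2,  # lower values -> more focused, deterministic output
    max_tokens=64,    # hard cap on the length of the completion
)
print(resp.choices[0].message.content)
```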

> > I tried to tune down all of the parameters like MICRO_BATCH_SIZE, BATCH_SIZE, EPOCH, but none seems to help. I wonder what else I can do if I want...
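
For reference, alpaca-lora-style training scripts typically tie these knobs together through gradient accumulation; a sketch with placeholder values:

```python
# Sketch of how these knobs usually relate; values are placeholders.
# Lowering MICRO_BATCH_SIZE cuts peak VRAM per step, while gradient
# accumulation keeps the effective batch size unchanged.
MICRO_BATCH_SIZE = 4    # examples per forward/backward pass (drives memory)
BATCH_SIZE = 128        # effective batch size the optimizer sees
GRADIENT_ACCUMULATION_STEPS = BATCH_SIZE // MICRO_BATCH_SIZE
```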