Ma-Dan

Results 23 comments of Ma-Dan

https://github.com/Ma-Dan/QAnything/blob/cpu/%E6%9C%AC%E5%9C%B0CPU%E9%83%A8%E7%BD%B2%E5%92%8C%E8%B0%83%E8%AF%95%E6%96%B9%E6%B3%95.txt Try this method of mine: milvus and mysql still run in Docker, while the 3 models and the frontend/backend services all run locally.

I spent a whole morning trying it and then switched back to GPU.

> > > > The fix is to wrap the tokenizer call in `with torch.autocast("cuda"):`, e.g.:
> > > >
> > > > ```python
> > > > with torch.autocast("cuda"):
> > > >     features = tokenizer(batch_text, padding=True, return_tensors="pt", truncation=True, max_length=args.max_length)
> > > >     input_ids = features['input_ids'].to("cuda")
> > > >     attention_mask = features['attention_mask'].to("cuda")
> > > > ```
> >
> > Fixed in generate.py

Besides these 3 lines, the `model.generate` and `tokenizer.batch_decode` calls must also be placed under `with torch.autocast("cuda"):`.
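A minimal sketch of the complete pattern being described, with tokenization, `model.generate`, and `tokenizer.batch_decode` all inside one autocast context. The `generate_batch` helper name and the model/tokenizer objects are assumptions for illustration, not part of the QAnything codebase; the device fallback to CPU is also an addition so the snippet runs without a GPU.

```python
import torch

# Hypothetical helper illustrating the fix: keep tokenization, generation,
# and decoding under the same torch.autocast context so mixed-precision
# dtypes stay consistent end to end.
device = "cuda" if torch.cuda.is_available() else "cpu"

def generate_batch(model, tokenizer, batch_text, max_length=512):
    with torch.autocast(device):
        # Tokenize inside the autocast block, as the comment thread advises
        features = tokenizer(batch_text, padding=True, return_tensors="pt",
                             truncation=True, max_length=max_length)
        input_ids = features["input_ids"].to(device)
        attention_mask = features["attention_mask"].to(device)
        # generate and batch_decode must also run under autocast
        outputs = model.generate(input_ids=input_ids,
                                 attention_mask=attention_mask)
        return tokenizer.batch_decode(outputs, skip_special_tokens=True)
```

Any HuggingFace-style model/tokenizer pair that exposes `generate` and `batch_decode` would slot into this shape.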