ChatGLM-6B
[Help] In the API deployment, torch_gc runs on every request; does this affect performance?
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
The torch_gc method is executed once on every request.
Expected Behavior
No response
Steps To Reproduce
```python
async def create_item(request: Request):
    global model, tokenizer
    json_post_raw = await request.json()
    json_post = json.dumps(json_post_raw)
    json_post_list = json.loads(json_post)
    prompt = json_post_list.get('prompt')
    history = json_post_list.get('history')
    max_length = json_post_list.get('max_length')
    top_p = json_post_list.get('top_p')
    temperature = json_post_list.get('temperature')
    response, history = model.chat(tokenizer,
                                   prompt,
                                   history=history,
                                   max_length=max_length if max_length else 2048,
                                   top_p=top_p if top_p else 0.7,
                                   temperature=temperature if temperature else 0.95)
    now = datetime.datetime.now()
    time = now.strftime("%Y-%m-%d %H:%M:%S")
    answer = {
        "response": response,
        "history": history,
        "status": 200,
        "time": time
    }
    log = "[" + time + "] " + '", prompt:"' + prompt + '", response:"' + repr(response) + '"'
    print(log)
    torch_gc()
    return answer
```
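For context, the `torch_gc` helper being called above is roughly the one below (a sketch: the `CUDA_DEVICE` string and the import guard are assumptions added so the snippet loads even without PyTorch installed). Note that `empty_cache()` cannot free tensors that are still referenced; it only returns cached, unused blocks to the driver.

```python
try:
    import torch
except ImportError:  # assumption: allow this sketch to load without PyTorch
    torch = None

CUDA_DEVICE = "cuda:0"  # assumed target device

def torch_gc():
    """Release cached GPU blocks back to the driver.

    Returns True if a collection actually ran, False otherwise.
    """
    if torch is not None and torch.cuda.is_available():
        with torch.cuda.device(CUDA_DEVICE):
            torch.cuda.empty_cache()   # return cached, unused blocks to the driver
            torch.cuda.ipc_collect()   # clean up CUDA IPC shared-memory handles
        return True
    return False
```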
Environment
- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA Support (`python -c "import torch; print(torch.cuda.is_available())"`) :
Anything else?
No response
Change it so that collection only runs when GPU memory is about to run out.
liaoweiguo
How should this be changed? How do you make it collect only when GPU memory is nearly exhausted? And if it is left as is, where does the performance impact come from?
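One way to sketch the "collect only when memory is nearly full" idea is to compare reserved memory against the device total before calling `empty_cache()`. Everything below is an assumption, not repo code: the helper name `maybe_torch_gc`, the 90% threshold, and the import guard. As for the cost of the current per-request call: `empty_cache()` synchronizes with the CUDA caching allocator and hands cached blocks back to the driver, so subsequent allocations must go through `cudaMalloc` again instead of reusing the cache, which adds latency to the next requests.

```python
try:
    import torch
except ImportError:  # assumption: allow this sketch to load without PyTorch
    torch = None

GC_THRESHOLD = 0.9  # assumed: only collect once >90% of device memory is reserved

def maybe_torch_gc(device=0, threshold=GC_THRESHOLD):
    """Run empty_cache() only when reserved memory nears the device limit.

    Returns True if a collection ran, False otherwise.
    """
    if torch is None or not torch.cuda.is_available():
        return False
    total = torch.cuda.get_device_properties(device).total_memory
    reserved = torch.cuda.memory_reserved(device)  # bytes held by the caching allocator
    if reserved / total < threshold:
        return False  # plenty of headroom; keep the allocator cache warm
    torch.cuda.empty_cache()
    torch.cuda.ipc_collect()
    return True
```

With something like this, the `torch_gc()` call at the end of `create_item` could be replaced by `maybe_torch_gc()`, so most requests skip the allocator flush entirely.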