houliangxue
Is F1 79.6 the highest score you've ever gotten on OntoNotes? Why did I get F1 79.95 without changing anything in your experiments.conf? Did you set a seed in your experiments?
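A gap like 79.6 vs 79.95 is within normal run-to-run variance when no seed is fixed. A minimal reproducibility sketch (the helper name is mine, not from the repo); the numpy/torch seeding is applied only if those libraries are installed:

```python
import os
import random


def set_seed(seed: int = 42) -> None:
    """Fix the common sources of randomness so repeated runs match."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    try:  # optional: seed numpy if available
        import numpy as np
        np.random.seed(seed)
    except ImportError:
        pass
    try:  # optional: seed torch (CPU and all GPUs) if available
        import torch
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
    except ImportError:
        pass


set_seed(42)
a = [random.random() for _ in range(3)]
set_seed(42)
b = [random.random() for _ in range(3)]
assert a == b  # re-seeding reproduces the same sequence
```

Even with seeds fixed, some CUDA kernels are nondeterministic, so small F1 differences across machines are still possible.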
May I know your training time on TPU, and how many TPUs did you use? Thanks.
Setup: RTX 4090, 24 GB VRAM. Loading code:

    self.model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype="auto", device_map="auto").eval()
    self.tokenizer = AutoTokenizer.from_pretrained(model_path)
    self.generation_Config = GenerationConfig.from_pretrained(model_path, trust_remote_code=True)

When the context reaches about 6k tokens, I get an out-of-memory error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.14 GiB (GPU 1; 23.65 GiB total capacity;...
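An OOM that appears only once the context grows is consistent with KV-cache growth: every processed token stores keys and values for every layer, on top of the model weights. A rough back-of-the-envelope sketch, assuming a hypothetical 7B-class config (32 layers, hidden size 4096, fp16; substitute the actual model's values, and note this ignores grouped-query-attention savings):

```python
def kv_cache_bytes(seq_len: int,
                   num_layers: int = 32,
                   hidden_size: int = 4096,
                   dtype_bytes: int = 2) -> int:
    """Rough KV-cache size: K and V each hold hidden_size values
    per layer per token (no multi-query/grouped-KV reduction)."""
    return 2 * num_layers * hidden_size * dtype_bytes * seq_len


# 6k tokens of cache alone is close to 3 GiB under these assumptions,
# which on a 24 GB card already holding fp16 weights can tip into OOM.
print(kv_cache_bytes(6000) / 1024**3)
```

The usual mitigations are loading in reduced precision or a quantized checkpoint, capping `max_new_tokens`, or serving with an engine that pages the KV cache.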
When I run:

    python main.py --task_name gap --do_eval --do_train --eval_batch_size 10 --data_dir "autodl-tmp/bert-commonsense-master/data/" --bert_model bert-large-uncased --max_seq_length 128 --train_batch_size 16 --alpha_param 20 --beta_param 0.2 --learning_rate 5.0e-6 --num_train_epochs 1.0 --output_dir autodl-tmp/bert-commonsense-master/model_output/

I...
Another question: why is your c2f+spanbert result (80.2) better than the one in Joshi's paper (79.6)? Is that your own implementation of c2f+spanbert?
### 🐛 Bug (Description)

1. Deployment: Docker, serving the model with

    paddlex genai_server --model_name PaddleOCR-VL-0.9B --backend vllm --port 8080

2. Inference method 1: requests through the OpenAI client

    self.client = OpenAI(api_key='xx', base_url="http://192.168.2.115:8080/v1",) messages = [{ "role": "user", "content": [ { "type": "text",...
The inference code is as follows:

    if request.stream:
        print("################# streaming generation #################")
        def generate():
            # call the model to generate audio (note: adjust for speech rate if the model supports it)
            for chunk in cosyvoice.inference_zero_shot(
                request.input,       # maps from OpenAI's "input" parameter
                prompt_text,         # prompt text (timbre can be varied per voice)
                prompt_speech_16k,
                stream=True,
            ):
                audio_tensor = chunk["tts_speech"]
                # convert according to response_format (mp3/wav supported)
                buffer = io.BytesIO()
                torchaudio.save(...
Running CosyVoice 3 on a 4090: RTF is around 0.1 non-streaming and around 0.2 streaming.