
After finishing training by following the instructions in the README, I ran accuracy evaluation with script.py and the precision is always 0: `Calculated!{"precision": 0.0, "recall": 0.0, "hmean": 0, "AP": 0}`. Inspecting the network output, I found that the generated res_img_xx.txt files contain only coordinates and no transcription, yet the accuracy computation requires the transcription, so precision stays at 0. How can I fix this?
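If the evaluation script fails only because the transcription field is missing, one workaround is to append a placeholder transcription to every detection line before running script.py. A minimal sketch, assuming the output files are comma-separated coordinate lists named res_img_*.txt (the helper name and the placeholder string are assumptions, not part of the repo):

```python
import glob
import os

def add_placeholder_transcription(result_dir: str, placeholder: str = "text") -> None:
    """Append a placeholder transcription field to each detection line.

    Assumes each line of res_img_xx.txt is a comma-separated coordinate
    list; the evaluation script expects a trailing transcription field.
    """
    for path in glob.glob(os.path.join(result_dir, "res_img_*.txt")):
        with open(path, encoding="utf-8") as f:
            lines = [ln.strip() for ln in f if ln.strip()]
        with open(path, "w", encoding="utf-8") as f:
            for ln in lines:
                f.write(f"{ln},{placeholder}\n")
```

Note that this only makes the lines parseable; if the metric itself compares transcriptions, a dummy string will not recover text-recognition accuracy.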

### Software Environment

```Markdown
paddle-bfloat 0.1.7
paddle2onnx 1.1.0
paddlefsl 1.1.0
paddlenlp 2.7.0.post0
paddlenlp-ops 0.0.0
paddlepaddle-gpu 2.6.0.post112
```

### Duplicate Issue

- [X] I have searched the existing issues

### Error Description

```Markdown
The following error is raised when running inference with the exported static-graph large model: ...
```

bug
stale

### Please state your question

When running the llama quantization example, I noticed the input model is specified as `"model_name_or_path": "./checkpoints/llama_sft_ckpts"`. Is this input simply the original llama model, i.e. meta-llama/Llama-2-7b-chat?

question

### Please state your question

After completing the tiny-llama convert step, I renamed tokenizer.model. The model now loads successfully, but inference does not produce correct results.

My environment:

```Markdown
paddle-bfloat 0.1.7
paddle2onnx 1.1.0
paddlefsl 1.1.0
paddlenlp 2.7.0.post0
paddlepaddle-gpu 2.6.0.post112
```

Conversion code:

```python
from paddlenlp.transformers import AutoModelForCausalLM

AutoModelForCausalLM.from_pretrained("./TinyLlama-1.1B-Chat-v1.0", convert_from_torch=True, dtype="float16")
```

Inference code:

```python
import paddle
from paddlenlp.transformers import AutoModelForCausalLM, AutoTokenizer

paddle.set_device("gpu:3")
print("paddle.device", paddle.get_device())
...
```

question
stale

I am using AIMET for quantization of Stable-diffusion v1-5 according to the guidance at https://docs.qualcomm.com/bundle/publicresource/topics/80-64748-1/model_updates.html. However, during PTQ quantization, I encountered the following...

Hello, I am doing QAT training, following this tutorial: https://quic.github.io/aimet-pages/releases/latest/api_docs/torch_quantsim.html#code-example-quantization-aware-training-qat When I reached the last step, exporting the model, I encountered the following error: ValueError: Only ACTIVE QcQuantizeOpMode...

"In the latest version of the code, you changed the type of the parameter 'args.tasks' to a string. There is an issue with the validation and handling of this parameter...

**Describe the bug** When using onnxsim to optimize the ONNX model of qwen1.5-1.8B, we hit a problem: the three q, k, v MatMul operators in the decoder block are concatenated into a single operator, which is then split back into three with a Split op. The newly fused MatMul and Split operators have no names. **Model** Dependency versions: onnxsim 0.4.36, torch 2.3.1. Stable reproduction code: ``` import torch import torch.nn as nn class MultiMatMulAddModel(nn.Module): def __init__(self): super(MultiMatMulAddModel, self).__init__() # Initialize three 2048x2048 weight matrices and three 2048-dim bias vectors self.weights = nn.ParameterList([nn.Parameter(torch.randn(2048, 2048)) for...
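Until onnxsim names the fused operators itself, a post-processing pass can assign deterministic names to every unnamed node. A minimal sketch of the idea (the function and naming scheme are assumptions): it works on any objects exposing `.op_type` and `.name`, which matches the NodeProto entries you would get by loading the simplified model with onnx and passing `model.graph.node`:

```python
def name_anonymous_nodes(nodes, prefix="fused"):
    """Give every node with an empty name a deterministic one.

    Names are generated per op type, e.g. fused_MatMul_0, fused_Split_0,
    so repeated runs on the same graph produce the same names.
    """
    counters = {}
    for node in nodes:
        if not node.name:
            i = counters.get(node.op_type, 0)
            counters[node.op_type] = i + 1
            node.name = f"{prefix}_{node.op_type}_{i}"
    return nodes
```

Existing names are left untouched, so only the anonymous fused MatMul/Split nodes introduced by the simplification pass are affected.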

Hello, may I ask why the latest version of AIMET common has not been released together with the other wheel packages? The most recent available AIMET common version is 1.30.0, which...