model.predict internal data-processing flow
```python
from paddlex import create_model


def demo2():
    model_name = "PP-ShiTuV2_rec"
    # model_name = "PP-ShiTuV2_rec_CLIP_vit_base"
    # model_name = "PP-ShiTuV2_rec_CLIP_vit_large"
    model_path = ""
    model = create_model(
        model_name=model_name,
        model_dir="../inference/latest_IR",  # replace with your model path
        # If device is not specified, it seems to default to GPU (occupying GPU memory);
        # the result then matches device="gpu:0" but differs from device="cpu".
        # With "cpu" the result matches Paddle Inference on CPU, but with "gpu" it does
        # NOT match Paddle Inference on GPU; reason unknown.
        # device="gpu:0",
        device="cpu",
    )
    import cv2
    img_file = "./images/wangzai.jpg"
    img = cv2.imread(img_file)[:, :, ::-1]  # BGR -> RGB
    # while 1:
    output = model.predict([img_file], batch_size=1)
    print(1)
    for res in output:
        res.print()
        # res.save_to_json("./fea_ext/output/res.json")
        feature1 = res["feature"]
    output = model.predict(img)
    for res in output:
        res.print()
        # res.save_to_json("./fea_ext/output/res.json")
        feature2 = res["feature"]
    import numpy as np
    xxx = np.sum(feature1 - feature2)
    print(xxx)  # 0.959705
```
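For context on the channel question below: `cv2.imread` returns an HWC `uint8` array with channels in BGR order, and the slice `[:, :, ::-1]` reverses the last axis to produce RGB. A minimal, self-contained numpy sketch of that reversal (no OpenCV needed):

```python
import numpy as np

# Stand-in for a cv2.imread result: HWC uint8, channels in B, G, R order.
bgr = np.arange(2 * 2 * 3, dtype=np.uint8).reshape(2, 2, 3)

# [:, :, ::-1] reverses the channel axis, turning BGR into RGB.
rgb = bgr[:, :, ::-1]

print(np.array_equal(rgb[..., 0], bgr[..., 2]))  # True: the R plane was the B plane
print(np.array_equal(rgb[:, :, ::-1], bgr))      # True: reversing twice is the identity
```

So `feature1` (computed from the file path) and `feature2` (computed from the reversed array) are only comparable if `predict` reads files and arrays with the same channel convention.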
`xxx` is not 0. If I change `img = cv2.imread(img_file)[:, :, ::-1]` to `img = cv2.imread(img_file)` (i.e. drop the BGR-to-RGB reversal), then `xxx` becomes 0. What is the logic inside `predict`? When it reads an image and hands it to the model, what channel order is used, and what operations happen internally? From what I can see, inference.yml contains no BGR-to-RGB conversion step. Here is the content of inference.yml:
```yaml
Global:
  model_name: PP-ShiTuV2_rec
Hpi:
  backend_configs:
    paddle_infer:
      trt_dynamic_shapes: &id001
        x:
        - - 1
          - 3
          - 224
          - 224
        - - 1
          - 3
          - 224
          - 224
        - - 8
          - 3
          - 224
          - 224
    tensorrt:
      dynamic_shapes: *id001
PreProcess:
  transform_ops:
  - ResizeImage:
      backend: cv2
      interpolation: bilinear
      return_numpy: false
      size:
      - 224
      - 224
  - NormalizeImage:
      channel_num: 3
      mean:
      - 0.485
      - 0.456
      - 0.406
      order: hwc
      scale: 0.00392156862745098
      std:
      - 0.229
      - 0.224
      - 0.225
  - ToCHWImage: null
PostProcess:
  NormalizeFeatures: null
```
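To make the config concrete, here is a minimal numpy sketch of what the `transform_ops` above describe (the resize step is assumed already done, and the function name `preprocess` is hypothetical, not a PaddleX API). Note that none of these ops reorders channels, so whatever channel order the pipeline feeds in passes through to the model unchanged:

```python
import numpy as np

def preprocess(img_hwc):
    """Hypothetical sketch of NormalizeImage + ToCHWImage from inference.yml.
    Assumes img_hwc is already resized to 224x224 (HWC, uint8)."""
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # applied in HWC order
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    x = img_hwc.astype(np.float32) * 0.00392156862745098  # scale = 1/255
    x = (x - mean) / std           # broadcasts over the channel axis; no channel reorder
    return x.transpose(2, 0, 1)    # ToCHWImage: HWC -> CHW

img = np.full((224, 224, 3), 128, dtype=np.uint8)
print(preprocess(img).shape)  # (3, 224, 224)
```

Since the mean/std values are the standard ImageNet RGB statistics, the pipeline implicitly expects RGB data by the time normalization runs; the BGR-to-RGB step, if any, has to happen in whatever reads the image before these ops.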
By the time the data is fed to the model it is in RGB format. This is consistent with the experiment above: when a numpy array is passed to `predict`, it is treated the same way as a freshly read image, i.e. as BGR (what `cv2.imread` returns), so passing the raw BGR array yields the same feature as passing the file path, while passing an already-reversed RGB array does not.
This issue has had no response for a long time and will be closed. You can reopen it or open a new issue if you are still confused.
From Bot