
Segmentation fault when calling the model

Open · phoenares opened this issue on Jan 04 '21 · 4 comments

Hi, the model converts successfully on my side with no errors, but at call time it behaves differently depending on the API: initializing with createSession in the Python API V2 gives a segmentation fault, while calling it through MNN.expr with the Python API V3 produces correct results. What could be causing this? The model goes from PyTorch to ONNX to MNN and contains self-attention. Thanks, I can provide the model and the calling code.
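For context, the V3 / MNN.expr path that runs correctly follows roughly the flow of the official pymnn expr demo. The sketch below uses my model's tensor names ('image', 'mask', 'aux_outputs') and dummy input data, so treat the names, shapes and file name as placeholders rather than a general recipe:

import numpy as np
import MNN.expr as F

# load every named tensor of the graph as an expr variable
vars = F.load_as_dict("lstr.mnn")
image_var = vars["image"]
mask_var = vars["mask"]

# feed NCHW float data (dummy values here, real preprocessing omitted)
image_var.write(np.zeros((1, 3, 360, 640), dtype=np.float32).tolist())
mask_var.write(np.ones((1, 1, 360, 640), dtype=np.float32).tolist())

# reading an output variable triggers the computation
out_var = vars["aux_outputs"]
if out_var.data_format == F.NC4HW4:
    out_var = F.convert(out_var, F.NCHW)
print(np.array(out_var.read()).shape)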

phoenares · Jan 04 '21

What is the exact calling code?

jxt1234 · Jan 06 '21

We have now pinpointed the problem to torch.matmul: after adding this op, a model that previously ran fine also reports a segmentation fault. The calling code is below (with a minimal export sketch after it):

import cv2
import numpy as np
import MNN

def inference_v2():
    """ inference mobilenet_v1 using a specific picture """
    interpreter = MNN.Interpreter("lstr.mnn")
    interpreter.setCacheFile('.tempcache')
    config = {}
    config['precision'] = 'low'
    print('create sess')
    session = interpreter.createSession()
    print('create end')
    input_tensor = interpreter.getSessionInput(session, 'image')
    mask_tensor = interpreter.getSessionInput(session, 'mask')
    print('create input')
    image = cv2.imread('cuts_1205_000273.jpg')
    input_size = (360, 640)
    height, width = image.shape[0:2]

    images = np.zeros((1, 3, input_size[0], input_size[1]), dtype=np.float32)
    masks = np.ones((1, 1, input_size[0], input_size[1]), dtype=np.float32)

    pad_image     = image.copy()
    pad_mask      = np.zeros((height, width, 1), dtype=np.float32)
    resized_image = cv2.resize(pad_image, (input_size[1], input_size[0]))
    resized_mask  = cv2.resize(pad_mask, (input_size[1], input_size[0]))
    masks[0][0]   = resized_mask.squeeze()
    resized_image = resized_image / 255.
    resized_image -= np.array([0.40789654, 0.44719302, 0.47026115], dtype=np.float32)
    resized_image /= np.array([0.28863828, 0.27408164, 0.27809835], dtype=np.float32)
    resized_image = resized_image.transpose(2, 0, 1)
    images[0]     = resized_image

    tmp_input = MNN.Tensor((1, 3, input_size[0], input_size[1]), MNN.Halide_Type_Float,
                           images, MNN.Tensor_DimensionType_Caffe)
    tmp_mask = MNN.Tensor((1, 1, input_size[0], input_size[1]), MNN.Halide_Type_Float,
                          masks, MNN.Tensor_DimensionType_Caffe)
    input_tensor.copyFrom(tmp_input)
    mask_tensor.copyFrom(tmp_mask)
    print('start infer')
    interpreter.runSession(session)
    output_tensor = interpreter.getSessionOutput(session, 'aux_outputs')
    curves_tensor = interpreter.getSessionOutput(session, 'aux_curves')
    outputs = np.array(output_tensor.getData())
    curves = np.array(curves_tensor.getData())
    outputs = outputs.reshape(-1, 3)
    curves = curves.reshape(-1, 8)
    result = post_process(outputs, curves)

    pred = result
    img  = pad_image
    img_h, img_w, _ = img.shape
    pred = pred[pred[:, 0].astype(int) == 1]
    overlay = img.copy()
    color = (0, 255, 0)

    for i, lane in enumerate(pred):
        lane = lane[1:]  # remove conf
        lower, upper = lane[0], lane[1]
        lane = lane[2:]  # remove upper, lower positions

        # generate points from the polynomial
        ys = np.linspace(lower, upper, num=100)
        points = np.zeros((len(ys), 2), dtype=np.int32)
        points[:, 1] = (ys * img_h).astype(int)
        points[:, 0] = ((lane[0] / (ys - lane[1]) ** 2 + lane[2] / (ys - lane[1]) + lane[3] + lane[4] * ys -
                         lane[5]) * img_w).astype(int)
        points = points[(points[:, 0] > 0) & (points[:, 0] < img_w)]

        # draw lane with a polyline on the overlay
        for current_point, next_point in zip(points[:-1], points[1:]):
            overlay = cv2.line(overlay, tuple(current_point), tuple(next_point), color=color, thickness=5)

        # draw lane ID
        if len(points) > 0:
            cv2.putText(img, str(i) + '_%.2f' % 1.0, tuple(points[-1]), fontFace=cv2.FONT_HERSHEY_SIMPLEX, fontScale=1,
                        color=color, thickness=3)

    # Add lanes overlay
    w = 0.6
    img = ((1. - w) * img + w * overlay).astype(np.uint8)

    cv2.imshow('test', img)
    cv2.waitKey(0)
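For a self-contained check, this is roughly how the op could be isolated: export a tiny module that only does a torch.matmul to ONNX, convert it, and load it through the same V2 createSession path. The module, shapes and file names below are made up for illustration, not taken from the real LSTR model:

import torch
import MNN

class MatMulBlock(torch.nn.Module):
    """Tiny stand-in for the attention score computation (q @ k^T)."""
    def forward(self, q, k):
        return torch.matmul(q, k.transpose(-1, -2))

# hypothetical (batch, heads, tokens, dim) shapes, not the real model's
q = torch.randn(1, 8, 100, 32)
k = torch.randn(1, 8, 100, 32)
torch.onnx.export(MatMulBlock(), (q, k), "matmul_block.onnx",
                  input_names=["q", "k"], output_names=["scores"],
                  opset_version=11)

# convert with the standard converter, e.g.:
#   MNNConvert -f ONNX --modelFile matmul_block.onnx --MNNModel matmul_block.mnn --bizCode test
# then try the same V2 createSession path on the tiny model:
interpreter = MNN.Interpreter("matmul_block.mnn")
session = interpreter.createSession()
print('tiny matmul model session created')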

phoenares · Jan 07 '21

Could you send the model in question?

jxt1234 · Apr 09 '21

@phoenares @jxt1234 Did you ever manage to solve this? I am also getting a segmentation fault when creating the session.

Tzenthin · Jul 21 '22

No reply for a long time, closing the issue.

wangzhaode · Feb 15 '23