
how to export onnx?

garspace opened this issue 1 year ago · 20 comments

Thanks for your work!

garspace avatar Aug 08 '23 09:08 garspace

Have you solved this problem? I met some problems when I tried to export ONNX.

Zalways avatar Aug 21 '23 00:08 Zalways

I had to take the axe to the code to get ONNX export to work. You can use the results here: https://github.com/agoryuno/deepsolo-onnx

agoryuno avatar Sep 21 '23 08:09 agoryuno

@agoryuno Thanks for providing the ONNX export notebook. During ONNX inference, I got these output nodes with shapes. Can you guide me as to which output corresponds to what? I am interested in obtaining the bounding boxes for the text detected in the image.

[screenshot: ONNX output node names and shapes]
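For anyone else trying to map these: the output node names and shapes can be listed with onnxruntime. A minimal sketch, where "deepsolo.onnx" stands in for whatever path you exported to:

import onnxruntime as ort

# Print every output node's name and (possibly symbolic) shape
sess = ort.InferenceSession("deepsolo.onnx")
for out in sess.get_outputs():
    print(out.name, out.shape)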

gigasurgeon avatar Oct 16 '23 00:10 gigasurgeon

> Thanks for your work!

Could you provide the versions of your torch and related packages?

YuMJie avatar Oct 31 '23 03:10 YuMJie

> Have you solved this problem? I met some problems when I tried to export ONNX.

I also tried using the notebook to convert to an ONNX model, but I got "unsupported value type 'Instance'". Is there any suggestion? Thank you.

Gavinic avatar Nov 01 '23 12:11 Gavinic

> I also tried using the notebook to convert to an ONNX model, but I got "unsupported value type 'Instance'". Is there any suggestion?

import numpy as np
import torch
import torch.onnx
from DeepSolo.onnx_model import SimpleONNXReadyModel

CHECKPOINT = "vitaev2-s_pretrain_synth-tt-mlt-13-15-textocr.pth"  # If you use another pth, change CONFIG to match
OUTPATH = "deepsolo.onnx"

DIMS = (960, 960)
CONFIG = "configs/Base_det_export.yaml"  # must correspond to the checkpoint above
CHANNELS = 3

model = SimpleONNXReadyModel(CONFIG, CHECKPOINT)

# Dummy input for tracing; values in 0-255 need uint8 (int8 would overflow)
img = np.random.randint(0, 255, (CHANNELS, *DIMS)).astype(np.uint8)
img_t = torch.from_numpy(img)

torch.onnx.export(model.model,
                  [img_t],
                  OUTPATH,
                  export_params=True)

My code for exporting the ONNX model is above. You can try it; pay attention to your torchvision version, and use Python >= 3.9.

torch                    2.0.0+cu118
torchaudio               2.0.1+cu118
torchvision              0.15.1+cu118
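If it helps, after exporting I sanity-check the graph and compare the ONNX output against the PyTorch output on the same dummy input. A minimal sketch, assuming model.model accepts the same [img_t] argument as in the export call and returns plain tensors in the same order as the graph outputs:

import onnx
import onnxruntime as ort

# Structural check of the exported graph
onnx.checker.check_model(onnx.load(OUTPATH))

# Run the identical dummy input through onnxruntime
sess = ort.InferenceSession(OUTPATH)
input_name = sess.get_inputs()[0].name
onnx_outs = sess.run(None, {input_name: img})

# PyTorch reference on the same tensor
with torch.no_grad():
    torch_outs = model.model([img_t])

# Loose tolerances: tracing can reorder floating-point ops
for o_ort, o_pt in zip(onnx_outs, torch_outs):
    np.testing.assert_allclose(o_ort, o_pt.numpy(), rtol=1e-3, atol=1e-4)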

YuMJie avatar Nov 02 '23 00:11 YuMJie

> (quoting YuMJie's export script and package versions from the previous comment)

Thank you! I exported the ONNX model successfully with your guidance. But when I check the output, the ONNX output differs from the .pth model's, with the same input image. For example, the .pth model's 'ctrl_point_cls' output:

[screenshot: .pth 'ctrl_point_cls' output]

The corresponding ONNX output:

[screenshot: ONNX 'ctrl_point_cls' output]

Thanks!
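In case it helps to quantify the gap, I compare the two arrays directly. pth_cls and onnx_cls are hypothetical names for the 'ctrl_point_cls' outputs from the .pth run and the ONNX run on the same image:

import numpy as np

# pth_cls / onnx_cls: hypothetical names for the two 'ctrl_point_cls' arrays
diff = np.abs(pth_cls - onnx_cls)
print("max abs diff:", diff.max())
print("mean abs diff:", diff.mean())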

Gavinic avatar Nov 03 '23 02:11 Gavinic

> (quoting Gavinic's report above that the ONNX output differs from the .pth output on the same image)

Hah, I ran into this issue too. It may be related to the input image size. I'm debugging it now and will reply as soon as I have it fixed.

YuMJie avatar Nov 03 '23 02:11 YuMJie

> (quoting YuMJie's reply above: same issue, possibly related to the input image size)

Thank you very much! 👍🏻

Gavinic avatar Nov 03 '23 03:11 Gavinic

@Gavinic Sorry, I cannot find why the results are different. I tried the same input, and even the outputs of the backbone differ, which is strange. Have you found the bug?
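Two things worth ruling out first, just my debugging checklist rather than a confirmed fix: the wrapped model not being in eval mode during tracing, and the reference run not being wrapped in no_grad:

import torch

# Dropout/BatchNorm behave differently in train mode; trace and compare in eval mode
model.model.eval()

# Deterministic reference run before comparing against ONNX
with torch.no_grad():
    ref = model.model([img_t])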

YuMJie avatar Nov 23 '23 06:11 YuMJie

@Gavinic @agoryuno @YuMJie Can this ONNX-exported model support multi-scale image input, or only fixed-size images?

Zalways avatar Dec 01 '23 06:12 Zalways

> Can this ONNX-exported model support multi-scale image input, or only fixed-size images?

I exported the model by tracing, but it only supports fixed-size images.

Zalways avatar Dec 01 '23 06:12 Zalways

> I exported the model by tracing, but it only supports fixed-size images.

Have you used dynamic_axes in the ONNX export?
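For reference, this is what I mean. A sketch on top of the export script above; the input/output names are my choice here, and whether the traced graph actually tolerates other sizes at runtime is a separate question:

# Mark height (dim 1) and width (dim 2) of the image input as dynamic
torch.onnx.export(model.model,
                  [img_t],
                  OUTPATH,
                  input_names=["image"],
                  output_names=["ctrl_point_cls", "ctrl_point_coord",
                                "ctrl_point_text", "bd_points"],
                  dynamic_axes={"image": {1: "height", 2: "width"}},
                  export_params=True)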

YuMJie avatar Dec 01 '23 06:12 YuMJie

> (quoting YuMJie's export script and package versions from above)

When I export the ONNX model, I hit some errors. Can I get your contact information to ask about the details?

jasper-cell avatar Dec 26 '23 01:12 jasper-cell

> Have you used dynamic_axes in the ONNX export?

import numpy as np
import torch
import torch.onnx
from DeepSolo.onnx_model import SimpleONNXReadyModel

CHECKPOINT = "rects_res50_finetune.pth"  # If you use another pth, change the CONFIG
OUTPATH = "deepsolo2.onnx"

DIMS = (480, 480)
CONFIG = "configs/Base_Rects_export.yaml"
CHANNELS = 3

model = SimpleONNXReadyModel(CONFIG, CHECKPOINT)
img = np.random.randint(0, 255, (CHANNELS, *DIMS)).astype(np.uint8)
img_t = torch.from_numpy(img)

input_names = ["image"]
output_names = ['ctrl_point_cls', 'ctrl_point_coord', 'ctrl_point_text', 'bd_points']
torch.onnx.export(model.model,
                  [img_t],
                  OUTPATH,
                  input_names=input_names,
                  output_names=output_names,
                  dynamic_axes={'image': [1, 2]},  # mark H and W as dynamic
                  export_params=True)

I have exported the ONNX model, but when I run inference, it fails:

[screenshot: inference error]
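For the inference side, this is how I would feed the dynamic-axes model through onnxruntime. A sketch only; whether a size other than the traced 480x480 actually runs depends on what got baked into the graph during tracing:

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("deepsolo2.onnx")

# (C, H, W) uint8 input, matching the export-time dummy tensor
test = np.random.randint(0, 255, (3, 640, 640)).astype(np.uint8)
outs = sess.run(None, {"image": test})

# Outputs come back in the order of output_names given at export time
for name, arr in zip(["ctrl_point_cls", "ctrl_point_coord",
                      "ctrl_point_text", "bd_points"], outs):
    print(name, arr.shape)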

Zalways avatar Dec 27 '23 03:12 Zalways

> @Gavinic Sorry, I cannot find why the results are different. I tried the same input, and even the outputs of the backbone differ ...

@agoryuno @Gavinic The results from the exported ONNX model seem different from the .pth file, and I can't use the exported ONNX model to get the final recognized text. Have you solved this problem? How are your results? Does inference actually work?
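In case it unblocks anyone, my rough understanding of the decode, as an assumption from reading the repo's inference code rather than a confirmed recipe: ctrl_point_cls holds per-point text/no-text logits and ctrl_point_text per-point character logits, so a naive decode would look like this (shapes and threshold are my assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical shapes: ctrl_point_cls (queries, points, 1),
# ctrl_point_text (queries, points, voc_size + 1)
scores = sigmoid(ctrl_point_cls).mean(axis=1).squeeze(-1)  # one score per query
keep = scores > 0.4                                        # threshold is my choice
char_ids = ctrl_point_text[keep].argmax(axis=-1)           # per-point character indices
# Mapping char_ids to actual characters needs the repo's vocabulary; omitted here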

Zalways avatar Dec 28 '23 07:12 Zalways

Can you tell me your environment for the ONNX export? I tried to export my pth file but failed; I want to check the relevant export environment.


jasper-cell avatar Dec 28 '23 08:12 jasper-cell

> (quoting Zalways above: the ONNX results differ and the final recognized text can't be recovered)

Do you know how to use the ONNX model's inference results? I also found that the results are different.

jasper-cell avatar Jan 19 '24 01:01 jasper-cell

> Can you tell me your environment for the ONNX export?

Can you tell me your environment for the ONNX export?

shining-love avatar Feb 19 '24 06:02 shining-love

> (quoting the full exchange above: YuMJie's export script and Gavinic's report that the ONNX 'ctrl_point_cls' output differs from the .pth output)

@Gavinic Hey, I'm experiencing the same issue. Do you have any insights on this? I followed your instructions exactly.

stevenLuzhengti avatar May 02 '24 11:05 stevenLuzhengti