
How to use the HFOnnx tools

kendiyang opened this issue 9 months ago · 0 comments

```python
from txtai.pipeline import HFOnnx

# Model path
path = "openai/whisper-small"

# Export the model to ONNX
onnx = HFOnnx()
model = onnx(path, "default", "whisper-small.onnx", True)
```

```
Traceback (most recent call last):
  File "/root/test.py", line 8, in <module>
    model = onnx(path, "default", "whisper-small.onnx", True)
  File "/usr/local/lib/python3.10/dist-packages/txtai/pipeline/train/hfonnx.py", line 63, in __call__
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/__init__.py", line 375, in export
    export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 502, in export
    _export(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1564, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 997, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py", line 904, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py", line 1500, in _get_trace_graph
    outs = ONNXTracedModule(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py", line 139, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/usr/local/lib/python3.10/dist-packages/torch/jit/_trace.py", line 130, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/whisper/modeling_whisper.py", line 1617, in forward
    encoder_outputs = self.encoder(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1747, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1726, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/whisper/modeling_whisper.py", line 1016, in forward
    if input_features.shape[-1] != expected_seq_length:
AttributeError: 'NoneType' object has no attribute 'shape'
```
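The error comes from the export step rather than from the Whisper checkpoint itself: Whisper's `forward` expects `input_features` (a log-mel spectrogram), but HFOnnx appears to trace the model with tokenizer-style dummy text inputs (`input_ids` / `attention_mask`), so `input_features` arrives as `None` and the encoder's shape check fails. HFOnnx is aimed at text models; a typical invocation, using the same `(path, task, output, quantize)` arguments as above, looks like the sketch below (the model name and output file are only examples, and the final `True` requests quantization of the exported model):

```python
from txtai.pipeline import HFOnnx

# HFOnnx export of a text model; arguments are (model path, task, output file, quantize).
# Model name and output path below are illustrative.
onnx = HFOnnx()
model = onnx("distilbert-base-uncased-finetuned-sst-2-english",
             "text-classification", "text-classify.onnx", True)
```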

kendiyang · Jan 12 '25
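For a speech seq2seq model such as Whisper, one alternative worth noting (not part of txtai; a minimal sketch using Hugging Face Optimum, assuming `optimum[onnxruntime]` is installed) is to let Optimum handle the encoder/decoder ONNX export:

```python
# Sketch: export Whisper to ONNX via Hugging Face Optimum instead of HFOnnx.
from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
from transformers import AutoProcessor

# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSpeechSeq2Seq.from_pretrained("openai/whisper-small", export=True)
processor = AutoProcessor.from_pretrained("openai/whisper-small")

# Write the ONNX files and processor config to a directory (name is arbitrary).
model.save_pretrained("whisper-small-onnx")
processor.save_pretrained("whisper-small-onnx")
```

The command-line route should be equivalent: `optimum-cli export onnx --model openai/whisper-small whisper-small-onnx/`.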