ValueError: Torch var training.2 not found in context | LSTM model
Describing the bug
I have a relatively simple LSTM model that I need to convert with torch.jit.script() because I will soon add an if conditional on the input; right now I don't have any data-dependent conditionals, so tracing works for now, but I need scripting to work. Scripting itself succeeds, but the conversion does not. My model has 2 inputs and 3 outputs (but I don't think that's the problem).
I do NOT have any training variables in my project.
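For context, the `training` variable in the error usually does not come from user code: scripting an nn.Module preserves attributes such as self.training that modules like nn.LSTM and nn.Dropout branch on, and the converter can fail to resolve the resulting `training.N` graph inputs. One commonly suggested mitigation (an assumption worth verifying against this model, not a confirmed fix) is to freeze the scripted module in eval mode so those attributes are folded into constants. The model below is a hypothetical stand-in, not the reporter's actual model:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the reported model: a small LSTM wrapper.
class TinyLSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 4)

    def forward(self, x):
        out, _ = self.lstm(x)            # out: (batch, seq, hidden)
        return self.head(out[:, -1, :])  # predict from the last time step

model = TinyLSTM().eval()       # torch.jit.freeze requires eval mode
scripted = torch.jit.script(model)
# freeze() inlines submodules and folds module attributes
# (including `training`) into the graph as constants.
frozen = torch.jit.freeze(scripted)
y = frozen(torch.rand(2, 5, 8))
```

Passing the frozen module to ct.convert instead of the raw scripted one may avoid the unresolved `training.N` variable; whether it does for this particular model is untested here.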
Stack Trace
Pytorch scripting...
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Tuple detected at graph output. This will be flattened in the converted model.
Converting PyTorch Frontend ==> MIL Ops: 10%|███████ | 30/315 [00:00<00:00, 1876.65 ops/s]
Traceback (most recent call last):
File "convert_to_coreml.py", line 43, in <module>
debug=True
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/_converters_entry.py", line 456, in convert
specification_version=specification_version,
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/converter.py", line 190, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/converter.py", line 222, in _mil_convert
**kwargs
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/converter.py", line 282, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/converter.py", line 112, in __call__
return load(*args, **kwargs)
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 57, in load
return _perform_torch_convert(converter, debug)
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/load.py", line 96, in _perform_torch_convert
prog = converter.convert()
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 270, in convert
convert_nodes(self.context, self.graph)
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 103, in convert_nodes
add_op(context, node)
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 4136, in noop
inputs = _get_inputs(context, node)
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 200, in _get_inputs
inputs = [context[name] for name in node.inputs]
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/ops.py", line 200, in <listcomp>
inputs = [context[name] for name in node.inputs]
File "/opt/anaconda3/envs/PIP/lib/python3.7/site-packages/coremltools/converters/mil/frontend/torch/converter.py", line 79, in __getitem__
"Torch var {} not found in context {}".format(torch_name, self.name)
ValueError: Torch var training.2 not found in context
To Reproduce
torch_model.eval()
example_input = (torch.rand(10, 72), torch.rand(15), torch.rand(72))
scripted_model = torch.jit.script(torch_model, example_inputs=[example_input])
model = ct.convert(
    scripted_model,
    source='pytorch',
    inputs=[
        ct.TensorType(shape=example_input[0].shape),
        ct.TensorType(shape=example_input[1].shape),
        ct.TensorType(shape=example_input[2].shape),
    ],
    debug=True,
)
System environment (please complete the following information):
- coremltools version: 6.1
- OS (e.g. MacOS version or Linux type): macOS 10.15.7 (Catalina)
- Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 1.12.1
Additional context
- I can share a reproducible example upon request; I just need to refactor it first.
@anuar12 - I am not able to reproduce this problem. I need to know how torch_model is created.
Hello @anuar12, @TobyRoseman. I am encountering the same issue while converting TorchScript to Core ML. The model I am converting is fairly complex: its forward method calls multiple sub-models and also contains loops and conditionals, so I need to convert the PyTorch model to TorchScript. While converting from TorchScript to Core ML, I encountered the error below:
Stack Trace
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Converting PyTorch Frontend ==> MIL Ops: 8%|███████ | 31/384 [00:00<00:00, 10339.01 ops/s]
Converting PyTorch Frontend ==> MIL Ops: 78%|████████████████████████████████████████████████████████████████ | 32/41 [00:00<00:00, 4002.32 ops/s]
Traceback (most recent call last):
File "C:\Users\ADMIN\Desktop\yash\SadTalker\audio2coeff_conversion.py", line 38, in <module>
mlmodel = ct.converters.convert(
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\_converters_entry.py", line 574, in convert
mlmodel = mil_convert(
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\converter.py", line 188, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\converter.py", line 212, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\converter.py", line 286, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\converter.py", line 108, in __call__
return load(*args, **kwargs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 80, in load
return _perform_torch_convert(converter, debug)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\load.py", line 99, in _perform_torch_convert
prog = converter.convert()
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\converter.py", line 519, in convert
convert_nodes(self.context, self.graph)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 88, in convert_nodes
add_op(context, node)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 3272, in loop
loop = mb.while_loop(
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\mil\ops\registry.py", line 182, in add_op
return cls._add_op(op_cls_to_add, **kwargs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\mil\builder.py", line 183, in _add_op
new_op.build_nested_blocks()
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\mil\ops\defs\iOS15\control_flow.py", line 441, in build_nested_blocks
cond_block, body_block, exit_vars = self._build_block(block_inputs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\mil\ops\defs\iOS15\control_flow.py", line 375, in _build_block
exit_vars = body_func(*body_block.inputs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 3242, in _loop_body
res = convert_block(context, block, inputs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 117, in convert_block
convert_nodes(context, block)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 88, in convert_nodes
add_op(context, node)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 3314, in _if
inputs = _get_inputs(context, node, expected=1)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 223, in _get_inputs
inputs = get_bindings(node.inputs)
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\ops.py", line 209, in get_bindings
results.append(context[i])
File "C:\Users\ADMIN\Desktop\yash\SadTalker\sadtalker-env\lib\site-packages\coremltools\converters\mil\frontend\torch\converter.py", line 251, in __getitem__
raise ValueError(f"Torch var {torch_name} not found in context {self.name}")
ValueError: Torch var training.51 not found in context
We believe a collaborative effort with your expertise would greatly expedite the resolution. Could we arrange a call where we can share our repository and screen to demonstrate how the issue manifests? Thank you for your continued support; we look forward to hearing from you soon to arrange a suitable time.
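As a general note for models like the one described above, the coremltools documentation recommends mixing tracing and scripting: trace the sub-models that have no data-dependent control flow, and script only the outer wrapper that contains the loops and conditionals. Below is a minimal sketch of that pattern; the module names are hypothetical, not taken from the reported repository:

```python
import torch
import torch.nn as nn

class Inner(nn.Module):
    """Control-flow-free sub-model: safe to trace."""
    def forward(self, x):
        return torch.relu(x) + 1.0

class Outer(nn.Module):
    def __init__(self):
        super().__init__()
        # Trace the sub-model once with a representative input...
        self.inner = torch.jit.trace(Inner(), torch.rand(3))

    def forward(self, x, n: int):
        # ...and keep only the data-dependent loop in scripted code.
        for _ in range(n):
            x = self.inner(x)
        return x

scripted = torch.jit.script(Outer())
out = scripted(torch.zeros(3), 2)
```

Keeping the scripted surface small reduces the number of TorchScript constructs (and training-dependent branches) the converter has to resolve.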
We have only "experimental" support for untraced PyTorch models.
Can you give us a minimal example to reproduce this issue?