
Python 3.10 Support

Open · zsong opened this issue on May 05 '22

Environment:
Keras 2.8.0
TensorFlow 2.8.0
coremltools 5.2.0
Python 3.10.4
Ubuntu 22.04
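
The conversion call that produces the traceback below looks roughly like this (the Keras model shown is only a hypothetical stand-in; the real model definition isn't included here):

```python
import tensorflow as tf
import coremltools as ct

# Hypothetical stand-in model; the original model definition was not shared.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# This is the call that raises the exception shown in the traceback below.
coreml_model = ct.converters.convert(model)
```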

Errors

2022-05-05 15:09:04.213669: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.213796: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.213850: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.214045: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214141: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214227: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214363: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214445: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.214500: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.215812: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

2022-05-05 15:09:04.253705: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.253780: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.253824: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.253982: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254070: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254149: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254262: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254344: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.254399: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.261510: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  constant_folding: Graph size after: 42 nodes (-14), 55 edges (-14), time = 2.991ms.
  dependency_optimizer: Graph size after: 41 nodes (-1), 40 edges (-15), time = 0.306ms.
  debug_stripper: debug_stripper did nothing. time = 0.005ms.
  constant_folding: Graph size after: 41 nodes (0), 40 edges (0), time = 1.042ms.
  dependency_optimizer: Graph size after: 41 nodes (0), 40 edges (0), time = 0.229ms.
  debug_stripper: debug_stripper did nothing. time = 0.004ms.

2022-05-05 15:09:04.283289: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283357: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.283393: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.283593: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283681: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283759: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283864: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283944: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.283998: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.285078: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  function_optimizer: function_optimizer did nothing. time = 0.003ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

2022-05-05 15:09:04.318159: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318237: I tensorflow/core/grappler/devices.cc:66] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 1
2022-05-05 15:09:04.318283: I tensorflow/core/grappler/clusters/single_machine.cc:358] Starting new session
2022-05-05 15:09:04.318470: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318559: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318637: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318745: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318827: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.318882: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
2022-05-05 15:09:04.328088: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1164] Optimization results for grappler item: graph_to_optimize
  constant_folding: Graph size after: 42 nodes (-14), 55 edges (-14), time = 4.112ms.
  dependency_optimizer: Graph size after: 41 nodes (-1), 40 edges (-15), time = 0.282ms.
  debug_stripper: debug_stripper did nothing. time = 0.004ms.
  constant_folding: Graph size after: 41 nodes (0), 40 edges (0), time = 1.075ms.
  dependency_optimizer: Graph size after: 41 nodes (0), 40 edges (0), time = 0.227ms.
  debug_stripper: debug_stripper did nothing. time = 0.003ms.

Running TensorFlow Graph Passes:   0%|                                                 | 0/6 [00:00<?, ? passes/s]2022-05-05 15:09:04.587585: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.587766: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.587860: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.588042: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.588165: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-05-05 15:09:04.588249: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 7269 MB memory:  -> device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:01:00.0, compute capability: 8.6
Running TensorFlow Graph Passes: 100%|█████████████████████████████████████████| 6/6 [00:00<00:00, 51.88 passes/s]
Converting Frontend ==> MIL Ops: 100%|████████████████████████████████████████| 41/41 [00:00<00:00, 2051.40 ops/s]
Running MIL Common passes: 100%|███████████████████████████████████████████| 34/34 [00:00<00:00, 1104.92 passes/s]
Running MIL Clean up passes: 100%|████████████████████████████████████████████| 9/9 [00:00<00:00, 682.35 passes/s]
Translating MIL ==> NeuralNetwork Ops: 100%|██████████████████████████████████| 67/67 [00:00<00:00, 1622.07 ops/s]
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
Input In [34], in <cell line: 2>()
      1 import coremltools as ct
----> 2 coreml_model = ct.converters.convert(model)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/_converters_entry.py:352, in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, useCPUOnly, package_dir, debug)
    349     if ext != _MLPACKAGE_EXTENSION:
    350         raise Exception("If package_dir is provided, it must have extension {} (not {})".format(_MLPACKAGE_EXTENSION, ext))
--> 352 mlmodel = mil_convert(
    353     model,
    354     convert_from=exact_source,
    355     convert_to=exact_target,
    356     inputs=inputs,
    357     outputs=outputs,
    358     classifier_config=classifier_config,
    359     transforms=tuple(transforms),
    360     skip_model_load=skip_model_load,
    361     compute_units=compute_units,
    362     package_dir=package_dir,
    363     debug=debug,
    364 )
    366 if exact_target == 'milinternal':
    367     return mlmodel # Returns the MIL program

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:183, in mil_convert(model, convert_from, convert_to, compute_units, **kwargs)
    144 @_profile
    145 def mil_convert(
    146     model,
   (...)
    150     **kwargs
    151 ):
    152     """
    153     Convert model from a specified frontend `convert_from` to a specified
    154     converter backend `convert_to`.
   (...)
    181         See `coremltools.converters.convert`
    182     """
--> 183     return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/converters/mil/converter.py:231, in _mil_convert(model, convert_from, convert_to, registry, modelClass, compute_units, **kwargs)
    224     package_path = _create_mlpackage(proto, weights_dir, kwargs.get("package_dir"))
    225     return modelClass(package_path,
    226                       is_temp_package=not kwargs.get('package_dir'),
    227                       mil_program=mil_program,
    228                       skip_model_load=kwargs.get('skip_model_load', False),
    229                       compute_units=compute_units)
--> 231 return modelClass(proto,
    232                   mil_program=mil_program,
    233                   skip_model_load=kwargs.get('skip_model_load', False),
    234                   compute_units=compute_units)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/model.py:346, in MLModel.__init__(self, model, useCPUOnly, is_temp_package, mil_program, skip_model_load, compute_units, weights_dir)
    343     filename = _tempfile.mktemp(suffix=_MLMODEL_EXTENSION)
    344     _save_spec(model, filename)
--> 346 self.__proxy__, self._spec, self._framework_error = _get_proxy_and_spec(
    347     filename, compute_units, skip_model_load=skip_model_load,
    348 )
    349 try:
    350     _os.remove(filename)

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/model.py:123, in _get_proxy_and_spec(filename, compute_units, skip_model_load)
    120     _MLModelProxy = None
    122 filename = _os.path.expanduser(filename)
--> 123 specification = _load_spec(filename)
    125 if _MLModelProxy and not skip_model_load:
    126 
    127     # check if the version is supported
    128     engine_version = _MLModelProxy.maximum_supported_specification_version()

File ~/.pyenv/versions/3.10.4/envs/venv/lib/python3.10/site-packages/coremltools/models/utils.py:210, in load_spec(filename)
    184 """
    185 Load a protobuf model specification from file.
    186 
   (...)
    207 save_spec
    208 """
    209 if _ModelPackage is None:
--> 210     raise Exception(
    211         "Unable to load libmodelpackage. Cannot make save spec."
    212     )
    214 spec = _Model_pb2.Model()
    216 specfile = filename

Exception: Unable to load libmodelpackage. Cannot make save spec.

zsong · May 05 '22 19:05

Coremltools does not yet support Python 3.10, so there is no 3.10 wheel; coremltools must therefore have been installed from an egg (a source build), which is consistent with this error message (see #1348).
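
If you want to confirm that, a check like the following should show whether the compiled libmodelpackage extension is importable (just a sketch based on the import that fails in the traceback above; the exact module path is an assumption about the installed version):

```python
# If this import fails, coremltools was installed without its compiled
# libmodelpackage extension (e.g. from a source/egg build), which is what
# produces "Unable to load libmodelpackage. Cannot make save spec."
try:
    from coremltools.libmodelpackage import ModelPackage  # noqa: F401
    print("libmodelpackage loaded")
except ImportError as err:
    print("libmodelpackage unavailable:", err)
```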

I'll leave this issue open to track Python 3.10 support. @zsong - until then, please use Python 3.9, which we do support.

TobyRoseman · May 05 '22 21:05

@TobyRoseman Thank you very much for the quick response. I will give it a try.

zsong · May 06 '22 04:05

Very much looking forward to Python 3.10 support! My project requires it. 🥲

fumoboy007 · Jun 13 '22 07:06

+1 😄

Is someone actively working on this right now or does it need some community love?

HeatherMa228 · Jun 14 '22 01:06

Is someone actively working on this right now or does it need some community love?

@HeatherMa228 - Currently, no one is actively working on it. Some help from the community would be great!

The first step is updating a few of our scripts to accept 3.10 as a value for the python parameter. At a minimum, we'll need to update build.sh, env_create.sh, and env_activate.sh.

TobyRoseman · Jun 14 '22 18:06

Currently blocked by issues running the tests (#1532, #1533, #1534). Can’t proceed without tests!

fumoboy007 · Jun 16 '22 06:06

Currently blocked by issues running the tests (#1532, #1533, #1534).

These are not blocking issues; they are not required for completing the steps I have already outlined.

The next step after that would be building and installing the Python 3.10 wheel and fixing any issues. The step after that would be checking whether you can import coremltools in a Python 3.10 environment and fixing any issues there. Those issues would also not block these steps.
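
A minimal smoke test for that import step could look like this (only a sketch; it just confirms the locally built package imports under 3.10 and reports its version):

```python
import sys

import coremltools as ct

# Run inside the Python 3.10 environment where the locally built wheel
# was installed.
print(sys.version)       # expect 3.10.x
print(ct.__version__)    # confirms the package imports successfully
```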

TobyRoseman · Jun 16 '22 19:06

Hmm, are you suggesting that build/import is enough to test support for a new Python version? I was hoping we could run the full test suite to guard against regressions.

fumoboy007 · Jun 17 '22 07:06

@fumoboy007 - I'm not suggesting that at all. I'm explaining why you are not blocked.

TobyRoseman · Jun 28 '22 17:06

Coremltools 6.0 supports Python 3.10.

TobyRoseman · Sep 20 '22 16:09

Is Python 3.11 now supported? Thanks.

I'm asking because I'm running into a similar problem with 3.11.2.

xmany · Apr 23 '23 00:04

@xmany - Python 3.11 is not currently supported, but will be soon. We're tracking that in #1730.

TobyRoseman · Apr 24 '23 17:04