
List available backend providers (ipex, openvino)

Open · michaelfeil opened this issue

We are looking for a way to find out which "backends" are available.

Is there an equivalent of:

import onnxruntime as ort

# lists the execution providers this onnxruntime build can actually use
providers = ort.get_available_providers()
print(providers)

We don't have the luxury of trying out an OVAutoModel with openvino, noticing it's not available, and then switching to the default Optimum model. Is there any way to detect this?

FYI: We are working on https://github.com/michaelfeil/infinity/pull/454.

michaelfeil avatar Nov 11 '24 17:11 michaelfeil

Hi @michaelfeil, apologies for the delay! If I'm understanding correctly, you'd like to check whether optimum-intel is correctly installed before loading your model with an OVModelXxx class? One thing you can do is check importlib.util.find_spec("optimum.intel") directly; here is an example of how this is done in langchain: https://github.com/langchain-ai/langchain/blob/6c7c8a164f17194c83cece0c5e74a110c29c36a7/libs/partners/huggingface/langchain_huggingface/utils/import_utils.py#L78 and https://github.com/langchain-ai/langchain/blob/6c7c8a164f17194c83cece0c5e74a110c29c36a7/libs/partners/huggingface/langchain_huggingface/llms/huggingface_pipeline.py#L147
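
A minimal sketch of that check (the helper name is just an illustration, not an existing optimum API):

import importlib.util

def is_optimum_intel_available() -> bool:
    # find_spec needs the parent package ("optimum") to be importable,
    # so guard against optimum being missing entirely.
    try:
        return importlib.util.find_spec("optimum.intel") is not None
    except ModuleNotFoundError:
        return False

print(is_optimum_intel_available())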

Also, before loading your model, make sure openvino itself is correctly installed; you can use is_openvino_available directly: https://github.com/huggingface/optimum-intel/blob/3ff8dc1ba7e9cefe927ce5a33418cbaee15ab72b/optimum/intel/utils/import_utils.py#L211
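
A sketch of how that could gate the backend choice (the feature-extraction classes and the model id are just example choices for an embedding workload, not the only option):

# is_openvino_available lives in optimum/intel/utils/import_utils.py (linked above).
from optimum.intel.utils.import_utils import is_openvino_available

if is_openvino_available():
    # The OpenVINO runtime is installed, so the OVModel classes can be used.
    from optimum.intel import OVModelForFeatureExtraction as ModelClass
else:
    # Fall back to the default Optimum (onnxruntime) model.
    from optimum.onnxruntime import ORTModelForFeatureExtraction as ModelClass

# export=True converts the original checkpoint to the chosen backend's format on the fly.
model = ModelClass.from_pretrained("BAAI/bge-small-en-v1.5", export=True)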

Let me know if that helps, or if you'd need some additional integration in optimum to make things easier.

echarlaix avatar Feb 27 '25 17:02 echarlaix

@michaelfeil any update on this ?

echarlaix avatar Apr 28 '25 09:04 echarlaix