Client: reject incompatible models
As a user of the connectors, I can select from many models and want to be alerted early if a model is not compatible with my use case.
Example test, with `public_class` being one of `NvidiaGenerator`, `NvidiaTextEmbedder`, or `NvidiaDocumentEmbedder` for Haystack:
- `invalid` = select a known model that does not work with `public_class`, e.g. `NV-Embed-QA` with `NvidiaGenerator`
- `public_class(model=invalid, nvidia_api_key="a-bogus-key")`
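The desired early-reject behaviour could look roughly like the sketch below. The class and model names mirror the issue, but `EMBEDDING_MODELS` and the constructor check are hypothetical illustrations, not the actual integration API:

```python
from typing import Optional

# Hypothetical registry of models known to be embedding-only,
# and therefore incompatible with a generator component.
EMBEDDING_MODELS = {"NV-Embed-QA"}


class NvidiaGenerator:
    """Minimal stand-in for the real generator, used to illustrate the check."""

    def __init__(self, model: str, nvidia_api_key: Optional[str] = None):
        # Reject known-incompatible models at construction time,
        # before any API call is made, so the user is alerted early.
        if model in EMBEDDING_MODELS:
            raise ValueError(
                f"Model '{model}' is an embedding model and is not "
                f"compatible with NvidiaGenerator."
            )
        self.model = model
        self._api_key = nvidia_api_key


# The example test from above: constructing with an incompatible
# model should fail immediately.
try:
    NvidiaGenerator(model="NV-Embed-QA", nvidia_api_key="a-bogus-key")
except ValueError as err:
    print(err)
```

Doing the check in `__init__` (rather than at first request) is what makes the failure "early"; the same registry could be consulted by the embedder classes with the roles reversed.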
cc: @sumitkbh @mattf
@mattf ptal
both `NvidiaDocumentEmbedder` and `NvidiaTextEmbedder` define the backend as:

```python
self.backend: Optional[Any] = None
```

while `NvidiaGenerator` defines it as:

```python
self._backend: Optional[NimBackend] = None
```

maybe we can normalise this?
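The normalisation being proposed could look like the sketch below: all three components use the same private attribute name with the same type annotation. `NimBackend` is the type already referenced in the issue; the class bodies here are illustrative stubs, not the real implementations:

```python
from typing import Optional


class NimBackend:
    """Stand-in for the real NIM backend class."""


class NvidiaTextEmbedder:
    def __init__(self) -> None:
        # private attribute, typed consistently with the generator
        self._backend: Optional[NimBackend] = None


class NvidiaDocumentEmbedder:
    def __init__(self) -> None:
        self._backend: Optional[NimBackend] = None


class NvidiaGenerator:
    def __init__(self) -> None:
        self._backend: Optional[NimBackend] = None
```

Using the private `_backend` name everywhere signals that the attribute is an implementation detail, and the shared `Optional[NimBackend]` annotation is more precise than `Optional[Any]`.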
```python
class NvidiaGenerator:
    """
    Generates text using generative models hosted with
    [NVIDIA NIM](https://ai.nvidia.com) on on the [NVIDIA API Catalog](https://build.nvidia.com/explore/discover)
```

typo above: double "on"
what about renaming
`integrations/nvidia/src/haystack_integrations/utils/nvidia/statics.py` to `model.py`, since all the code in that file is related to Model information?
both `NvidiaDocumentEmbedder` and `NvidiaTextEmbedder` have the backend as:

```python
self.backend: Optional[Any] = None
```

while `NvidiaGenerator` has:

```python
self._backend: Optional[Any] = None
```

let's have them all either private or public, just for coherence.
@davidsbatista I have made the requested changes but do not have access to merge the request; please do it.