David S. Batista
Hi @tsmith023! Have you had a chance to work on this issue?
@alperkaya I've reverted some of the changes you made and added a few more tests - thanks for your contribution.
@silvanocerza applied your requested changes
For the tests to pass, we will have to wait for a new Haystack release that includes the `TextEmbedder` protocol. In the meantime, @sjrl feel free to give a second review.
This was already discussed previously for Weaviate: https://github.com/deepset-ai/haystack-core-integrations/issues/471
To release async connections/resources, the equivalent is `__aexit__`.
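For concreteness, a minimal sketch of the pattern: `__aexit__` is the async counterpart of `__exit__`, called when an `async with` block exits. The `AsyncNimClient` class and its `connected` flag here are purely illustrative, not the actual integration code.

```python
import asyncio


class AsyncNimClient:
    """Hypothetical client: __aenter__ acquires an async resource,
    __aexit__ releases it when the `async with` block exits."""

    def __init__(self) -> None:
        self.connected = False

    async def __aenter__(self) -> "AsyncNimClient":
        self.connected = True  # e.g. open an aiohttp session here
        return self

    async def __aexit__(self, exc_type, exc, tb) -> None:
        self.connected = False  # close the session / release resources


async def main() -> bool:
    async with AsyncNimClient() as client:
        assert client.connected  # resource held inside the block
    return client.connected  # released on exit


print(asyncio.run(main()))  # → False
```

Note that `__aexit__` runs even when the block raises, so it is the right place for cleanup that must always happen.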
both `NvidiaDocumentEmbedder` and `NvidiaTextEmbedder` define the backend as:

```python
self.backend: Optional[Any] = None
```

while the `NvidiaGenerator` defines it as:

```python
self._backend: Optional[NimBackend] = None
```

maybe we can normalise...
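A sketch of what the normalised version could look like, using the `NvidiaGenerator` style (private name, concrete type). `NimBackend` is a placeholder here standing in for the real backend class, and `warm_up` is only shown to illustrate where the attribute gets assigned:

```python
from typing import Optional


class NimBackend:
    """Placeholder for the real NIM backend class."""


class NvidiaTextEmbedder:
    def __init__(self) -> None:
        # private name and concrete type, matching NvidiaGenerator
        self._backend: Optional[NimBackend] = None

    def warm_up(self) -> None:
        # lazily create the backend on first use
        if self._backend is None:
            self._backend = NimBackend()
```

Typing it as `Optional[NimBackend]` rather than `Optional[Any]` also lets a type checker catch misuse of the backend.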
```python
class NvidiaGenerator:
    """
    Generates text using generative models hosted with [NVIDIA NIM](https://ai.nvidia.com) on on the [NVIDIA API Catalog](https://build.nvidia.com/explore/discover)
```

typo above, double "on"
what about renaming `integrations/nvidia/src/haystack_integrations/utils/nvidia/statics.py` to `model.py`, since all the code in that file is related to Model information?
both `NvidiaDocumentEmbedder` and `NvidiaTextEmbedder` have the backend as:

```python
self.backend: Optional[Any] = None
```

while the `NvidiaGenerator`:

```python
self._backend: Optional[Any] = None
```

let's have them all either as...