Add NvidiaChatGenerator component
Is your feature request related to a problem? Please describe.
The current Nvidia integration comprises NvidiaTextEmbedder, NvidiaDocumentEmbedder, NvidiaRanker, and NvidiaGenerator, but no NvidiaChatGenerator.
We should implement an NvidiaChatGenerator component and add a code example to https://haystack.deepset.ai/integrations/nvidia.
Describe the solution you'd like
nvidia/src/haystack_integrations/utils/nvidia/nim_backend.py contains a generate method that uses the /chat/completions endpoint.
We should implement a chat_generate method in nim_backend.py. After that, generate can probably be refactored to call chat_generate with a list containing a single ChatMessage whose role is set to "user" (see the sketch below).
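For illustration, here is a rough sketch of what chat_generate and the refactored generate could look like. The attribute names (self.session, self.api_url, self.model, self.model_kwargs), the response parsing, and the ChatMessage accessors are assumptions about nim_backend.py and the current Haystack API, not the actual code:

```python
from typing import Any, Dict, List, Tuple

from haystack.dataclasses import ChatMessage


def chat_generate(self, messages: List[ChatMessage]) -> Tuple[List[str], List[Dict[str, Any]]]:
    # POST the whole chat history to the OpenAI-compatible /chat/completions endpoint.
    # self.session / self.api_url / self.model / self.model_kwargs are hypothetical names.
    res = self.session.post(
        f"{self.api_url}/chat/completions",
        json={
            "model": self.model,
            "messages": [{"role": m.role.value, "content": m.text} for m in messages],
            **self.model_kwargs,
        },
    )
    res.raise_for_status()
    completions = res.json()
    replies = [choice["message"]["content"] for choice in completions["choices"]]
    meta = [
        {"model": completions.get("model"), "finish_reason": choice.get("finish_reason")}
        for choice in completions["choices"]
    ]
    return replies, meta


def generate(self, prompt: str) -> Tuple[List[str], List[Dict[str, Any]]]:
    # generate becomes a thin wrapper around chat_generate with a single user message.
    return self.chat_generate([ChatMessage.from_user(prompt)])
```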
Note that we are implementing a run_async method in all ChatGenerators too. An initial implementation of NvidiaChatGenerator without run_async would be okay, but it would be better to add it as well (a sketch follows). https://github.com/deepset-ai/haystack-core-integrations/issues/1379
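A minimal sketch of how the component's run/run_async pair could look, assuming the backend exposes the chat_generate sketched above; the class structure and the thread-offloading approach are illustrative only, not the final API:

```python
import asyncio
from typing import Any, Dict, List

from haystack import component
from haystack.dataclasses import ChatMessage


@component
class NvidiaChatGenerator:
    def __init__(self, backend):
        # backend is assumed to be a NIM backend instance exposing chat_generate
        self.backend = backend

    @component.output_types(replies=List[ChatMessage], meta=List[Dict[str, Any]])
    def run(self, messages: List[ChatMessage]):
        replies, meta = self.backend.chat_generate(messages)
        return {"replies": [ChatMessage.from_assistant(r) for r in replies], "meta": meta}

    async def run_async(self, messages: List[ChatMessage]):
        # Same output shape as run. Simplest async variant: offload the blocking
        # HTTP call to a thread; a real implementation would likely use an async
        # HTTP client instead.
        replies, meta = await asyncio.to_thread(self.backend.chat_generate, messages)
        return {"replies": [ChatMessage.from_assistant(r) for r in replies], "meta": meta}
```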
Describe alternatives you've considered
Additional context
Adding this component was suggested on Discord.
I wonder if this is even necessary. My understanding is that Nvidia NIM can be hosted locally or used through their API, so this could all be accomplished with the OpenAIChatGenerator (they even recommend using the OpenAI Python library here). There doesn't seem to be anything unique to the Nvidia NIM API that would make a dedicated generator necessary for those features alone. Since it is OpenAI API compliant, a separate component would seem redundant. The Llama.cpp and Ollama chat generators that we have, at least, expose some additional unique features/parameters that make it a good idea to use their dedicated chat generators rather than OpenAIChatGenerator with an api_base_url.
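For reference, the workaround described above would look roughly like this; the base URL and model name are examples, not verified values:

```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

# Point the OpenAI-compatible chat generator at NVIDIA's hosted NIM endpoint
# (or a locally hosted NIM URL).
generator = OpenAIChatGenerator(
    api_key=Secret.from_env_var("NVIDIA_API_KEY"),
    api_base_url="https://integrate.api.nvidia.com/v1",  # example endpoint
    model="meta/llama3-8b-instruct",  # example model name
)

result = generator.run(messages=[ChatMessage.from_user("What is Haystack?")])
print(result["replies"][0].text)
```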