OpenHands
Adding llama-3.1-nemotron-70b-instruct
Currently it is not possible to add the model from NVIDIA. It would be great to have this option, as the API costs are quite low.
Does it not work if you enable Advanced Options and set it as the model?
Here is the litellm docs: https://docs.litellm.ai/docs/providers/nvidia_nim
Unless it's not supported by litellm
Yes @David-Sola, you could access the model via any hosting API that supports it. For instance, OpenRouter should work.
Going to close this, since you can specify the model using Advanced Options; you don't need it in the dropdown.
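For anyone landing here later: Advanced Options takes a litellm model string. A minimal sketch of what the equivalent direct litellm call looks like, assuming the `nvidia_nim/` provider prefix described in the litellm docs linked above; the exact model path is an assumption, so check NVIDIA's model catalog for the real identifier:

```python
import os

# Hedged sketch, not an official OpenHands snippet. Per the litellm docs
# (https://docs.litellm.ai/docs/providers/nvidia_nim), NIM-hosted models are
# addressed with a "nvidia_nim/" prefix. The model path below is an
# assumption; verify it against NVIDIA's catalog.
model = "nvidia_nim/nvidia/llama-3.1-nemotron-70b-instruct"

# Only hit the API if a key is actually configured in the environment.
if os.environ.get("NVIDIA_NIM_API_KEY"):
    import litellm  # requires `pip install litellm`

    response = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(response.choices[0].message.content)
else:
    print(model)  # prints the model string when no API key is set
```

The same string goes into the custom model field in OpenHands' Advanced Options, with the NVIDIA API key supplied alongside it.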