bug: Only one model is available via the Nvidia NIM API
- [X] I have searched the existing issues
Current behavior
Only one model is listed for the Nvidia NIM API.
Minimum reproduction step
Open the model-selection dropdown and scroll to the Nvidia section; only one model, Mistral 7B, is listed.
Expected behavior
All models available on Nvidia NIM (e.g. Gemma 2 27B, Llama 3 70B) should be listed, or at least a larger selection of them.
Screenshots / Logs
Jan version
v0.5.1 stable
In which operating systems have you tested?
- [ ] macOS
- [X] Windows
- [ ] Linux
Environment details
- OS: Windows 11
- Processor: AMD Ryzen 9 7950X3D
- RAM: 32GB DDR5
- Graphics Card: Integrated Graphics
@Realmbird You made this extension, right? This looks easy to fix. I tried editing the model.json file in extensions/inference-nvidia-extension/resources, but it didn't work.
Confirmed. Nvidia NIM supports a much larger set of models for completions, but they are not listed.
Ideally, the listing should be dynamic so that all available models appear automatically.
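A rough sketch of what a dynamic listing could look like, assuming NIM's OpenAI-compatible `/v1/models` endpoint (the base URL and response shape here are assumptions, not Jan's actual extension code):

```typescript
// Shape of an OpenAI-style /v1/models response (assumed).
interface NimModel {
  id: string;
  object: string;
}
interface NimModelList {
  object: string;
  data: NimModel[];
}

// Pure helper: extract model IDs from a /v1/models response body,
// so the dropdown can be populated instead of hard-coding Mistral 7B.
function extractModelIds(body: NimModelList): string[] {
  return body.data.map((m) => m.id);
}

// Hypothetical fetch side; endpoint URL is an assumption.
async function listNimModels(apiKey: string): Promise<string[]> {
  const res = await fetch("https://integrate.api.nvidia.com/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`NIM /v1/models request failed: ${res.status}`);
  }
  return extractModelIds((await res.json()) as NimModelList);
}
```

The extension could call this once at startup (or on dropdown open) and fall back to the bundled model.json list if the request fails.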
related to: https://github.com/janhq/jan/issues/3374
We will need to refactor this into a larger feature revamp for "Remote Model Extensions"
- OpenRouter models (multiple models)
- Nvidia NIM
Closing as a dupe of: https://github.com/janhq/jan/issues/3374