nim-anywhere
Accelerate your Gen AI with NVIDIA NIM and NVIDIA AI Workbench
The NIM endpoints are currently listed as LLM-NIM-0 and LLM-NIM-1. Both are actually the llama3-8B-instruct model, which can be confirmed by inspecting the environment variables, but this should be more accessible.
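A minimal sketch of how you might confirm this from inside the environment: dump every environment variable whose name mentions NIM and read off which model each `LLM-NIM-*` entry points at. The name filter is an assumption; adjust it to whatever variable names the project actually sets.

```python
# Minimal sketch: print environment variables that mention "NIM" so you can
# see which model each LLM-NIM-* entry points to. The substring filter is an
# assumption about the variable naming; tweak it for your environment.
import os

for name, value in sorted(os.environ.items()):
    if "NIM" in name.upper():
        print(f"{name}={value}")
```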
You can end up with an old, incomplete `config.yaml` across releases. This is a common problem; it's unclear whether it should be fixed or just documented.

### Steps

1. Run...
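A hedged sketch of one way to spot a stale config after an upgrade: flatten the keys in the existing `config.yaml` and compare them against a reference config shipped with the current release. The reference file name `config.example.yaml` is an assumption, as is the flat key-by-key comparison; it requires PyYAML.

```python
# Hedged sketch: diff the keys of an existing config.yaml against a reference
# config from the current release to spot entries that went missing across an
# upgrade. File names are assumptions; adjust to the repository layout.
# Requires PyYAML (`pip install pyyaml`).
import yaml

def flatten(d, prefix=""):
    """Flatten nested dict keys into dotted paths, e.g. milvus.url."""
    keys = set()
    for k, v in d.items():
        path = f"{prefix}{k}"
        if isinstance(v, dict):
            keys |= flatten(v, path + ".")
        else:
            keys.add(path)
    return keys

with open("config.yaml") as f:
    current = flatten(yaml.safe_load(f) or {})
with open("config.example.yaml") as f:  # assumed reference file
    reference = flatten(yaml.safe_load(f) or {})

missing = sorted(reference - current)
if missing:
    print("Keys missing from config.yaml:")
    for key in missing:
        print(f"  - {key}")
```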
Which boxes in the diagram are the deployed components?

* Chat Server
* Chain Server
`/docs/_static/screenshot.png` and `/docs/_static/na_frontend.png` show the topology from a previous version.

#### Screenshots that might work
What are the DNS names for the Redis and Milvus servers? They are referred to as `localhost` in some places and as `redis` and `milvus` in others. They are pingable...
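A small sketch for checking which names actually resolve and accept connections from wherever the chain server runs. The host/port pairs are assumptions based on the default Redis and Milvus ports; adjust them to your deployment.

```python
# Hedged sketch: test which hostnames resolve and which ports accept
# connections for the Redis and Milvus services. Host/port pairs are
# assumptions (default Redis and Milvus gRPC ports).
import socket

CANDIDATES = [
    ("localhost", 6379),   # Redis, default port
    ("redis", 6379),
    ("localhost", 19530),  # Milvus, default gRPC port
    ("milvus", 19530),
]

for host, port in CANDIDATES:
    try:
        addr = socket.gethostbyname(host)
        with socket.create_connection((host, port), timeout=2):
            status = f"resolves to {addr}, port {port} open"
    except OSError as exc:
        status = f"unreachable ({exc})"
    print(f"{host}:{port} -> {status}")
```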
Updating the docs requires additional utility software and a Make environment. Those aren't called out in https://github.com/nvidia/nim-anywhere?tab=readme-ov-file#updating-documentation. Also, the Makefile assumes you are running a Linux shell, which is fine, but...
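A hedged sketch of the kind of preflight check that section could point to: verify the documentation tooling is on PATH before running `make`. The tool list itself is an assumption, since the README does not name the utilities.

```python
# Hedged sketch: check that assumed documentation tools are on PATH before
# running the Makefile. The tool list is an assumption; replace it once the
# actual requirements are documented.
import shutil

TOOLS = ["make", "pandoc", "python3"]  # assumed requirements

for tool in TOOLS:
    path = shutil.which(tool)
    print(f"{tool}: {'found at ' + path if path else 'MISSING'}")
```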
In personal key generation, the number of services included differs for internal and external users, and it changes as NGC develops. The instruction should simply be "check all boxes".
The quick start should begin with a prerequisites section that lists the different accounts that are required:

- GitHub
- NGC