Bob123Yang
@arjunsuresh I ran into another download issue: I cannot access https://zenodo.org/. Could you help provide a backup download address for zenodo.org? Thanks. [log.txt](https://github.com/user-attachments/files/18053217/log.txt)
Thank you @arjunsuresh, so do you mean that non-NVIDIA implementations have the option of changing the precision via a parameter for MLPerf?
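Just to make sure I understand, this is the kind of invocation I have in mind (a minimal sketch only; the tag set and the `--precision` flag/values here are my assumptions and need to be confirmed against the CM docs):

```python
import subprocess

# Hypothetical sketch: drive the MLPerf reference (non-NVIDIA) implementation via CM
# and request a different precision. The "--precision" flag and its accepted values
# are assumptions on my side, not confirmed.
cmd = [
    "cm", "run", "script",
    "--tags=run-mlperf,inference",   # assumed tag set
    "--model=resnet50",
    "--implementation=reference",
    "--device=cuda",
    "--precision=float16",           # the parameter I am asking about
]
subprocess.run(cmd, check=True)
```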
Oh, that's a pity, thanks! @arjunsuresh Could you help confirm one more question about the NVIDIA multi-GPU scenario - how to run MLPerf inference on multiple GPUs which are connected...
Wow! But rnnt is still displayed on the page (https://github.com/mlcommons/inference) below:
Thanks, Kevin @KevinHuSh @dosu, but failing to access the Ollama service from the RAGFlow server as shown below would not be a real success, right?
It passes with 192.168.1.26 replacing 127.17.0.1. 127.17.0.1 is a fake IP, while 192.168.1.26 is the real IP assigned to my Wi-Fi card, which can access the internet.
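For completeness, this is the quick reachability check I used (a minimal sketch; 11434 is Ollama's default port, and the two addresses are specific to my setup):

```python
import requests

# Probe the Ollama service from the RAGFlow side.
# /api/tags simply lists the locally available models, so a 200 response
# means the base URL is reachable.
for base_url in ("http://127.17.0.1:11434", "http://192.168.1.26:11434"):
    try:
        r = requests.get(f"{base_url}/api/tags", timeout=5)
        print(base_url, "->", r.status_code)
    except requests.RequestException as err:
        print(base_url, "-> unreachable:", err)
```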
@dosu From the logs I pasted above, please tell me which IP I can use to log in to RAGFlow from a remote system that has network connectivity to RAGFlow...
@dosu I fail to access RAGFlow using the URL http://172.18.0.6:9380 in Firefox from the remote system, and "ping 172.18.0.6" from the remote system also fails. I can get...
Thanks @asiroliu. My RAGFlow server has 10.103.162.176 assigned to its Wi-Fi card, as shown in the picture below, which is the output of the command "ifconfig" running on...
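As a sanity check on my side (a minimal sketch; port 9380 is just the one I have been trying from the remote browser, and it may differ in other setups), this is how I confirm which host IP a remote system should target, instead of a Docker-internal 172.x address, and whether that port answers on it:

```python
import socket

def lan_ip():
    # Determine the LAN IP of the RAGFlow host; connecting a UDP socket
    # does not send any packets, it only selects the outbound interface.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.connect(("8.8.8.8", 80))
    ip = s.getsockname()[0]
    s.close()
    return ip

host_ip = lan_ip()   # e.g. 10.103.162.176 in my case
port = 9380          # the port I am trying from the remote browser
with socket.socket() as probe:
    probe.settimeout(3)
    result = probe.connect_ex((host_ip, port))
    print(host_ip, port, "open" if result == 0 else "closed/filtered")
```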
Thank you @arjunsuresh! If using CM, does that mean I should pull down mlcommons@cm4mlops instead of the closed/NVIDIA one below? Why not use the NVIDIA distribution for NVIDIA-GPU-based inference?...
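In case it helps others reading this, these are the steps I plan to follow (a sketch based on my reading of the CM docs; please correct me if the repo name or flow is wrong):

```python
import subprocess

# Install the CM automation framework and pull the MLCommons cm4mlops repository,
# which (as I understand it) is what drives the NVIDIA implementation when
# targeting NVIDIA GPUs. Repo name taken from the MLCommons docs.
subprocess.run(["pip", "install", "cmind"], check=True)
subprocess.run(["cm", "pull", "repo", "mlcommons@cm4mlops"], check=True)
```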