Minh Bui

Results 8 comments of Minh Bui

Really helpful, thanks!

I believe you should specify a port larger than 8000 for that; the current one is 7860. Mine was solved by using port 8006. See line 468 in server/gradio_web_server.py.
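To pick a working port before launching, a minimal sketch (a hypothetical helper, not part of the repo) that scans for the first free TCP port above 8000:

```python
import socket

def find_free_port(start: int = 8001, end: int = 8100) -> int:
    """Return the first TCP port in [start, end) that is free to bind."""
    for port in range(start, end):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return port  # bind succeeded, so the port is free
            except OSError:
                continue  # port already in use, try the next one
    raise RuntimeError(f"no free port in range {start}-{end}")

print(find_free_port())
```

You could then pass the result as the server port (e.g. via the script's port argument) instead of hard-coding 8006.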

Are you running it locally or from Docker? What about a proxy? Or try `lsof -i -P -n | grep LISTEN` to check whether all the ports are listening.
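The same check can be done programmatically. A small sketch (hypothetical helper, assuming the server listens on localhost) that tests whether anything accepts connections on a given port, similar to what the `lsof` command above shows:

```python
import socket

def is_listening(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0

# Example: check the default Gradio port
print(is_listening("127.0.0.1", 7860))
```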

Is there any documentation for running inference with BART-type models? Thanks.

Does it support the MBartForConditionalGeneration model, @afeldman-nm? Thanks.

Thanks for your fast response, I will try to fine-tune with the new version. Can you share details about the GPU used for this? We could have a chat to solve this...