ToanDo
Hi @faisalhr1997, after converting the model to TensorRT, you should restart the Google Colab session and then execute this line of code. The problem might come from Google Colab. Let...
Hi @TalhaUsuf, we used the ONNX files from the [HuggingFace hub](https://huggingface.co/kamalkraj/stable-diffusion-v1-4-onnx/tree/main), which are converted from [this repo](https://github.com/kamalkraj/stable-diffusion-tritonserver/tree/master). You can refer to this repo to convert PyTorch to ONNX. Another option is...
Hi @theblackcat102, could you please let me know where we can download Open Assistant private datasets like oa_translated? Thank you.
Hi @jimafisk, thank you for your suggestion. We will add support for Open Assistant datasets soon.
Hi @Guangxuan-Xiao, regarding GELU in the BLOOM model: do you implement a W8A8B8O8Linear**Gelu** kernel for it, or a custom GELU activation function to handle the 8-bit output of W8A8B8O8Linear?
Hi @ClementCJ, thank you for your suggestion. At the moment, we do not plan to integrate with WebUI. We hope that you can do the experiment to...
Hi @ClementCJ, I think the datatype of your inputs may not be supported by the TensorRT model. Please make sure that the inputs are in float32. You can print out engine.get_binding_dtype(binding)...
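A minimal sketch of the float32 fix on the host side (the `host_input` array here is hypothetical; the real one comes from your tokenizer or latents). NumPy creates float64 arrays by default, and a TensorRT engine built with float32 bindings will reject them:

```python
import numpy as np

# Hypothetical host-side input; np.random.rand returns float64 by default.
host_input = np.random.rand(1, 4, 64, 64)

# Cast to float32 (and make contiguous) before copying into the
# TensorRT input buffer, so the dtype matches the engine binding.
host_input = np.ascontiguousarray(host_input, dtype=np.float32)

print(host_input.dtype)  # float32
```

Comparing this dtype against what `engine.get_binding_dtype(binding)` reports for each binding will confirm the mismatch.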
Hi @appleatiger, we have added support for more GPUs such as the RTX 3080, 3090, 4080, and 4090. Please reinstall our latest stochasticx library, try again, and let us know if there is any issue...
Hi @appleatiger, have you installed the NVIDIA runtime on your machine and set Docker's default runtime to nvidia-runtime?
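For reference, a sketch of the Docker configuration in question, assuming a standard NVIDIA Container Toolkit install: `/etc/docker/daemon.json` typically looks like this when the NVIDIA runtime is set as the default:

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

After editing, restart Docker (e.g. `sudo systemctl restart docker`) and verify with `docker info | grep -i runtime`.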