Muhammad Harris
This code needs a redo; it looks like it was written by a 2-year-old (in SE years).
Hi, so I am using an NVIDIA Jetson Nano (arm64). I have tried a lot of things to get a complete stack trace, but all ended in vain: Running sample bracktrace_test on...
You need to have the original number of COCO classes to load the model. This happened to me when I changed the classes in the config to run yymnist training classes.
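To see why the class count must match the checkpoint's, here is a minimal sketch (illustrative only, not the yolov3-tf2 API): in YOLO-style detectors the final detection layer predicts, per anchor, 4 box coordinates + 1 objectness score + one score per class, so its channel count is a direct function of `num_classes`, and pretrained COCO weights won't fit a head built for a different class count.

```python
# Illustrative sketch (not the yolov3-tf2 API): the detection head's
# channel count depends on num_classes, so weights saved for one class
# count cannot be loaded into a head built for another.

ANCHORS_PER_SCALE = 3  # standard YOLOv3 setting

def head_channels(num_classes: int) -> int:
    """Channels of the final detection conv layer for a given class count."""
    # Per anchor: 4 box coords + 1 objectness + num_classes class scores.
    return ANCHORS_PER_SCALE * (5 + num_classes)

coco = head_channels(80)     # COCO checkpoint: 3 * (5 + 80) = 255
custom = head_channels(10)   # e.g. a 10-class custom config: 3 * (5 + 10) = 45

# 255 != 45: the weight tensors have different shapes, so you must load
# the model with the original COCO class count first, then rebuild or
# retrain the head for your own classes.
print(coco, custom)
```

This is why changing the class count in the config before loading pretrained weights fails: the head's weight shapes no longer match the checkpoint.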
@officialabdulrehman can you mention the server version and the SDK version that you are using? This seems to be a problem when using it in `React`.
Also stuck on the same problem :( The mounter was goofsys; other mounters didn't even come close to working. Things I have tried: 1. running the container as root with security...
Is #1054 related to this?
@lacasseio any leads on this?
[Link](https://github.com/zzh8829/yolov3-tf2/blob/master/colab_gpu.ipynb) to the yolov3-tf2 Colab example.
@h4ckm1n-dev you can pass in edit params through setup as well: ``` openai_edit_params = { model = "", frequency_penalty = 0, presence_penalty = 0, temperature = 0, top_p = 1,...
Having the same issue. The 8-bit quantized model is outputting gibberish, while 4-bit is working fine. Also, AutoGPTQ on CPU is working with 8-bit; having issues on CUDA...