Chris Chen

16 comments by Chris Chen

```
# https://forums.developer.nvidia.com/t/issues-running-deepstream-on-wsl2-docker-container-usr-lib-x86-64-linux-gnu-libcuda-so-1-file-exists-n-unknown/139700/4
RUN cd /usr/lib/x86_64-linux-gnu && rm libnvidia*.so.1 ||:
RUN cd /usr/lib/x86_64-linux-gnu && rm libcuda.so.1 ||:
RUN cd /usr/lib/x86_64-linux-gnu && rm libnvcuvid.so.1 ||:
```

can be fixed by adding...

I replaced gpt-4-32k with GPT-3.5-Turbo-16k and so far so good

> I tried to replace gpt-4-32k with GPT-3.5-Turbo-16k and I get this error now. I am sure I am missing something.
>
> InvalidRequestError: The model `GPT-3.5-Turbo-16k` does not exist...

> ![Screenshot 2023-10-08 051943](https://user-images.githubusercontent.com/76617481/273438829-b4c6628c-f0ce-421d-93a6-3bc48861e2fb.png)
>
> This is what I am getting for an error.

Can you try "gpt-3.5-turbo-16k" instead of "GPT-3.5-Turbo-16k"?
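The error above comes down to case sensitivity: OpenAI model identifiers are all-lowercase, so `GPT-3.5-Turbo-16k` is rejected while `gpt-3.5-turbo-16k` works. A minimal sketch of guarding against this before making the API call (the `normalize_model_name` helper is hypothetical, not part of any library):

```python
def normalize_model_name(name: str) -> str:
    """Lowercase and trim a model identifier, since OpenAI model IDs
    are case-sensitive and all-lowercase (e.g. 'gpt-3.5-turbo-16k')."""
    return name.strip().lower()

# A mixed-case name like the one in the error above gets normalized:
print(normalize_model_name("GPT-3.5-Turbo-16k"))
```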

Yes, it's possible, but it would require a lot of refactoring. A simpler approach could be connecting to another local endpoint service, e.g. [oobabooga](https://github.com/oobabooga/text-generation-webui). See also: https://www.reddit.com/r/LocalLLaMA/comments/15fxron/best_llama2_model_for_storytelling/
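The appeal of a local endpoint is that servers such as text-generation-webui can expose an OpenAI-compatible API, so only the base URL (and model name) changes. A rough sketch of building such a request; the port, base URL, and model name here are assumptions, not verified values:

```python
import json

# Assumed local server address; text-generation-webui's OpenAI-compatible
# extension typically listens on a local port like this.
LOCAL_BASE_URL = "http://127.0.0.1:5000/v1"

def build_chat_request(prompt: str, model: str = "local-model"):
    """Build the URL and JSON body for an OpenAI-style chat completion
    aimed at a local endpoint instead of api.openai.com."""
    url = f"{LOCAL_BASE_URL}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, json.dumps(body)

url, body = build_chat_request("Write a short story.")
print(url)
```

With this shape, an existing OpenAI client can usually be pointed at the local server by overriding its base URL rather than rewriting the calling code.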

@renman87 It looks like you still need to add aframe-gradient-sky as a dependency to make it work...