cesarandreslopez
Same question here. Any suggestions would be welcome. @nwojke @deamonDevs
@longzeyilang will you be making a pull request with your version of the loss function? I have found that yours produces better results. Thank you both!
This would be of great value, agreed.
The link https://pypi.org/project/py-translator/ now returns a 404.
I'm looking forward to seeing JSON Schema support merged!
Seeing the same here for CUDA 12.3.
@pklochowicz in case it's useful for you, this will work with CUDA support:
```
RUN CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python==v0.3.5
```
As of my tests today, setting verbose=False on the chat handler resolves this. It does need to be set explicitly, though.
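For reference, a minimal sketch of the setup that worked for me, assuming llama-cpp-python's `Llava15ChatHandler`; the model paths are placeholders:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# verbose=False must be set explicitly on the chat handler itself,
# not only on the Llama instance.
chat_handler = Llava15ChatHandler(
    clip_model_path="mmproj-model.gguf",  # placeholder path
    verbose=False,
)
llm = Llama(
    model_path="model.gguf",  # placeholder path
    chat_handler=chat_handler,
    verbose=False,
)
```

This is a sketch only; it needs actual model files on disk to run.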