Chansung Park
@DanielWe2 thanks! Good that you liked it :)
@alexanderfrey in streaming mode, most of the parameters in `GenerationConfig` are not supported. Do you think you can make that happen?
Hey @jakoblorz ! First of all, thanks for the PR. It sounds very interesting. Could you share some resources so that I can test this code on an M1? I have a Mac...
Sorry, I was not targeting multi-GPU environments at the moment. If you think you can, please propose a PR :)
https://github.com/deep-diver/LLM-As-Chatbot
Oh, I will fix it by not using half precision in the case of t5-vicuna. Thanks for reporting this.
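Something along these lines, assuming the t5-vicuna checkpoint is `lmsys/fastchat-t5-3b-v1.0` (the actual loading code in the repo may differ):

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Assumed checkpoint id for t5-vicuna; adjust to whatever the repo actually uses.
model_id = "lmsys/fastchat-t5-3b-v1.0"

# Keep full precision (float32) instead of calling .half() on the T5-based model,
# since half precision is what triggered the reported issue.
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,
)
```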
It seems like there is some sort of internal error in the Hugging Face Hub infrastructure.
Yes. Just create a single `pingpongs` list, then share it with two different prompt objects.
For example, let's say you want to switch between different prompting styles from `Alpaca` and `StableLM` while the underlying chats are shared:

```python
from pingpong.gradio import GradioAlpacaChatPPManager
from pingpong.gradio import ...
```
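Filling in the rest of that snippet as a rough sketch: it assumes the package exposes a `GradioStableLMChatPPManager` counterpart, a `PingPong` record, and that each manager keeps its history in a `pingpongs` list (names may differ slightly from the actual `pingpong` API):

```python
from pingpong import PingPong  # assumed import path for the PingPong record
from pingpong.gradio import GradioAlpacaChatPPManager
from pingpong.gradio import GradioStableLMChatPPManager  # assumed StableLM counterpart

# One shared conversation history: a plain list of PingPong objects.
shared_pingpongs = []

# Two prompt managers in different styles, both pointing at the same list,
# so a turn recorded through either one is visible to the other.
alpaca_ppm = GradioAlpacaChatPPManager()
stablelm_ppm = GradioStableLMChatPPManager()
alpaca_ppm.pingpongs = shared_pingpongs
stablelm_ppm.pingpongs = shared_pingpongs

# Record a turn through the Alpaca manager...
alpaca_ppm.add_pingpong(PingPong("Hi there!", "Hello! How can I help?"))

# ...then build the prompt in either style from the same underlying chats.
print(alpaca_ppm.build_prompts())    # Alpaca-style prompt
print(stablelm_ppm.build_prompts())  # StableLM-style prompt
```

The key point is that both managers reference the same `pingpongs` list object, so switching styles does not fork the history.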
what kind of changes have you made?