Chansung Park

Results: 145 comments by Chansung Park

Dang, I forgot to upload it. I'll share it when I commit.

@BEpresent that is something I don't know about; we need some experimentation. @DanielWe2 the code has been updated.

@makovez the same dataset (actually the one cleaned up in this repository).

@makovez you can actually use the notebook in my repository; here is the [link](https://github.com/deep-diver/Alpaca-LoRA-Serve/blob/main/notebooks/alpaca_lora_in_colab.ipynb). Just remember to set the checkpoint correctly. Colab Pro with the premium GPU option would work...

@makovez This is just a playground. In order to reflect reality better, it should be changed a lot. My basic idea is to let users choose when to summarize...

There should be lots of tweaks to pre/post-processing. For instance, when the response is written in Markdown format, it won't be rendered properly; it should be converted to HTML...
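As a rough illustration of that post-processing step, here is a minimal, stdlib-only sketch that converts a tiny subset of Markdown (bold, italics, inline code) to HTML. In practice you would reach for a real Markdown library rather than regexes; the function name is hypothetical.

```python
import re

def md_to_html(text: str) -> str:
    """Convert a small subset of Markdown to HTML: bold, italics, inline code.

    Illustrative sketch only; a real renderer handles many more cases
    (lists, links, escaping, nested emphasis, code blocks).
    """
    text = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", text)  # **bold**
    text = re.sub(r"\*(.+?)\*", r"<em>\1</em>", text)              # *italic*
    text = re.sub(r"`(.+?)`", r"<code>\1</code>", text)            # `code`
    return text

print(md_to_html("use **bold** and `code`"))
```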

Yeap, at some point ChatGPT should be doing this, since there is a max input size limit. It depends on how accurately it remembers the details of the previous conversations. (I mean...

It does not remember things (i.e., it does not hold the context in memory). Instead, it consumes a bunch of text, including enough previous text to establish the context. It simply forwards...

So, a similar approach to what LangChain does, because it is very straightforward and the behaviour of the model doesn't change. `Buffer` - I pass the N previous conversations directly. `Summary`...
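The `Buffer` idea can be sketched in a few lines: keep only the last N turns and splice them into each prompt, so the model "remembers" by re-reading recent text. This is a hypothetical sketch, not LangChain's actual API; class and method names are illustrative.

```python
from collections import deque

class BufferMemory:
    """Keep the last N conversation turns and build a prompt from them.

    Sketch of a `Buffer`-style memory: older turns fall off automatically,
    keeping the prompt under the model's max input size. A `Summary`-style
    memory would instead condense dropped turns into a short summary string.
    """

    def __init__(self, max_turns: int = 3):
        self.turns = deque(maxlen=max_turns)  # oldest turn evicted when full

    def add(self, user: str, assistant: str) -> None:
        self.turns.append((user, assistant))

    def build_prompt(self, new_message: str) -> str:
        history = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)
        return f"{history}\nUser: {new_message}\nAssistant:"

mem = BufferMemory(max_turns=2)
mem.add("hi", "hello!")
mem.add("who are you?", "an assistant")
mem.add("nice", "thanks")  # the "hi" turn is dropped; only the last 2 remain
print(mem.build_prompt("summarize our chat"))
```

The nice property of the buffer approach, as noted above, is that the model itself is untouched; all the "memory" lives in prompt construction.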

Sounds great. I think it is pretty much doable. Just need to figure out how to make this happen fast enough. As you see, it takes about 20-30 seconds to...