
Progressive output and cancel button for 'Inference' tab

64-bit opened this issue 1 year ago • 2 comments

I have added progressive output to the inference tab by converting the generate function in app.py into a Python generator. It produces tokens one at a time until either the output stops changing (end of stream reached) or the maximum number of tokens has been generated.
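A minimal sketch of the streaming loop described above. The `decode_first_n` callback is a hypothetical stand-in for the actual model call in app.py; it is assumed to return the decoded text after generating n tokens:

```python
def generate_stream(decode_first_n, max_new_tokens=64):
    """Yield progressively longer decoded outputs.

    Stops when the decoded text stops changing between steps
    (end of stream reached) or max_new_tokens is hit.

    decode_first_n(n) -> decoded text after n generated tokens
    (hypothetical helper, not from the original PR).
    """
    previous = None
    for n in range(1, max_new_tokens + 1):
        text = decode_first_n(n)
        if text == previous:
            # Output unchanged: treat as end of stream and stop.
            break
        previous = text
        yield text


# Example with a fake decoder that stops growing after three steps:
steps = ["Hello", "Hello world", "Hello world!"]
fake = lambda n: steps[min(n - 1, len(steps) - 1)]
print(list(generate_stream(fake, max_new_tokens=10)))
# → ['Hello', 'Hello world', 'Hello world!']
```

Because the function is a generator, a Gradio handler can iterate over it and push each partial string to the UI as it arrives.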

64-bit avatar Apr 09 '23 19:04 64-bit

I think I found a bug in this; I'm going to close the PR until I can figure out what is going on.

64-bit avatar Apr 09 '23 21:04 64-bit

The bug in question can be reproduced on the original repo. Separately, I will look into providing at least detailed reproduction steps, if not a resolution.

64-bit avatar Apr 09 '23 21:04 64-bit