Philipp Emanuel Weidmann
I have since found a more suitable solution for my original problem than using a rope, so I won't be adding this myself. But I did think about it some...
@oobabooga Could you reply to the concern I've raised above regarding that syntax? This will define the API, and I'm struggling to see a clean way for clients to build...
@oobabooga PR updated!

* `dry_sequence_breakers` can now be specified *either* as a comma-separated list of quoted strings, or as a JSON array. This works both in the UI and over...
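To illustrate the dual input format, a minimal parsing sketch (this is not the actual text-generation-webui code, just one way the two forms can be handled uniformly):

```python
import json

def parse_sequence_breakers(value: str) -> list[str]:
    """Accept dry_sequence_breakers either as a JSON array or as a
    comma-separated list of quoted strings. Illustrative sketch only."""
    value = value.strip()
    if value.startswith("["):
        # Already a JSON array, e.g. ["\n", ":", "\"", "*"]
        return json.loads(value)
    # Comma-separated quoted strings, e.g. "\n", ":", "\"", "*"
    # Wrapping in brackets lets the JSON parser handle escapes uniformly.
    return json.loads("[" + value + "]")

# Both input forms produce the same breaker list:
assert parse_sequence_breakers('["\\n", ":"]') == parse_sequence_breakers('"\\n", ":"')
```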
@kalomaze

> Did you notice it was better than just increasing Min P to compensate for your particular use case or are you just experimenting?

I have indeed found P-Step...
@HiroseKoichi

> Assuming I did the math right, to my knowledge, if a token's probability x p_step is higher than the next token down, then you truncate the rest of...
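A minimal sketch of the truncation rule as quoted, assuming `p_step` is a fraction in (0, 1); this is one reading of the description, not the actual P-Step implementation:

```python
import numpy as np

def p_step_truncate(probs: np.ndarray, p_step: float) -> np.ndarray:
    """Walking down the sorted distribution, once a token's probability
    times p_step still exceeds the next token's probability (i.e. there
    is a large relative drop), every token after that point is removed.
    Illustrative sketch only."""
    order = np.argsort(probs)[::-1]      # token indices, highest prob first
    sorted_p = probs[order]
    keep = len(sorted_p)
    for i in range(len(sorted_p) - 1):
        # With p_step = 0.5, a token less than half as likely as its
        # predecessor marks the cutoff; everything below it is dropped.
        if sorted_p[i] * p_step > sorted_p[i + 1]:
            keep = i + 1
            break
    mask = np.zeros(len(probs), dtype=bool)
    mask[order[:keep]] = True
    trimmed = np.where(mask, probs, 0.0)
    return trimmed / trimmed.sum()       # renormalize the survivors
```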
@HiroseKoichi I have several other novel samplers that I'm experimenting with, but I probably won't try to upstream any more of them. Getting something new merged in these big projects...
The original implementation has now been merged into text-generation-webui. There were no further algorithmic changes, so this PR implements the same sampling algorithm that is now available in TGWUI.
This PR has now been open for 3 full months, without a single comment from any of the maintainers, despite it being the second-most upvoted PR in this repository, and...
AFAICT, there is currently no logic that allows one to actually use DRY from any of the llama.cpp programs. This should probably be added, along with API support for llama-server,...
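For concreteness, client usage might eventually look something like the sketch below. The `dry_*` parameter names are assumptions mirroring this PR's sampler settings, not an existing llama-server API:

```python
import requests

# Hypothetical request showing how DRY settings might be exposed over
# the llama-server /completion endpoint once API support is added.
payload = {
    "prompt": "Once upon a time",
    "n_predict": 128,
    "dry_multiplier": 0.8,        # penalty strength (0 would disable DRY)
    "dry_base": 1.75,             # exponential base for growing repetitions
    "dry_allowed_length": 2,      # repeats up to this length go unpenalized
    "dry_sequence_breakers": ["\n", ":", "\"", "*"],
}
response = requests.post("http://localhost:8080/completion", json=payload)
print(response.json()["content"])
```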
Ok... but why? What is the difference between those settings and those that are stored on the server? I connect to my server from multiple devices and I'd like to...