# Feature Request: echo=true in llama-server
### Prerequisites
- [X] I am running the latest code. Mention the version if possible as well.
- [X] I carefully followed the README.md.
- [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
- [X] I reviewed the Discussions, and have a new and useful enhancement to share.
### Feature Description

llama-server allows API calls with `logprobs=1`, but it would be very nice to also include the option to set `echo=true`, as was available for older OpenAI models such as `davinci-002`.
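From the client side, this would look like a standard legacy-completions call. A minimal sketch using the `openai` Python client (the `base_url`, API key, and model name are illustrative, and `echo=True` is the proposed parameter, not something llama-server accepts today):

```python
# What the requested feature would look like from a client.
# NOTE: echo=True is the *proposed* parameter; llama-server does not
# currently accept it. base_url, api_key, and model are illustrative.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

resp = client.completions.create(
    model="llama",              # illustrative model name
    prompt="The quick brown fox",
    max_tokens=0,               # generate nothing: only score the prompt
    logprobs=1,
    echo=True,                  # proposed: return prompt tokens with their logprobs
)

print(resp.choices[0].logprobs.token_logprobs)
```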
### Motivation
This would allow for a number of interesting possibilities, such as inferring the likelihood of a prompt given a completion, as done in this project. OpenAI deprecated the `echo` option because it's too useful :) It would be great to have it back in llama.cpp.
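Continuing the sketch above: with `echo=true`, the per-token logprobs of the prompt can be summed into the prompt's log-likelihood under the model. This assumes a response shaped like OpenAI's legacy completions output, where the first token's logprob is `None` because it has no left context:

```python
# Sum the echoed prompt's per-token logprobs into its log-likelihood.
# Assumes the OpenAI legacy completions response shape, where the
# first token's logprob is None (it has no conditioning context).
token_logprobs = resp.choices[0].logprobs.token_logprobs
log_likelihood = sum(lp for lp in token_logprobs if lp is not None)
print(f"log P(prompt) = {log_likelihood:.2f}")
```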
### Possible Implementation
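Conceptually, `echo=true` amounts to keeping the logits for every prompt position (rather than only the last one) during the prompt's forward pass, then taking a log-softmax at each position. A rough, framework-agnostic sketch of that per-token computation (the `logits` array and the `prompt_token_logprobs` helper below are hypothetical stand-ins, not actual llama.cpp API):

```python
import numpy as np

def prompt_token_logprobs(logits: np.ndarray, tokens: list) -> list:
    """log P(tokens[i] | tokens[:i]) for each prompt token.

    `logits` is a hypothetical [n_tokens, n_vocab] array of raw logits
    from a single forward pass over the prompt; the server would need
    to retain these for every prompt position, not just the last one.
    """
    out = [None]  # the first token has no conditioning context
    for i in range(1, len(tokens)):
        row = logits[i - 1]                        # logits predicting token i
        row = row - row.max()                      # numerical stability
        log_z = np.log(np.exp(row).sum())          # log partition function
        out.append(float(row[tokens[i]] - log_z))  # log-softmax at token i
    return out
```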