
Feature Request: Preview completions like zsh-autosuggestions

Open hashworks opened this issue 3 years ago • 10 comments

zsh-autosuggestions is able to provide a tab-autocompletion preview based on the .zsh_history:

[GIF: tab-completion preview]

Something similar would be useful for this plugin: that way, one could accept Codex completions with <TAB> or discard them by pressing any other key.

hashworks (Jul 07 '22 16:07)

Cool idea! However, that would mean everything you type on the command line gets sent to some remote server. Would you be comfortable with that?

tom-doerr (Jul 13 '22 02:07)

Not necessarily: normally the .zsh_history suggestion would be shown; only when one presses the create_completion bindkey would the context be sent to Codex and the suggestion replaced.
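The flow described above could be sketched as a pair of ZLE widgets in ~/.zshrc. This is only a rough sketch: `_zsh_codex_complete` is a hypothetical stand-in for whatever command the plugin exposes to fetch a completion, and a real implementation would need to fall back to normal completion when no preview is showing.

```zsh
# Hypothetical sketch (function names are illustrative): show an LLM
# suggestion as an inline grayed-out preview, like zsh-autosuggestions.

_codex_preview() {
  local suggestion
  suggestion=$(_zsh_codex_complete "$BUFFER") || return
  # POSTDISPLAY is the text ZLE renders after the cursor;
  # zsh-autosuggestions uses the same mechanism for its preview.
  POSTDISPLAY="$suggestion"
  # Dim the preview; region_highlight offsets continue past $BUFFER.
  region_highlight=("${#BUFFER} $(( ${#BUFFER} + ${#suggestion} )) fg=8")
}

_codex_accept() {
  # <TAB> moves the previewed text into the real buffer.
  BUFFER+="$POSTDISPLAY"
  POSTDISPLAY=""
  region_highlight=()
  zle end-of-line
}

zle -N _codex_preview
zle -N _codex_accept
bindkey '^X' _codex_preview   # trigger: send the buffer to the model
bindkey '^I' _codex_accept    # TAB accepts the preview
```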

hashworks (Jul 13 '22 05:07)

In case anybody here is still interested, I hacked an LLM-based backend into zsh-autosuggestions in this fork of it.

@tom-doerr: I didn't compare them, but I wonder how the FIM-based prompting I'm using here with one of the code-tuned models compares to your call into the chat model (here). Sometimes I've had trouble getting the models to return just the code/structured output I want, but maybe your system prompt, together with prepending #!/bin/zsh, is good at enforcing that.
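For reference, Code Llama's infill (FIM) format wraps the text around the cursor in `<PRE>`/`<SUF>`/`<MID>` sentinel tokens. A minimal sketch of building such a prompt (the example command is illustrative, not from the fork); when completing at the end of a command line, the suffix is simply empty:

```shell
# Build a Code Llama-style infill prompt. For end-of-line shell
# completion the suffix part is empty.
prefix='#!/bin/zsh
# list all files larger than 100MB
find '
prompt="<PRE> ${prefix} <SUF> <MID>"
printf '%s\n' "$prompt"
```

This string would then be POSTed to an OpenAI-compatible /v1/completions endpoint (e.g. one served locally), with generation stopped on Code Llama's `<EOT>` token.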

bamos (Apr 15 '24 06:04)

Might make sense to just drop OpenAI altogether and use one of the open-source fine-tuned models that run locally.

tom-doerr (Apr 15 '24 15:04)

> Might make sense to just drop OpenAI altogether and use one of the open-source fine-tuned models that run locally.

Yeah, I agree; I'm doing that in the fork by running Code Llama locally with FastChat (which serves it via an OpenAI-compatible API).

And fine-tuning a small/fast local model on one's own shell history may be the key to faster completions, rather than packing more history into the prompt every time. I'm finding the lag unfortunately noticeable, with the suggestions running for every character I type :/
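For anyone wanting to reproduce the local setup: FastChat serves a model through three processes (controller, model worker, OpenAI-compatible API server), launched roughly as below. The model path and port are illustrative.

```shell
# Run each in its own terminal or background it.
python3 -m fastchat.serve.controller
python3 -m fastchat.serve.model_worker --model-path codellama/CodeLlama-7b-hf
python3 -m fastchat.serve.openai_api_server --host localhost --port 8000

# Completions can then be requested from
# http://localhost:8000/v1/completions with any OpenAI client.
```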

bamos (Apr 15 '24 16:04)

What model are you using? There seem to be good tiny LMs that might do a decent job.

tom-doerr (Apr 15 '24 16:04)

I've been trying Code Llama 7B and WizardCoder-1B for now.

bamos (Apr 15 '24 17:04)

(Any other recommendations?)

bamos (Apr 15 '24 17:04)

Also, unrelated: for this project you may be interested in how zsh-autosuggestions uses the history associative array; it should be pretty easy to plug in. Here, I'm extracting 10 relevant commands that match the pattern the user is typing, plus the 10 latest commands from the user's shell interactions, as context for the completion.
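A rough sketch of that kind of history harvesting, assuming the zsh/parameter module's $history associative array (event number to command); the variable names are illustrative, and the fork's actual code may differ:

```zsh
# Sketch only: gather prompt context from zsh's $history assoc array.
zmodload zsh/parameter

typed=$BUFFER           # what the user has typed so far (inside ZLE)
typeset -a recent matching

# The 10 most recent commands: sort event numbers descending, take 10.
for k in ${${(On)${(k)history}}[1,10]}; do
  recent+=("$history[$k]")
done

# Up to 10 earlier commands matching the typed prefix.
matching=(${(M)${(v)history}:#${typed}*})
matching=(${matching[1,10]})
```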

bamos (Apr 15 '24 17:04)

Yeah, there are a lot of ways to improve completions. I'm surprised there is still that much lag with a 1B model.

tom-doerr (Apr 15 '24 19:04)