open-interpreter
[WIP] feat: allow editing of code blocks before execution
Describe the changes you have made:
This work-in-progress introduces the ability to edit code in your default editor before running it.
When Open Interpreter asks if you want to run the provided code, there is a new option, `%edit`, which lets you edit the code. It currently waits for you to come back and hit `ENTER` to continue execution.

Unfortunately, it seems to automatically run the edited code, and I haven't been able to get it back into the scan/run loop.
I tried using a few different file/directory watching Python packages, but it seems the file close event is not very easy to listen for and not consistent across platforms, so for the initial proof of concept, I’m relying on user intervention.
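For context, a minimal sketch of that user-intervention flow might look something like the following. This is illustrative only, not the actual implementation in this PR: the function name is hypothetical, and it assumes the user's `$EDITOR` environment variable is set (falling back to `nano`).

```python
import os
import subprocess
import tempfile


def edit_code_block(code: str, suffix: str = ".py") -> str:
    """Write the code to a temp file, open it in the user's editor,
    and read it back once the user confirms they are done."""
    editor = os.environ.get("EDITOR", "nano")  # assumption: nano as fallback

    # Write the pending code block to a temporary file the editor can open.
    with tempfile.NamedTemporaryFile(mode="w", suffix=suffix, delete=False) as tmp:
        tmp.write(code)
        path = tmp.name

    # Launch the editor. Some GUI editors return immediately, and file-close
    # events are inconsistent across platforms, so we can't rely on the
    # process exiting to know the user is finished editing.
    subprocess.run([editor, path])

    # The user-intervention step the PR currently relies on.
    input("Press ENTER once you have saved your changes to continue...")

    with open(path) as f:
        edited_code = f.read()
    os.unlink(path)
    return edited_code
```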
Demo
Testing Instructions
- `gh pr checkout https://github.com/KillianLucas/open-interpreter/pull/612`
- `poetry run interpreter`

  Note: `auto_run` must be disabled, so don't run it with `-y` or with `auto_run: true` in your `config.yaml`.
- Provide an input that generates code

  Example: Solve FizzBuzz for 0 through 17. Don't explain the code or tell me your process or how FizzBuzz works. Just generate the code so we can execute it.
- When asked if you want to run the code, enter the `%edit` magic command
- Edit the code and save your changes
- Come back to the Terminal and hit `ENTER`
Reference any relevant issue (Fixes #537)
- [x] I have performed a self-review of my code:
I have tested the code on the following OS:
- [ ] Windows
- [x] MacOS
- [ ] Linux
AI Language Model (if applicable)
- [ ] GPT4
- [ ] GPT3
- [ ] Llama 7B
- [ ] Llama 13B
- [ ] Llama 34B
- [ ] Huggingface model (Please specify which one)
Would love any ideas, suggestions, or assistance getting the edited code block to render as a syntax-highlighted code block and to ask the user for confirmation before executing.
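In case it helps with ideas: here's a rough sketch of one way to handle the rendering and confirmation, assuming `rich` is available (Open Interpreter already uses it to render code blocks). The function name and prompt wording are illustrative, not part of the existing codebase.

```python
from rich.console import Console
from rich.syntax import Syntax


def confirm_edited_code(code: str, language: str = "python") -> bool:
    """Render the edited code with syntax highlighting and ask whether to run it."""
    console = Console()
    console.print(Syntax(code, language, line_numbers=True))
    answer = input("Run this edited code? (y/n): ").strip().lower()
    return answer in ("y", "yes")
```

The trickier part is probably feeding the user's answer back into the existing scan/run loop rather than the rendering itself.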
@ericrallen I think I figured this out, but I might have botched the push to git. Will tag you in my PR.
I can help with that.

I'm working on updating the local LLM docs with my latest changes and a full, detailed tutorial. Once that's done, let me take a look and help solve the issue. Just verifying before jumping in: are you running with verbose output enabled on both open-interpreter and LiteLLM side by side, using breakpoints in LiteLLM and catching events?

✅ I hope the answer is yes, because otherwise, even when you run only Open Interpreter, you clearly see Open Interpreter's white output with the prefix and suffix added to the user/system prompt, and then a green block of code; that is what LiteLLM forwards after rephrasing the system message a little before returning it to Open Interpreter.

❌ If not, and the last sync was 5 months ago, it's better to re-sync ♻ the branch first.

I will submit a pull request soon for the docs, and once I get your feedback I can sync/pull the newer files and fix the intervention in the system prompt manually. In my case I just need to remove the word 'prompt', which is pushed to the system file along with other chat-related data and the logs.

If we are 5 months behind on commits, there's no use investing in this version unless there is something I'm not aware of about pulling from main. IMO it's better to check out / pull the newest version and solve it with a 'man in the middle' approach, as I did with the llama.cpp integration.
@BellaBijl wdyt?
CC @sbendary25 @ericrallen @Notnaton: feedback would be highly welcome.
InterwebAlchemy:feature/edit-code