jupyter-ai
Default model for %%ai magic command
Problem
I would like to use the %%ai cell magic more, but I don't want to type the name of my model each time. Typing four characters for "%%ai" plus "b" for a new cell is already more typing than a single click on "new chat" in the ChatGPT UI.
Proposed Solution
Make the MODEL_ID positional argument optional. The default value should come from the model configured in the chat interface UI.
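For illustration, today every invocation needs an explicit model id (using the chatgpt alias mentioned later in this thread):

```
%%ai chatgpt
Explain what this error message means.
```

With this proposal, the model id could be omitted and the magic would fall back to whatever model is currently selected in the chat UI (hypothetical behavior, not something that exists today):

```
%%ai
Explain what this error message means.
```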
Additional Notes
Thank you for this fantastic time saving extension!
Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! :hugs:
If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template, as it helps other community members contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! :wave:
Welcome to the Jupyter community! :tada:
The magic commands can run independently of the chat UI, such as when using the IPython CLI application, but this is something that would improve user quality of life.
As an alternative, we could make %%ai default to the last-used model in a magic command, if no model is specified. More design work is still needed.
Last-used model sounds good, though it does add hidden state and make notebooks less rerunnable.
On the other hand %%ai chatgpt --format code adds a new cell and is even less rerunnable.
Could all arguments to %%ai be defaulted to last-specified if none are specified?
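A minimal sketch of what "reuse the last-specified arguments" could look like inside an IPython cell magic; the class name AiLikeMagics and the _last_args attribute are made up for illustration, and this is not the actual jupyter-ai implementation:

```python
from IPython.core.error import UsageError
from IPython.core.magic import Magics, cell_magic, magics_class


@magics_class
class AiLikeMagics(Magics):
    """Illustrative cell magic that remembers its last-used arguments."""

    def __init__(self, shell):
        super().__init__(shell)
        self._last_args = None  # hidden state: lives only as long as the kernel

    @cell_magic
    def ai(self, line, cell):
        args = line.strip()
        if not args:
            if self._last_args is None:
                raise UsageError("No arguments given and nothing to reuse yet.")
            args = self._last_args  # fall back to the previous invocation
        else:
            self._last_args = args  # remember for later cells
        # A real implementation would dispatch `cell` to the selected provider here.
        print(f"Would send the prompt to: {args!r}")
```

This also makes the hidden-state concern concrete: rerunning the notebook from the top only works if the first %%ai cell still spells out its arguments.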
I retitled this issue to leave more of our options open. Especially for some of our more complex commands, like %%ai with SageMaker endpoints, reusing options from prior commands would save a lot of typing or pasting.
Last-used model sounds good, though it does add hidden state and make notebooks less rerunnable.
Why not force users to specify the model once in their notebook (at the beginning) and then reuse the same model as long as the kernel is alive?
The default value should come from the model configured in the chat interface UI.
Implementing this will be tricky. It is possible for chat to send a command to the kernel, but this would be hard to support in a language-agnostic way. Maybe it would be possible to pass an environment variable to the kernel, but this is also a hard problem for remote kernels.
A message at the Jupyter messaging protocol level for configuring environment variables might be the best option here - this way the jupyter-ai chat could send such a message to all kernels and change the environment variables defining which model to use, and it would be language agnostic.
However, for now we do not need to worry about language agnosticism that much, because jupyter-ai only supports Python; we only need to avoid sending an execution request to non-Python kernels, which is easier. What do folks think about it?
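As a rough sketch of the environment-variable fallback on the magics side (the variable name JUPYTER_AI_DEFAULT_MODEL is an assumption made up for this sketch, not an existing setting):

```python
import os


def resolve_model_id(model_id: str | None) -> str:
    """Fall back to an environment variable when no model id was given.

    JUPYTER_AI_DEFAULT_MODEL is a hypothetical name; who sets it (the chat
    backend, the user, the deployment) is exactly the open question above.
    """
    if model_id:
        return model_id
    default_model = os.environ.get("JUPYTER_AI_DEFAULT_MODEL")
    if not default_model:
        raise ValueError("No model id given and JUPYTER_AI_DEFAULT_MODEL is not set.")
    return default_model
```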
@haesleinhuepf this is exactly what I am proposing in https://github.com/jupyterlab/jupyter-ai/pull/962 :)
Thank you @krassowski, %config is the right solution.
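For anyone finding this later: the %config approach means setting the default once per kernel session, roughly like this (the trait is named default_language_model in the linked PR, but please check the current jupyter-ai docs for the exact spelling):

```
%config AiMagics.default_language_model = "chatgpt"
```

After that, later cells can omit the model id:

```
%%ai
Explain the difference between a list and a tuple.
```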