
Assistants and better workflow

Open teocns opened this issue 2 years ago • 8 comments

Hey there, I've been searching for a suitable plugin to adopt, but haven't found one yet.

First of all, congrats on your product!

These are some of the features I'd like to see implemented in an ideal plugin, especially with the latest OpenAI Assistants API...

At the most basic level, I'd like a general dashboard that shows the current assistant in use, the thread (if any), and, well, the chat box.

    1. Threads switcher: be able to switch between threads (perhaps open them with Telescope)
    2. Assistant manager: be able to switch between assistants (perhaps open them with Telescope)
    3. Chat box: be able to chat with the assistant
    4. Refactoring: open the assistant with the context of the current file (or selection), enter a prompt, and open a side window comparing the original code with the refactored code. The user is then in operation mode and has the following options: accept, or view diff (which starts diffthis in both windows).
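
The side-by-side comparison in point 4 could be sketched roughly like this. This is a hypothetical helper, not an existing gp.nvim API; it just shows how the "view diff" step maps onto built-in Neovim primitives:

```lua
-- Hypothetical sketch of point 4: show refactored code next to the
-- original and start diff mode in both windows.
local function open_refactor_diff(refactored_lines)
  vim.cmd("diffthis")                    -- diff the original window
  vim.cmd("vnew")                        -- side window with a scratch buffer
  local buf = vim.api.nvim_get_current_buf()
  vim.bo[buf].buftype = "nofile"         -- scratch: no file backing
  vim.bo[buf].bufhidden = "wipe"         -- clean up when the window closes
  vim.api.nvim_buf_set_lines(buf, 0, -1, false, refactored_lines)
  vim.cmd("diffthis")                    -- diff the new window too
  -- "accept" could then pull the scratch contents back with :%diffget
end
```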

Do you think this is something you might feel a spark of interest for? I'm also open to contributing.

teocns avatar Nov 08 '23 22:11 teocns

@teocns Hey, I'll have to look into the Assistants API in detail, but from what I've seen so far I certainly see the benefits: tool use, working with non-textual files (PDFs, images) and providing them as context, switching between assistants in a thread...

I also see the negatives: the assistant doesn't support streaming output yet, which kinda sucks from a user perspective - you hit query and wait an unknown amount of time. Plus there's even greater vendor lock-in, and some people might not like persistently storing their data/code in OpenAI's cloud.

Concerning the points mentioned:

  • i. :GpChatFinder
  • ii. yeah, dynamic switching between models and (locally specified) personas was on my todo list even before the OpenAI dev day
  • iii. :GpChatNew, :GpChatToggle, :GpChatPaste
  • iv. :GpRewrite, :GpAppend, :GpPrepend, etc. - personally I use them to refactor in place and use gitsigns.nvim to see changed lines, jump between changes, preview hunk diffs, or open a full diffthis. The beauty of (Neo)vim lies, among other things, in the ability to combine orthogonal extensible tools - I'm not closing the door on a dedicated refactoring pipeline, but I'd first like to hear the benefits over what I've just described.
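
The refactor-in-place workflow in point iv can be wired up with a few keymaps. This is an illustrative sketch (the mappings and gitsigns calls are assumptions, assuming both gp.nvim and gitsigns.nvim are installed), not the plugin's default configuration:

```lua
-- Rewrite the visual selection in place via gp.nvim:
vim.keymap.set("v", "<C-g>r", ":<C-u>'<,'>GpRewrite<cr>",
  { desc = "GPT rewrite selection" })

-- Then inspect what changed with gitsigns.nvim:
vim.keymap.set("n", "]c", function() require("gitsigns").next_hunk() end,
  { desc = "Jump to next changed hunk" })
vim.keymap.set("n", "<leader>hp", function() require("gitsigns").preview_hunk() end,
  { desc = "Preview hunk diff" })
vim.keymap.set("n", "<leader>hd", function() require("gitsigns").diffthis() end,
  { desc = "Full diffthis against the index" })
```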

Robitx avatar Nov 09 '23 00:11 Robitx

Thanks @Robitx, indeed many of the features you described - which I've personally had the chance to test - complement my workflow seamlessly.

However, I have to mention that I experience serious insert-mode typing lag, but only after the conversation has been initiated and a response from GPT has been generated. For reference, I use AstroNvim; I have tried disabling syntax highlighting and LSP. Does anything come to mind that you think might be causing this issue?

Note: the issue doesn't persist after restarting nvim and reopening the same chat buffer, but it recurs after the next message.

teocns avatar Nov 09 '23 21:11 teocns

@teocns Well, after the first GPT response in the chat buffer, there is a check whether the chat title is unset (# topic: ?); if so, a second GPT call is made asking it to generate the topic. Can you check whether this corresponds to the lag you're observing?
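
For illustration, the check described above amounts to something like the following. This is a hedged sketch of the behavior, not gp.nvim's actual implementation:

```lua
-- Illustrative only: scan the top of the chat buffer for an unset
-- topic header; if found, a second GPT request would be fired to
-- generate the topic (that follow-up call is the suspected lag source).
local function topic_is_unset(buf)
  local lines = vim.api.nvim_buf_get_lines(buf, 0, 5, false)
  for _, line in ipairs(lines) do
    if line:match("^# topic: %?") then
      return true
    end
  end
  return false
end
```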

Otherwise, if you have a public nvim configuration (or are willing to share it with me privately), I can check the behavior and try to debug it.

Robitx avatar Nov 09 '23 21:11 Robitx

I can confirm # topic isn't the issue.

Here's my AstroNvim user config, if you're willing to give it a chance. It contains only the user's custom configuration - you'll need to set up AstroNvim separately and then place this configuration under the user directory.

teocns avatar Nov 09 '23 22:11 teocns

I moved this to #53

teocns avatar Nov 09 '23 22:11 teocns

I'd like to move ahead a bit with the Assistants topic.

I also see the negatives, the assistant doesn't support streaming output yet which kinda sucks from user perspective - you hit query and wait an unknown amount of time. Plus even greater vendor lock in and some people might not like storing persistently their data/code in OpenAI cloud.

I personally see output streaming as the more UI-breaking feature - not even ChatGPT's GUI has managed to get a comfortable cursor-side UX while the stream is running.

On another note, I was taking a look at pynvim to integrate with the Assistants API. Now, this is an idea from an inexperienced vim user, and I can intuitively see that tying this feature to Python could come with portability and other issues. So, what's your take: plain HTTP API, or Python SDK?

teocns avatar Nov 09 '23 22:11 teocns

@teocns Fewer dependencies mean fewer breaking points outside of your control, so personally I would go with the plain HTTP API (within reason - it depends on how much complexity the SDK hides, but these days GPT can help with migration).
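
As a sketch of the plain-HTTP route from Lua, one option is plenary.nvim's curl wrapper. This assumes plenary.nvim is installed and OPENAI_API_KEY is set in the environment; the endpoint and beta header reflect the Assistants API as announced at the 2023 dev day, so treat the details as assumptions:

```lua
-- Hedged sketch: create an Assistants API thread over plain HTTP
-- using plenary.nvim's curl wrapper (no Python SDK, no pynvim).
local curl = require("plenary.curl")

local function create_thread(callback)
  curl.post("https://api.openai.com/v1/threads", {
    headers = {
      ["Authorization"] = "Bearer " .. os.getenv("OPENAI_API_KEY"),
      ["Content-Type"] = "application/json",
      ["OpenAI-Beta"] = "assistants=v1",
    },
    body = "{}",
    callback = function(res)
      -- the callback runs off the main loop; re-enter it before
      -- touching buffers or the UI
      vim.schedule(function()
        callback(vim.json.decode(res.body))
      end)
    end,
  })
end
```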

Robitx avatar Nov 09 '23 22:11 Robitx

@teocns for the agent telescope picker, add the following to your config = function().

      local pickers = require 'telescope.pickers'
      local finders = require 'telescope.finders'
      local actions = require 'telescope.actions'
      local action_state = require 'telescope.actions.state'
      local conf = require('telescope.config').values

      -- Picker listing the available agents; which set is shown depends
      -- on whether the current buffer is a gp.nvim chat buffer
      local models = function(opts)
        local buf = vim.api.nvim_get_current_buf()
        local file_name = vim.api.nvim_buf_get_name(buf)
        -- gp.not_chat() returns nil when the buffer is a chat buffer
        local is_chat = require('gp').not_chat(buf, file_name) == nil

        opts = opts or {}
        pickers
          .new(opts, {
            prompt_title = 'Models',
            finder = finders.new_table {
              results = is_chat and require('gp')._chat_agents or require('gp')._command_agents,
            },
            sorter = conf.generic_sorter(opts),
            attach_mappings = function(prompt_bufnr)
              actions.select_default:replace(function()
                local selection = action_state.get_selected_entry()
                actions.close(prompt_bufnr)
                -- switch gp.nvim to the selected agent
                require('gp').cmd.Agent { args = selection[1] }
              end)
              return true
            end,
          })
          :find()
      end

      vim.keymap.set('n', '<C-g>z', function()
        models(require('telescope.themes').get_dropdown {
          winblend = 10,
          previewer = false,
        })
      end, {
        noremap = true,
        silent = false,
        nowait = true,
        desc = 'GPT prompt Choose Agent',
      })

qaptoR avatar Jul 20 '24 08:07 qaptoR