
bug: edit mode fails to apply edits/deletes codeblocks

Open · twilwa opened this issue 8 months ago · 1 comment

Describe the bug

   Error  13:33:01 msg_show.lua_error Error executing vim.schedule lua callback: ...cal/share/nvim/lazy/avante.nvim/lua/avante/selection.lua:164: Vim:Error executing lua callback: ...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: attempt to get length of local 'prev_line' (a nil value)
stack traceback:
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: in function 'compute_end_range'
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:401: in function 'compute_diff'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:106: in function 'incremental_changes'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:311: in function 'send_changes_for_group'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:348: in function 'send_changes'
	.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:869: in function <.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:863>
	[C]: in function 'nvim_buf_set_lines'
	...cal/share/nvim/lazy/avante.nvim/lua/avante/selection.lua:164: in function ''
	vim/_editor.lua: in function <vim/_editor.lua:0>
stack traceback:
	[C]: in function 'nvim_buf_set_lines'
	...cal/share/nvim/lazy/avante.nvim/lua/avante/selection.lua:164: in function ''
	vim/_editor.lua: in function <vim/_editor.lua:0>
   Error  13:33:03 msg_show.lua_error Error executing lua callback: ...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: attempt to get length of local 'prev_line' (a nil value)
stack traceback:
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: in function 'compute_end_range'
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:401: in function 'compute_diff'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:106: in function 'incremental_changes'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:311: in function 'send_changes_for_group'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:348: in function 'send_changes'
	.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:869: in function <.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:863>
   Error  13:33:03 msg_show.lua_error Error executing lua callback: ...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: attempt to get length of local 'prev_line' (a nil value)
stack traceback:
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: in function 'compute_end_range'
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:401: in function 'compute_diff'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:106: in function 'incremental_changes'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:311: in function 'send_changes_for_group'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:348: in function 'send_changes'
	.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:869: in function <.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:863>
   Error  13:33:03 msg_show.lua_error Error executing lua callback: ...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: attempt to get length of local 'prev_line' (a nil value)
stack traceback:
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:195: in function 'compute_end_range'
	...ar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp/sync.lua:401: in function 'compute_diff'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:106: in function 'incremental_changes'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:311: in function 'send_changes_for_group'
	....11.0/share/nvim/runtime/lua/vim/lsp/_changetracking.lua:348: in function 'send_changes'
	.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:869: in function <.../Cellar/neovim/0.11.0/share/nvim/runtime/lua/vim/lsp.lua:863>
13:33:03 msg_show 12 more lines; before #6  2 seconds ago
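
The traceback above shows that avante's selection.lua:164 calls nvim_buf_set_lines inside a vim.schedule callback, and that this buffer change then triggers Neovim's LSP incremental change tracking, which crashes in vim/lsp/sync.lua because prev_line is nil. As a hypothetical debugging probe (not a fix, and not something verified in this issue), detaching LSP clients from the buffer before retrying edit mode can show whether an attached client is part of the trigger:

-- Hypothetical probe: run in the target buffer, then retry edit mode.
-- If the error stops, the failure comes from vim.lsp change tracking reacting
-- to avante's nvim_buf_set_lines call rather than from the edit itself.
for _, client in ipairs(vim.lsp.get_clients({ bufnr = 0 })) do
  vim.lsp.buf_detach_client(0, client.id)
end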

To reproduce

Attempt edit mode on any code block (edit is mapped to <leader>ae in the config below).

Expected behavior

The edit is applied successfully.

Installation method

Use lazy.nvim:

return {
  "yetone/avante.nvim",
  event = "VeryLazy",
  version = false, -- Never set this value to "*"! Never!
  opts = {
    -- add any opts here
    -- for example
    provider = "gemini",
    auto_suggestions_provider = "openrouterquick",
    -- cursor_applying_provider = "groq",
    web_search_engine = {
      provider = "tavily", -- tavily, serpapi, searchapi, google or kagi
    },
    mappings = {
      ask = "<leader>ak", -- ask
      edit = "<leader>ae", -- edit
      -- refresh
    },
    system_prompt = function()
      local hub = require("mcphub").get_hub_instance()
      return hub:get_active_servers_prompt()
    end,
    -- Add your custom_tools function
    custom_tools = function()
      return {
        require("mcphub.extensions.avante").mcp_tool(),
      }
    end,
    azure = {
      endpoint = "https://models.inference.ai.azure.com", -- example: "https://<your-resource-name>.openai.azure.com"
      deployment = "", -- Azure deployment name (e.g., "gpt-4o", "my-gpt-4o-deployment")
      api_version = "2024-12-01-preview",
      timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      temperature = 0,
      api_key_name = "GITHUB_TOKEN",
      model = "chatgpt",
      -- Increase this to include reasoning tokens (for reasoning models)
      reasoning_effort = "high", -- low|medium|high, only used for reasoning models
    },
    openai = {
      endpoint = "https://api.openai.com/v1",
      model = "o3-mini-high", -- your desired model (or use gpt-4o, etc.)
      timeout = 600000, -- Timeout in milliseconds, increase this for reasoning models
      temperature = 0,
      api_key_name = "OPENAI_API_KEY",
      --
      max_tokens = 16384, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    claude = {
      --  endpoint = "https://api.openai.com/v1",
      --  model = "gpt-4o", -- your desired model (or use gpt-4o, etc.)
      --  timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      --  temperature = 0,
      api_key_name = "ANTHROPIC_API_KEY",
      --  max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    groq = {
      api_key_name = "GROQ_API_KEY",
      endpoint = "https://api.groq.com/openai/v1/",
      model = "llama-3.3-70b-versatile",
      max_completion_tokens = 128000,
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    openrouterthink = {
      -- endpoint = "https://api.openai.com/v1",
      model = "deepseek/deepseek-r1:free", -- your desired model (or use gpt-4o, etc.)
      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      timeout = 600000,
      api_key_name = "OPENROUTER_API_KEY",
      max_completion_tokens = 64000,
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    copilot = {
      -- endpoint = "https://api.openai.com/v1",
      model = "claude-3.7-sonnet", -- your desired model (or use gpt-4o, etc.)
      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    copilotthink = {
      -- endpoint = "https://api.openai.com/v1",
      model = "claude-3.7-sonnet-thought", -- your desired model (or use gpt-4o, etc.)
      timeout = 600000,
      max_completion_tokens = 64000,
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      reasoning_effort = "high", -- low|medium|high, only used for reasoning models
    },
    gemini = {
      -- endpoint = "https://api.openai.com/v1",
      model = "gemini-2.5-pro-exp-03-25", -- your desired model (or use gpt-4o, etc.)
      -- disabled_tools = { "str_replace" },
      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      max_tokens = 64000, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    openrouterquick = {
      -- endpoint = "https://api.openai.com/v1",
      model = "deepseek/deepseek-chat-v3-0324:free", -- your desired model (or use gpt-4o, etc.)

      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      max_tokens = 64000, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },

    vendors = {
      copilotthink = {
        __inherited_from = "copilot",

        model = "claude-3.7-sonnet-thought", -- your desired model (or use gpt-4o, etc.)
        max_completion_tokens = 128000,
      },
      groq = {
        __inherited_from = "openai",
        api_key_name = "GROQ_API_KEY",
        endpoint = "https://api.groq.com/openai/v1/",
        model = "llama-3.3-70b-versatile",
        max_completion_tokens = 128000,
      },
      openrouterthink = {
        __inherited_from = "openai",
        endpoint = "https://openrouter.ai/api/v1",
        api_key_name = "OPENROUTER_API_KEY",
        model = "deepseek/deepseek-r1:free",
        max_completion_tokens = 128000,
      },
      openrouterquick = {
        __inherited_from = "openai",
        endpoint = "https://openrouter.ai/api/v1",
        api_key_name = "OPENROUTER_API_KEY",
        model = "deepseek/deepseek-chat-v3-0324:free",
        max_completion_tokens = 128000,
      },

      embed_model = "granite-embedding:30m", -- The embedding model to use for RAG service
      endpoint = "http://localhost:11434", -- The API endpoint for RAG service
      docker_extra_args = "", -- Extra arguments to pass to the docker command
    },
    dual_boost = {
      enabled = false,
      first_provider = "openrouterthink",
      second_provider = "copilotthink",
      prompt = "Based on the two reference outputs below, generate a response that incorporates elements from both but reflects your own judgment and unique perspective. Do not provide any explanation, just give the response directly. Reference Output 1: [{{provider1_output}}], Reference Output 2: [{{provider2_output}}]",
      timeout = 60000, -- Timeout in milliseconds
    },
    behaviour = {
      auto_focus_sidebar = true,
      auto_suggestions = true, -- Experimental stage
      auto_suggestions_respect_ignore = true,
      auto_set_highlight_group = true,
      auto_set_keymaps = true,
      auto_apply_diff_after_generation = false,
      jump_result_buffer_on_finish = false,
      support_paste_from_clipboard = true,
      minimize_diff = true,
      enable_token_counting = true,
      enable_cursor_planning_mode = false,
      enable_claude_text_editor_tool_mode = false,
      use_cwd_as_project_root = false,
    },
    file_selector = {
      --- @alias FileSelectorProvider "native" | "fzf" | "mini.pick" | "snacks" | "telescope" | string | fun(params: avante.file_selector.IParams|nil): nil
      provider = "fzf",
      -- Options override for custom providers
      provider_opts = {},
    },
  },
  -- if you want to build from source then do `make BUILD_FROM_SOURCE=true`
  build = "make",
  -- build = "powershell -ExecutionPolicy Bypass -File Build.ps1 -BuildFromSource false" -- for windows
  dependencies = {
    "nvim-treesitter/nvim-treesitter",
    "stevearc/dressing.nvim",
    "nvim-lua/plenary.nvim",
    "MunifTanjim/nui.nvim",
    "ravitemer/mcphub.nvim",
    --- The below dependencies are optional,
    "echasnovski/mini.pick", -- for file_selector provider mini.pick
    "nvim-telescope/telescope.nvim", -- for file_selector provider telescope
    "hrsh7th/nvim-cmp", -- autocompletion for avante commands and mentions
    "ibhagwan/fzf-lua", -- for file_selector provider fzf
    "nvim-tree/nvim-web-devicons", -- or echasnovski/mini.icons
    "zbirenbaum/copilot.lua", -- for providers='copilot'
    {
      -- support for image pasting
      "HakonHarnes/img-clip.nvim",
      event = "VeryLazy",
      opts = {
        -- recommended settings
        default = {
          embed_image_as_base64 = false,
          prompt_for_file_name = false,
          drag_and_drop = {
            insert_mode = true,
          },
          -- required for Windows users
          use_absolute_path = true,
        },
      },
    },
  },
}
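
Aside: a hypothetical hardened variant of the mcphub hooks (system_prompt and custom_tools) in the config above, untested against this issue; whether it avoids the crash is only an assumption. It wraps the hub lookup in pcall so avante falls back to its defaults when mcphub is missing or not yet initialized:

-- Hypothetical drop-in replacement for the two hooks in the opts table above.
system_prompt = function()
  local ok, hub = pcall(function() return require("mcphub").get_hub_instance() end)
  if not ok or not hub then
    return "" -- fall back to avante's default system prompt when mcphub is unavailable
  end
  return hub:get_active_servers_prompt()
end,
custom_tools = function()
  local ok, ext = pcall(require, "mcphub.extensions.avante")
  if not ok then return {} end
  return { ext.mcp_tool() }
end,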

Environment

➜ nvim -v NVIM v0.11.0 Build type: Release LuaJIT 2.1.1741730670 Run "nvim -V1 -v" for more info

➜ uname -a Darwin anons-MacBook-Pro.local 24.4.0 Darwin Kernel Version 24.4.0: Wed Mar 19 21:17:37 PDT 2025; root:xnu-11417.101.15~1/RELEASE_ARM64_T6031 arm64

Repro

vim.env.LAZY_STDPATH = ".repro"
load(vim.fn.system("curl -s https://raw.githubusercontent.com/folke/lazy.nvim/main/bootstrap.lua"))()

require("lazy.minit").repro({
  spec = {
    return {
  "yetone/avante.nvim",
  event = "VeryLazy",
  version = false, -- Never set this value to "*"! Never!
  opts = {
    -- add any opts here
    -- for example
    provider = "gemini",
    auto_suggestions_provider = "openrouterquick",
    -- cursor_applying_provider = "groq",
    web_search_engine = {
      provider = "tavily", -- tavily, serpapi, searchapi, google or kagi
    },
    mappings = {
      ask = "<leader>ak", -- ask
      edit = "<leader>ae", -- edit
      -- refresh
    },
    system_prompt = function()
      local hub = require("mcphub").get_hub_instance()
      return hub:get_active_servers_prompt()
    end,
    -- Add your custom_tools function
    custom_tools = function()
      return {
        require("mcphub.extensions.avante").mcp_tool(),
      }
    end,
    azure = {
      endpoint = "https://models.inference.ai.azure.com", -- example: "https://<your-resource-name>.openai.azure.com"
      deployment = "", -- Azure deployment name (e.g., "gpt-4o", "my-gpt-4o-deployment")
      api_version = "2024-12-01-preview",
      timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      temperature = 0,
      api_key_name = "GITHUB_TOKEN",
      model = "chatgpt",
      -- Increase this to include reasoning tokens (for reasoning models)
      reasoning_effort = "high", -- low|medium|high, only used for reasoning models
    },
    openai = {
      endpoint = "https://api.openai.com/v1",
      model = "o3-mini-high", -- your desired model (or use gpt-4o, etc.)
      timeout = 600000, -- Timeout in milliseconds, increase this for reasoning models
      temperature = 0,
      api_key_name = "OPENAI_API_KEY",
      --
      max_tokens = 16384, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    claude = {
      --  endpoint = "https://api.openai.com/v1",
      --  model = "gpt-4o", -- your desired model (or use gpt-4o, etc.)
      --  timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      --  temperature = 0,
      api_key_name = "ANTHROPIC_API_KEY",
      --  max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    groq = {
      api_key_name = "GROQ_API_KEY",
      endpoint = "https://api.groq.com/openai/v1/",
      model = "llama-3.3-70b-versatile",
      max_completion_tokens = 128000,
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    openrouterthink = {
      -- endpoint = "https://api.openai.com/v1",
      model = "deepseek/deepseek-r1:free", -- your desired model (or use gpt-4o, etc.)
      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      timeout = 600000,
      api_key_name = "OPENROUTER_API_KEY",
      max_completion_tokens = 64000,
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    copilot = {
      -- endpoint = "https://api.openai.com/v1",
      model = "claude-3.7-sonnet", -- your desired model (or use gpt-4o, etc.)
      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    copilotthink = {
      -- endpoint = "https://api.openai.com/v1",
      model = "claude-3.7-sonnet-thought", -- your desired model (or use gpt-4o, etc.)
      timeout = 600000,
      max_completion_tokens = 64000,
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
      reasoning_effort = "high", -- low|medium|high, only used for reasoning models
    },
    gemini = {
      -- endpoint = "https://api.openai.com/v1",
      model = "gemini-2.5-pro-exp-03-25", -- your desired model (or use gpt-4o, etc.)
      -- disabled_tools = { "str_replace" },
      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      max_tokens = 64000, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },
    openrouterquick = {
      -- endpoint = "https://api.openai.com/v1",
      model = "deepseek/deepseek-chat-v3-0324:free", -- your desired model (or use gpt-4o, etc.)

      -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
      -- temperature = 0,
      -- api_key_name = "OPENROUTER_API_KEY",
      max_tokens = 64000, -- Increase this to include reasoning tokens (for reasoning models)
      --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
    },

    vendors = {
      copilotthink = {
        __inherited_from = "copilot",

        model = "claude-3.7-sonnet-thought", -- your desired model (or use gpt-4o, etc.)
        max_completion_tokens = 128000,
      },
      groq = {
        __inherited_from = "openai",
        api_key_name = "GROQ_API_KEY",
        endpoint = "https://api.groq.com/openai/v1/",
        model = "llama-3.3-70b-versatile",
        max_completion_tokens = 128000,
      },
      openrouterthink = {
        __inherited_from = "openai",
        endpoint = "https://openrouter.ai/api/v1",
        api_key_name = "OPENROUTER_API_KEY",
        model = "deepseek/deepseek-r1:free",
        max_completion_tokens = 128000,
      },
      openrouterquick = {
        __inherited_from = "openai",
        endpoint = "https://openrouter.ai/api/v1",
        api_key_name = "OPENROUTER_API_KEY",
        model = "deepseek/deepseek-chat-v3-0324:free",
        max_completion_tokens = 128000,
      },

      embed_model = "granite-embedding:30m", -- The embedding model to use for RAG service
      endpoint = "http://localhost:11434", -- The API endpoint for RAG service
      docker_extra_args = "", -- Extra arguments to pass to the docker command
    },
    dual_boost = {
      enabled = false,
      first_provider = "openrouterthink",
      second_provider = "copilotthink",
      prompt = "Based on the two reference outputs below, generate a response that incorporates elements from both but reflects your own judgment and unique perspective. Do not provide any explanation, just give the response directly. Reference Output 1: [{{provider1_output}}], Reference Output 2: [{{provider2_output}}]",
      timeout = 60000, -- Timeout in milliseconds
    },
    behaviour = {
      auto_focus_sidebar = true,
      auto_suggestions = true, -- Experimental stage
      auto_suggestions_respect_ignore = true,
      auto_set_highlight_group = true,
      auto_set_keymaps = true,
      auto_apply_diff_after_generation = false,
      jump_result_buffer_on_finish = false,
      support_paste_from_clipboard = true,
      minimize_diff = true,
      enable_token_counting = true,
      enable_cursor_planning_mode = false,
      enable_claude_text_editor_tool_mode = false,
      use_cwd_as_project_root = false,
    },
    file_selector = {
      --- @alias FileSelectorProvider "native" | "fzf" | "mini.pick" | "snacks" | "telescope" | string | fun(params: avante.file_selector.IParams|nil): nil
      provider = "fzf",
      -- Options override for custom providers
      provider_opts = {},
    },
  },
  -- if you want to build from source then do `make BUILD_FROM_SOURCE=true`
  build = "make",
  -- build = "powershell -ExecutionPolicy Bypass -File Build.ps1 -BuildFromSource false" -- for windows
  dependencies = {
    "nvim-treesitter/nvim-treesitter",
    "stevearc/dressing.nvim",
    "nvim-lua/plenary.nvim",
    "MunifTanjim/nui.nvim",
    "ravitemer/mcphub.nvim",
    --- The below dependencies are optional,
    "echasnovski/mini.pick", -- for file_selector provider mini.pick
    "nvim-telescope/telescope.nvim", -- for file_selector provider telescope
    "hrsh7th/nvim-cmp", -- autocompletion for avante commands and mentions
    "ibhagwan/fzf-lua", -- for file_selector provider fzf
    "nvim-tree/nvim-web-devicons", -- or echasnovski/mini.icons
    "zbirenbaum/copilot.lua", -- for providers='copilot'
    {
      -- support for image pasting
      "HakonHarnes/img-clip.nvim",
      event = "VeryLazy",
      opts = {
        -- recommended settings
        default = {
          embed_image_as_base64 = false,
          prompt_for_file_name = false,
          drag_and_drop = {
            insert_mode = true,
          },
          -- required for Windows users
          use_absolute_path = true,
        },
      },
    },
  },
})


I couldn't figure out how to make repro.lua work with this config.

twilwa · Apr 06 '25 20:04

Figured out the repro:

[Screenshot of the error messages]

-- Set custom lazy.nvim path
vim.env.LAZY_STDPATH = ".repro"

-- Bootstrap lazy.nvim
local lazypath = vim.fn.stdpath("data") .. "/lazy/lazy.nvim"
if not vim.loop.fs_stat(lazypath) then
  vim.fn.system({
    "git",
    "clone",
    "--filter=blob:none",
    "https://github.com/folke/lazy.nvim.git",
    "--branch=stable",
    lazypath,
  })
end
vim.opt.rtp:prepend(lazypath)

-- Initialize lazy.nvim with your configuration
require("lazy").setup({
  {
    "yetone/avante.nvim",
    event = "VeryLazy",
    version = false, -- Never set this value to "*"! Never!
    opts = {
      -- add any opts here
      -- for example
      provider = "gemini",
      auto_suggestions_provider = "openrouterquick",
      -- cursor_applying_provider = "groq",
      web_search_engine = {
        provider = "tavily", -- tavily, serpapi, searchapi, google or kagi
      },
      mappings = {
        ask = "<leader>ak", -- ask
        edit = "<leader>ae", -- edit
        -- refresh
      },
      azure = {
        endpoint = "https://models.inference.ai.azure.com", -- example: "https://<your-resource-name>.openai.azure.com"
        deployment = "", -- Azure deployment name (e.g., "gpt-4o", "my-gpt-4o-deployment")
        api_version = "2024-12-01-preview",
        timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
        temperature = 0,
        api_key_name = "GITHUB_TOKEN",
        model = "chatgpt",
        -- Increase this to include reasoning tokens (for reasoning models)
        reasoning_effort = "high", -- low|medium|high, only used for reasoning models
      },
      openai = {
        endpoint = "https://api.openai.com/v1",
        model = "o3-mini-high", -- your desired model (or use gpt-4o, etc.)
        timeout = 600000, -- Timeout in milliseconds, increase this for reasoning models
        temperature = 0,
        api_key_name = "OPENAI_API_KEY",
        --
        max_tokens = 16384, -- Increase this to include reasoning tokens (for reasoning models)
        --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
      },
      claude = {
        --  endpoint = "https://api.openai.com/v1",
        --  model = "gpt-4o", -- your desired model (or use gpt-4o, etc.)
        --  timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
        --  temperature = 0,
        api_key_name = "ANTHROPIC_API_KEY",
        --  max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
        --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
      },
      groq = {
        api_key_name = "GROQ_API_KEY",
        endpoint = "https://api.groq.com/openai/v1/",
        model = "llama-3.3-70b-versatile",
        max_completion_tokens = 128000,
        -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
        --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
      },
      openrouterthink = {
        -- endpoint = "https://api.openai.com/v1",
        model = "deepseek/deepseek-r1:free", -- your desired model (or use gpt-4o, etc.)
        -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
        -- temperature = 0,
        timeout = 600000,
        api_key_name = "OPENROUTER_API_KEY",
        max_completion_tokens = 64000,
        -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
        --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
      },
      copilot = {
        -- endpoint = "https://api.openai.com/v1",
        model = "claude-3.7-sonnet", -- your desired model (or use gpt-4o, etc.)
        -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
        -- temperature = 0,
        -- api_key_name = "OPENROUTER_API_KEY",
        -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
        --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
      },
      copilotthink = {
        -- endpoint = "https://api.openai.com/v1",
        model = "claude-3.7-sonnet-thought", -- your desired model (or use gpt-4o, etc.)
        timeout = 600000,
        max_completion_tokens = 64000,
        -- temperature = 0,
        -- api_key_name = "OPENROUTER_API_KEY",
        -- max_completion_tokens = 8192, -- Increase this to include reasoning tokens (for reasoning models)
        reasoning_effort = "high", -- low|medium|high, only used for reasoning models
      },
      gemini = {
        -- endpoint = "https://api.openai.com/v1",
        model = "gemini-2.5-pro-exp-03-25", -- your desired model (or use gpt-4o, etc.)
        -- disabled_tools = { "str_replace" },
        -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
        -- temperature = 0,
        -- api_key_name = "OPENROUTER_API_KEY",
        max_tokens = 64000, -- Increase this to include reasoning tokens (for reasoning models)
        --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
      },
      openrouterquick = {
        -- endpoint = "https://api.openai.com/v1",
        model = "deepseek/deepseek-chat-v3-0324:free", -- your desired model (or use gpt-4o, etc.)

        -- timeout = 30000, -- Timeout in milliseconds, increase this for reasoning models
        -- temperature = 0,
        -- api_key_name = "OPENROUTER_API_KEY",
        max_tokens = 64000, -- Increase this to include reasoning tokens (for reasoning models)
        --reasoning_effort = "medium", -- low|medium|high, only used for reasoning models
      },

      vendors = {
        copilotthink = {
          __inherited_from = "copilot",

          model = "claude-3.7-sonnet-thought", -- your desired model (or use gpt-4o, etc.)
          max_completion_tokens = 128000,
        },
        groq = {
          __inherited_from = "openai",
          api_key_name = "GROQ_API_KEY",
          endpoint = "https://api.groq.com/openai/v1/",
          model = "llama-3.3-70b-versatile",
          max_completion_tokens = 128000,
        },
        openrouterthink = {
          __inherited_from = "openai",
          endpoint = "https://openrouter.ai/api/v1",
          api_key_name = "OPENROUTER_API_KEY",
          model = "deepseek/deepseek-r1:free",
          max_completion_tokens = 128000,
        },
        openrouterquick = {
          __inherited_from = "openai",
          endpoint = "https://openrouter.ai/api/v1",
          api_key_name = "OPENROUTER_API_KEY",
          model = "deepseek/deepseek-chat-v3-0324:free",
          max_completion_tokens = 128000,
        },

        embed_model = "granite-embedding:30m", -- The embedding model to use for RAG service
        endpoint = "http://localhost:11434", -- The API endpoint for RAG service
        docker_extra_args = "", -- Extra arguments to pass to the docker command
      },
      dual_boost = {
        enabled = false,
        first_provider = "openrouterthink",
        second_provider = "copilotthink",
        prompt = "Based on the two reference outputs below, generate a response that incorporates elements from both but reflects your own judgment and unique perspective. Do not provide any explanation, just give the response directly. Reference Output 1: [{{provider1_output}}], Reference Output 2: [{{provider2_output}}]",
        timeout = 60000, -- Timeout in milliseconds
      },
      behaviour = {
        auto_focus_sidebar = true,
        auto_suggestions = true, -- Experimental stage
        auto_suggestions_respect_ignore = true,
        auto_set_highlight_group = true,
        auto_set_keymaps = true,
        auto_apply_diff_after_generation = false,
        jump_result_buffer_on_finish = false,
        support_paste_from_clipboard = true,
        minimize_diff = true,
        enable_token_counting = true,
        enable_cursor_planning_mode = false,
        enable_claude_text_editor_tool_mode = false,
        use_cwd_as_project_root = false,
      },
      file_selector = {
        --- @alias FileSelectorProvider "native" | "fzf" | "mini.pick" | "snacks" | "telescope" | string | fun(params: avante.file_selector.IParams|nil): nil
        provider = "fzf",
        -- Options override for custom providers
        provider_opts = {},
      },
    },
    -- if you want to build from source then do `make BUILD_FROM_SOURCE=true`
    build = "make",
    -- build = "powershell -ExecutionPolicy Bypass -File Build.ps1 -BuildFromSource false" -- for windows
    dependencies = {
      "nvim-treesitter/nvim-treesitter",
      "stevearc/dressing.nvim",
      "nvim-lua/plenary.nvim",
      "MunifTanjim/nui.nvim",
      "ravitemer/mcphub.nvim",
      --- The below dependencies are optional,
      "echasnovski/mini.pick", -- for file_selector provider mini.pick
      "nvim-telescope/telescope.nvim", -- for file_selector provider telescope
      "hrsh7th/nvim-cmp", -- autocompletion for avante commands and mentions
      "ibhagwan/fzf-lua", -- for file_selector provider fzf
      "nvim-tree/nvim-web-devicons", -- or echasnovski/mini.icons
      "zbirenbaum/copilot.lua", -- for providers='copilot'
      {
        -- support for image pasting
        "HakonHarnes/img-clip.nvim",
        event = "VeryLazy",
        opts = {
          -- recommended settings
          default = {
            embed_image_as_base64 = false,
            prompt_for_file_name = false,
            drag_and_drop = {
              insert_mode = true,
            },
            -- required for Windows users
            use_absolute_path = true,
          },
        },
      },
    },
  },
})

Edit: removing mcphub allows edits to go through.
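
For reference, the working repro above differs from the original config mainly in that the two mcphub hooks are gone (the ravitemer/mcphub.nvim dependency itself is still listed), so the workaround appears to amount to dropping or commenting out these lines from opts:

-- Removed from opts in the working repro (mcphub integration hooks):
-- system_prompt = function()
--   local hub = require("mcphub").get_hub_instance()
--   return hub:get_active_servers_prompt()
-- end,
-- custom_tools = function()
--   return {
--     require("mcphub.extensions.avante").mcp_tool(),
--   }
-- end,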

twilwa · Apr 06 '25 20:04

Looking at the error screenshot, it seems that there is an issue with the custom_system_prompt function in your configuration, unrelated to avante.

yetone · Apr 08 '25 03:04