nvim.ai

nvim.ai is a powerful Neovim plugin that brings AI-assisted coding and chat capabilities directly into your favorite editor. Inspired by Zed AI, it allows you to chat with your buffers, insert code with an inline assistant, and leverage various LLM providers for context-aware AI assistance.

Chat with buffers

nvimai_chat_with_buffers.mp4

Inline Assist

nvimai_inline_assist_insert_code.mp4

Set up context and ask the LLM to generate code, then use inline assist to insert or rewrite it.

/diagnostics

nvimai_slash_diagnostics.mp4

Set up context with diagnostics from LSP.
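
A typical use inside the chat buffer looks like this (the buffer number and prompt are illustrative; the chat syntax is explained under Usage below):

/you

/diagnostics 1

How do I fix these warnings?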

Features

  • 🤖 Chat with buffers: Interact with AI about your code and documents
  • 🧠 Context-aware AI assistance: Get relevant help based on your current work
  • 📝 Inline assistant: Code insertion and rewriting
  • 🛠️ Slash commands:
    • /buf {bufnr}: add a buffer to the context
    • /diagnostics {bufnr}: add a buffer's LSP diagnostics to the context
  • 🌐 Multiple LLM provider support:
    • Ollama (local)
    • Anthropic
    • Deepseek
    • Cohere
    • Gemini
    • Mistral
    • Groq
    • Sambanova
    • Hyperbolic
    • Cerebras
    • OpenAI (not tested)

Install

vim-plug

Plug 'nvim-treesitter/nvim-treesitter', {'do': ':TSUpdate'}
Plug 'nvim-lua/plenary.nvim'
Plug 'magicalne/nvim.ai', { 'tag': '*' }
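
vim-plug does not call the plugin's setup for you, so configure it from your Lua config afterwards. A minimal sketch (the provider choice is just an example; see Config below):

require('ai').setup({
  provider = "ollama", -- any provider from the list above
})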

Lazy

-- Setup lazy.nvim
require("lazy").setup({
  spec = {
    {"nvim-treesitter/nvim-treesitter", build = ":TSUpdate"}, -- nvim.ai depends on treesitter
    {
      "magicalne/nvim.ai",
      version = "*",
      dependencies = {
        "nvim-lua/plenary.nvim",
        "nvim-treesitter/nvim-treesitter",
      },
      opts = {
        provider = "anthropic", -- You can configure your provider, model or keymaps here.
      }
    },

  },
  -- ...
})

Config

You can find all the configuration options and default keymaps here.

Ollama

local ai = require('ai')
ai.setup({
  provider = "ollama",
  ollama = {
    model = "llama3.1:70b", -- You can start with smaller one like `gemma2` or `llama3.1`
    --endpoint = "http://192.168.2.47:11434", -- In case you access ollama from another machine
  }
})

Others

Add your API keys to your dotfiles

I put my keys in ~/.config/.env and source it in my .zshrc.

export ANTHROPIC_API_KEY=""
export CO_API_KEY=""
export GROQ_API_KEY=""
export DEEPSEEK_API_KEY=""
export MISTRAL_API_KEY=""
export GOOGLE_API_KEY=""
export HYPERBOLIC_API_KEY=""
export OPENROUTER_API_KEY=""
export FAST_API_KEY=""
export CEREBRAS_API_KEY=""

local ai = require('ai')
ai.setup({
  --provider = "snova",
  --provider = "hyperbolic",
  --provider = "cerebras",
  --provider = "gemini",
  --provider = "mistral",
  provider = "anthropic",
  --provider = "deepseek",
  --provider = "groq",
  --provider = "cohere",
})
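
Each provider can also take its own table for model or endpoint overrides, mirroring the ollama example above. A minimal sketch (the deepseek table shape is an assumption; check the plugin's default config for the exact keys):

local ai = require('ai')
ai.setup({
  provider = "deepseek",
  deepseek = {
    model = "deepseek-coder", -- ASSUMPTION: keys mirror the ollama table; verify against the default config
  },
})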

OpenAI compatible API

Local LLMs like llama.cpp and koboldcpp

local ai = require('ai')
ai.setup({
  provider = "openai",
  openai = {
    ["local "] = true,
    model = "llama3.1:70b",
    endpoint = "http://localhost:8080",
  }
})

Default Keymaps

{
  -- ..
  -- Keymaps
  keymaps = {
    toggle          = "<leader>c", -- Toggle chat dialog
    send            = "<CR>",      -- Send message in normal mode
    close           = "q",         -- Close chat dialog
    clear           = "<C-l>",     -- Clear chat history
    stop_generate   = "<C-c>",     -- Stop generating
    previous_chat   = "<leader>[", -- Open previous chat from history
    next_chat       = "<leader>]", -- Open next chat from history
    inline_assist   = "<leader>i", -- Run InlineAssist command with prompt
  },
}
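
You can override any of these by passing your own keymaps table to setup (a minimal sketch; the replacement bindings are arbitrary examples):

local ai = require('ai')
ai.setup({
  provider = "anthropic",
  keymaps = {
    toggle        = "<leader>a", -- example: move the chat toggle off <leader>c
    inline_assist = "<leader>r",
  },
})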

Chat

  • <leader>c -- Toggle chat dialog
  • <leader>[ -- Open previous chat
  • <leader>] -- Open next chat
  • q -- Close chat
  • <CR> -- Send message in normal mode
  • <C-l> -- Clear chat history
  • <C-c> -- Stop generating

Inline Assist

  • <leader>i -- Insert code with a prompt in normal mode, or rewrite the selected section in visual mode.

Usage

Chat

The chat dialog is a special buffer. nvim.ai parses its content by keywords. There are three roles in the buffer:

  • /system: You can overwrite the system prompt by inserting /system your_system_prompt in the first line.
  • /you: Lines after this are your prompt.
    • You can add buffers with /buf {bufnr}
    • Once you finish your prompt, you can send the request by pressing Enter in normal mode.
  • /assistant: The streaming content from the LLM appears below this line. Since the chat dialog is just a buffer, you can edit anything in it. Be aware that only the last /you block is treated as the prompt. Just like Zed AI, this feature is called "chat with context": if you don't like the response, you can edit the last prompt and resend, going back and forth.

Here is an example:

/system You are an expert on lua and neovim plugin development.

/you

/buf 1: init.lua

How to blablabla?

/assistant:
...

Context-Aware Assistance

Inline Assist

By pressing <leader>i and typing your instruction, you can insert or rewrite a code block anywhere in the current file. Note that inline assist can read the chat messages in the sidebar, so you can ask the LLM about your code, instruct it to generate a new function, and then insert that function by running inline assist with the prompt: Insert the function.

Workflow with nvim.ai

The new way of working with nvim.ai is:

  • Build context by chatting with the LLM.
  • Ask the LLM to generate code.
  • Apply the changes from the last chat using inline assist.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

Acknowledgements

This project is inspired by Zed AI.

License

nvim.ai is licensed under the Apache License. For more details, please refer to the LICENSE file.


⚠️ Note: This plugin is under active development. Features and usage may change.
