# nvim.ai

nvim.ai is a powerful Neovim plugin that brings AI-assisted coding and chat capabilities directly into your favorite editor. Inspired by Zed AI, it lets you chat with your buffers, insert code with an inline assistant, and leverage various LLM providers for context-aware AI assistance.
*Demo: chat with buffers (nvimai_chat_with_buffers.mp4)*

*Demo: set up context and ask the LLM to generate code, then use inline assist to insert or rewrite the code (nvimai_inline_assist_insert_code.mp4)*

*Demo: set up context with diagnostics from LSP (nvimai_slash_diagnostics.mp4)*
## Features

- 🤖 Chat with buffers: Interact with AI about your code and documents
- 🧠 Context-aware AI assistance: Get relevant help based on your current work
- 📝 Inline assistant: Code insertion and rewriting
- 🛠️ Slash Commands (see the example after this list):
  - `/buf` with bufnr
  - `/diagnostics` with bufnr
- 🌐 Multiple LLM provider support:
  - Ollama (local)
  - Anthropic
  - Deepseek
  - Cohere
  - Gemini
  - Mistral
  - Groq
  - Sambanova
  - Hyperbolic
  - Cerebras
  - OpenAI (not tested)
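As an example of the slash commands, a `/you` block might pull a buffer and its LSP diagnostics into the context like this (the buffer number and file name are illustrative, and the exact form of the `/diagnostics` output is up to the plugin):

```
/you
/buf 2: lua/myplugin/init.lua
/diagnostics 2
Why is the LSP reporting an undefined global here?
```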
## Installation

### vim-plug

```vim
Plug 'nvim-treesitter/nvim-treesitter', {'do': ':TSUpdate'}
Plug 'nvim-lua/plenary.nvim'
Plug 'magicalne/nvim.ai', { 'tag': '*' }
```
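vim-plug only installs the plugin, so you still call the setup function yourself. A minimal sketch, assuming the same `setup()` entry point used in the configuration examples below:

```lua
-- Somewhere after plug#end(), e.g. in a lua heredoc block in init.vim
-- or in your init.lua:
require('ai').setup({
  provider = "anthropic", -- or any other supported provider
})
```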
### lazy.nvim

```lua
-- Setup lazy.nvim
require("lazy").setup({
  spec = {
    { "nvim-treesitter/nvim-treesitter", build = ":TSUpdate" }, -- nvim.ai depends on treesitter
    {
      "magicalne/nvim.ai",
      version = "*",
      dependencies = {
        "nvim-lua/plenary.nvim",
        "nvim-treesitter/nvim-treesitter",
      },
      opts = {
        provider = "anthropic", -- You can configure your provider, model, or keymaps here.
      },
    },
  },
  -- ...
})
```
## Configuration

You can find all the config and keymaps here.
### Ollama

```lua
local ai = require('ai')
ai.setup({
  provider = "ollama",
  ollama = {
    model = "llama3.1:70b", -- You can start with a smaller one like `gemma2` or `llama3.1`
    --endpoint = "http://192.168.2.47:11434", -- In case you access Ollama from another machine
  },
})
```
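If the model is not present locally yet, pull it first with the Ollama CLI (assuming a standard Ollama install):

```sh
ollama pull llama3.1:70b  # or start smaller, e.g. `ollama pull gemma2`
```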
### API keys

I put my keys in `~/.config/.env` and `source` it from my `.zshrc` (e.g. `source ~/.config/.env`).
```sh
export ANTHROPIC_API_KEY=""
export CO_API_KEY=""
export GROQ_API_KEY=""
export DEEPSEEK_API_KEY=""
export MISTRAL_API_KEY=""
export GOOGLE_API_KEY=""
export HYPERBOLIC_API_KEY=""
export OPENROUTER_API_KEY=""
export FAST_API_KEY=""
export CEREBRAS_API_KEY=""
```
Then select the provider you want in `setup()`:

```lua
local ai = require('ai')
ai.setup({
  --provider = "snova",
  --provider = "hyperbolic",
  --provider = "cerebras",
  --provider = "gemini",
  --provider = "mistral",
  provider = "anthropic",
  --provider = "deepseek",
  --provider = "groq",
  --provider = "cohere",
})
```
To point the OpenAI provider at a local OpenAI-compatible endpoint:

```lua
local ai = require('ai')
ai.setup({
  provider = "openai",
  openai = {
    ["local"] = true,
    model = "llama3.1:70b",
    endpoint = "http://localhost:8080",
  },
})
```
### Keymaps

```lua
{
  -- ..
  -- Keymaps
  keymaps = {
    toggle = "<leader>c", -- Toggle chat dialog
    send = "<CR>", -- Send message in normal mode
    close = "q", -- Close chat dialog
    clear = "<C-l>", -- Clear chat history
    stop_generate = "<C-c>", -- Stop generating
    previous_chat = "<leader>[", -- Open previous chat from history
    next_chat = "<leader>]", -- Open next chat from history
    inline_assist = "<leader>i", -- Run InlineAssist command with prompt
  },
}
```
- `<leader>c` -- Toggle chat
- `<leader>[` -- Open previous chat
- `<leader>]` -- Open next chat
- `q` -- Close chat
- `<CR>` -- Send message in normal mode
- `<C-l>` -- Clear chat history
- `<C-c>` -- Stop generating
- `<leader>i` -- Insert code in normal mode with a prompt, or rewrite the selection in visual mode
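Any of these can be changed in `setup()`; for example, a sketch that remaps the chat toggle while keeping the other defaults:

```lua
require('ai').setup({
  provider = "anthropic",
  keymaps = {
    toggle = "<leader>a", -- use <leader>a instead of the default <leader>c
  },
})
```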
## Chat

The chat dialog is a special buffer. nvim.ai parses the content using keywords. There are 3 roles in the buffer:

- `/system`: You can overwrite the system prompt by inserting `/system your_system_prompt` on the first line.
- `/you`: Lines after this are your prompt.
  - You can add buffers with `/buf {bufnr}`.
  - Once you finish your prompt, you can send the request by pressing `Enter` in normal mode.
- `/assistant`: The streaming content from the LLM will appear below this line.
Since the chat dialog is just a buffer, you can edit anything in it. Be aware that only the last `/you` block is treated as the prompt. Just like Zed AI, this feature is called "chat with context": if you don't like the response, you can edit the last prompt and resend, back and forth.
Here is an example:

```
/system You are an expert on lua and neovim plugin development.
/you
/buf 1: init.lua
How to blablabla?
/assistant:
...
```
## Inline assist

By pressing `<leader>i` and typing your instruction, you can insert or rewrite a code block anywhere in the current file. Note that the inline assist can read the chat messages in the sidebar. Therefore, you can ask the LLM about your code and instruct it to generate a new function, then insert that function by running inline assist with the prompt: `Insert the function`.
The new way of working with nvim.ai is:

- Build context by chatting with the LLM.
- Ask the LLM to generate code.
- Apply the changes from the last chat using inline assist.
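Putting the workflow together, a session might look like this (the buffer number and prompts are illustrative); once the response has streamed in, move the cursor to where the code should go and run inline assist (`<leader>i`) with a prompt like `Insert the function`:

```
/you
/buf 1: init.lua
Write a small helper that debounces a function call.
/assistant:
...generated code...
```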
## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
This project is inspired by Zed AI.
## License

nvim.ai is licensed under the Apache License. For more details, please refer to the LICENSE file.