
attempt to index local metadata (a nil value) #45

Open
jakubdonovan opened this issue Jan 7, 2025 · 32 comments


@jakubdonovan

I have been getting the following error very often recently when trying to send messages to the LLM. Clearing the chat and restarting vim doesn't solve the issue.
Screenshot 2025-01-07 at 21 31 42

@magicalne
Owner

This is due to compatibility issues with the metadata feature I developed last month. I will take a look and fix it later today.
Right now, there are three ways to resolve your issue:

  1. Use tags in your config:
Plug 'magicalne/nvim.ai', { 'tag': '*' }

This will point to tag v0.1.0, which is the older version.

If you still want to stick with the main branch, then take a look at option 2 and 3.

  2. Add the metadata to the head of your chat:
---
provider: deepseek
model: deepseek-chat
temperature: 0
system_prompt: You are an AI programming assistant.
---

Replace the provider and model with your own. You can now overwrite these parameters directly in the chat.

  3. Delete your cache files in the nvim.ai data folder:
    You can delete the files from the nvim plugin data folder. Since I use Linux, my data folder is located at: ~/.local/share/nvim/nvim.ai/saved_chats. You can either delete these files or move them to another folder. The chat files will then be recreated automatically.

@jakubdonovan
Author

jakubdonovan commented Jan 8, 2025

Thanks for getting back to me. After clearing all the relevant cache and reinstalling the plugin, I was able to open a chat, but for some reason it came with the entire default prompt.
Screenshot 2025-01-08 at 02 01 23

I have an llms-full.txt containing Svelte 5 documentation, which I want the AI to learn so that it can give me higher-quality answers when working with Svelte projects. I know I could simply attach the txt buffer whenever I open a new chat, but I'm thinking it might be better to include it in the default prompt so it's stored in the global context. How would you approach this?

@magicalne
Owner

You know you can overwrite the system prompt.
Customizing the prompt is a bit tricky; I'm thinking about putting customized prompt files under the nvim cache folder.
Here is a trick that may help a little if you don't want to update the system prompt: just insert /you and /assistant blocks before your actual questions in your projects.

{metadata}

/you
{your_long_prompt_about_svelte5}
Do you understand?

/assistant
Yes, I do.

Then you can ask questions as usual.

@magicalne
Owner

I have a better idea. I can create a new command, such as /file or /url. This command would allow users to import a customized prompt from a file path or a URL.

@jakubdonovan
Author

jakubdonovan commented Jan 8, 2025

Good idea! It would be nice to be able to add documentation to the prompt without overriding the prompt entirely.

@jakubdonovan
Author

I'm having a lot of issues where the AI will not respond to my messages.
Screenshot 2025-01-08 at 11 07 01
Even after deleting the saved_chats folder
Screenshot 2025-01-08 at 11 11 31

@magicalne
Owner

Are there any error messages? Could you please copy/paste all of your chat content?

@jakubdonovan
Author

No error messages; it just saves the chat. Do you want me to paste the chat here, or did you mean something else?

@magicalne
Owner

Yes, just paste your chat content here so I can try to reproduce it.

@jakubdonovan
Author


provider: anthropic
model: claude-3-5-sonnet-20240620
temperature: 0
max_tokens: 4096

/system
You are an AI programming assistant.
You are currently plugged in to the Neovim text editor on a user's machine.

Your tasks include:

  • Answering general programming questions.
  • Explaining how the code in a Neovim buffer works.
  • Reviewing the selected code in a Neovim buffer.
  • Generating unit tests for the selected code.
  • Proposing fixes for problems in the selected code.
  • Scaffolding code for a new workspace.
  • Finding relevant code to the user's query.
  • Proposing fixes for test failures.
  • Answering questions about Neovim.
  • Running tools.

You must:

  • Follow the user's requirements carefully and to the letter.
  • Keep your answers short and impersonal, especially if the user responds with context outside of your tasks.
  • Minimize other prose.
  • Use Markdown formatting in your answers.
  • Include the programming language name at the start of the Markdown code blocks.
  • Avoid line numbers in code blocks.
  • Avoid wrapping the whole response in triple backticks.
  • Only return relevant code.

/you
/buf 21: llms-full.txt
Please carefully review the attached Svelte 5 documentation in detail. Ensure that any responses you provide regarding Svelte 5 are accurate and consistent with the information in this document. Before answering, verify your knowledge against the documentation to avoid any inaccuracies. If needed, cite specific sections or examples from the document to support your answers. Let me know if there’s anything unclear in the documentation or if you require further clarification.

/assistant:

/you:
ping

/assistant:

/you:

@magicalne
Owner

OK, a quick debug. Please delete the content below:

/assistant:

/you:
ping

/assistant:

/you:

The first assistant response is empty. I'm not sure why that happens, but it is definitely invalid.

Your last valid message should be:

/you
/buf 21: llms-full.txt
Please carefully review the attached Svelte 5 documentation in detail. Ensure that any responses you provide regarding Svelte 5 are accurate and consistent with the information in this document. Before answering, verify your knowledge against the documentation to avoid any inaccuracies. If needed, cite specific sections or examples from the document to support your answers. Let me know if there’s anything unclear in the documentation or if you require further clarification.

Please try to send this request again. Let's see what happens.

@jakubdonovan
Author

Screenshot 2025-01-08 at 11 27 45

This happens randomly. I would probably need to delete all of the nvim.ai cache and reinstall the plugin to fix it.

@magicalne
Owner

So sometimes it works? I cannot test anthropic right now. Are you sure it's not a network issue?
You can enable debug mode in your config:

local ai = require('ai')
ai.setup({
  debug = true,
  ...
})

That will give you more information from the provider.

@jakubdonovan
Author

Screenshot 2025-01-08 at 11 38 46

In opts? How do I view these logs?

@magicalne
Owner

Yes, this is correct.
There are no logs in nvim. 😢 But it will print something useful in messages. You can check them by running :messages in nvim after sending a request.

@jakubdonovan
Author

jakubdonovan commented Jan 8, 2025

Apparently the prompt is too long even though I was able to send it before.
Screenshot 2025-01-08 at 11 46 09

@magicalne
Owner

magicalne commented Jan 8, 2025

Damn. I should handle this situation.
Thanks for reporting!

@jakubdonovan
Author

I'm not so sure about this. I was able to send this document before.

@magicalne
Owner

I think we can estimate the size of your prompt file. Have you updated llms-full.txt? How many words are in this file?
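As a rough way to sanity-check a prompt file against a model's context window, the file size can be turned into a token estimate; the ~4 characters per token ratio below is a common rule of thumb, not the provider's exact tokenizer:

```python
# Rough token estimate for a prompt file, using the common rule of thumb
# of ~4 characters per token (NOT the provider's exact tokenizer).
def estimate_tokens(text: str) -> int:
    return len(text) // 4

# Stand-in for reading llms-full.txt from disk.
doc = "word " * 50_000
print(estimate_tokens(doc))  # 62500 for this stand-in document
```

If the estimate lands anywhere near the model's documented context limit, trimming the documentation file is the safer bet.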

@jakubdonovan
Author

jakubdonovan commented Jan 8, 2025

I just used a much less detailed version of the docs, which came to less than half the tokens, and the prompt sent. Maybe the full version didn't send properly that time, since I kept getting outdated and inaccurate answers that went directly against the contents of the docs.

@magicalne
Owner

Alright. I just added a new feature: the HTTP error message will now be shown in the assistant block.

@jakubdonovan
Author

Unable to send any messages after updating.
Screenshot 2025-01-08 at 14 00 17

@magicalne
Owner

Do you have two consecutive /you blocks? It looks like it failed to parse your chat content.

@jakubdonovan
Author

jakubdonovan commented Jan 8, 2025

Nope, just the one underneath the system prompt.
Screenshot 2025-01-08 at 14 40 22

messages nil

@magicalne
Owner

I cannot reproduce. 😢
This issue should be related to your chat content. messages nil means it cannot parse the metadata, system prompt, or messages. Can you upload your chat and llms-minimal.txt so I can try to reproduce it?
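As an illustration of that failure mode (a toy sketch, not nvim.ai's actual parser): a chat file splits into metadata, a /system prompt, and alternating /you and /assistant messages, and a trailing empty /assistant: block yields a message with no content.

```python
# Toy sketch (NOT nvim.ai's actual parser) of splitting a chat file into
# metadata, system prompt, and messages, and spotting empty message blocks.
def parse_chat(text: str):
    metadata, system, messages = {}, [], []
    section = "metadata"
    for line in text.splitlines():
        if line.startswith("/system"):
            section = "system"
        elif line.startswith("/you"):
            messages.append({"role": "user", "content": []})
            section = "message"
        elif line.startswith("/assistant"):
            messages.append({"role": "assistant", "content": []})
            section = "message"
        elif section == "metadata" and ":" in line:
            key, _, value = line.partition(":")
            metadata[key.strip()] = value.strip()
        elif section == "system":
            system.append(line)
        elif section == "message":
            messages[-1]["content"].append(line)
    # A trailing "/assistant:" with no text produces a message with empty
    # content, which API providers reject.
    empty = [m for m in messages if not "".join(m["content"]).strip()]
    return metadata, "\n".join(system).strip(), messages, empty

chat = "provider: anthropic\n/system\nYou are helpful.\n/you\nping\n/assistant:\n"
meta, system_prompt, msgs, empty = parse_chat(chat)
print(meta["provider"], len(msgs), len(empty))  # anthropic 2 1
```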

@jakubdonovan
Author

llms-minimal.txt


provider: anthropic
model: claude-3-5-sonnet-20240620
temperature: 0
max_tokens: 4096

/system You are an AI programming assistant. You are currently plugged in to the
Neovim text editor on a user's machine.

Your tasks include:

  • Answering general programming questions.
  • Explaining how the code in a Neovim buffer works.
  • Reviewing the selected code in a Neovim buffer.
  • Generating unit tests for the selected code.
  • Proposing fixes for problems in the selected code.
  • Scaffolding code for a new workspace.
  • Finding relevant code to the user's query.
  • Proposing fixes for test failures.
  • Answering questions about Neovim.
  • Running tools.

You must:

  • Follow the user's requirements carefully and to the letter.
  • Keep your answers short and impersonal, especially if the user responds with
    context outside of your tasks.
  • Minimize other prose.
  • Use Markdown formatting in your answers.
  • Include the programming language name at the start of the Markdown code
    blocks.
  • Avoid line numbers in code blocks.
  • Avoid wrapping the whole response in triple backticks.
  • Only return relevant code.

/you
/buf 29: llms-minimal.txt
Review the attached documentation in detail. Ensure
that any responses you provide regarding Svelte are accurate and consistent with
the information in this document. Before answering, verify your knowledge
against the documentation to avoid inaccuracies.

/assistant:
I understand. I have carefully reviewed the provided Svelte and SvelteKit documentation. I will ensure that any responses I give regarding Svelte and SvelteKit are accurate and consistent with this documentation. Before answering questions, I will verify my knowledge against this authoritative source to avoid providing any incorrect information. Please feel free to ask me any questions about Svelte or SvelteKit, and I will do my best to provide accurate answers based on this documentation.

/you:

@magicalne
Owner

I cannot run with your file, but it raises another error.
I will take a close look at this issue. In the meantime, could you please try with a simple prompt? It's pretty late in my timezone; I will get back to you tomorrow.

@magicalne
Owner

And make sure you are referencing the right buffer number.

/buf 29: llms-minimal.txt

@jakubdonovan
Author

jakubdonovan commented Jan 9, 2025

I'm still unable to send messages with the buffer attached. It's under half of the token limit.
Screenshot 2025-01-09 at 05 24 16

After deleting it, I can send messages just fine. Why do you think this happens?
Screenshot 2025-01-09 at 05 26 03

@magicalne
Owner

Are you sure 29 is the right bufnr?

@jakubdonovan
Author

It was when I sent it. I thought I only needed to send it once for the AI to read it and store the information within the chat's context. Do I actually need to keep the llms-minimal.txt buffer open throughout the entire conversation?

@magicalne
Owner

Yes, you need to keep it open. The command is just a placeholder; it doesn't copy the content.
The buffer number should be checked before requests are sent. I will check that part later.
