attempt to index local 'metadata' (a nil value) #45
This is due to compatibility issues with the metadata feature I developed last month. I will take a look and fix it later today.
You can pin to the last release with vim-plug:

```vim
Plug 'magicalne/nvim.ai', { 'tag': '*' }
```

This will point to tag v0.1.0, which is the older version. If you still want to stick with the main branch, then take a look at options 2 and 3.
```yaml
---
provider: deepseek
model: deepseek-chat
temperature: 0
system_prompt: You are an AI programming assistant.
---
```

Replace the provider and model with your own. You can override these parameters directly in the chat right now.
You know you can override the system prompt. Then you can ask questions as usual.
I have a better idea: I can create a new command, such as /file or /url, which would allow users to import a customized prompt from a file path or a URL.
Good idea! It would be nice to be able to add documentation to the prompt without overriding the prompt entirely.
Are there any error messages? Could you please copy/paste all your chat content here?
No error messages; it just saves the chat. Do you want me to paste the chat here, or what did you mean?
Yes, just paste your chat content here so I can try to reproduce it.
provider: anthropic
OK, a quick debug. Please delete the content below:

The first … The last valid message of …

Please try sending this request again. Let's see what happens.
So sometimes it works? I cannot test anthropic right now. Are you sure it's not a network issue? Try enabling debug mode:

```lua
local ai = require('ai')
ai.setup({
  debug = true,
  -- ...your other options
})
```

That will give you more information from the provider.
Yes, this is correct.
Damn. I should handle this situation.
I'm not so sure about this. I was able to send this document before.
I think we can estimate the size of your prompt file. Have you updated llms-full.txt? How many words are in this file?
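A rough way to estimate this from the shell (a sketch only; the 4/3 words-to-tokens ratio is just a common rule of thumb for English text, and the sample file below is a stand-in, not the real llms-full.txt):

```shell
# Create a small stand-in file so the snippet is self-contained;
# in practice, point wc at your actual llms-full.txt.
printf 'one two three four five six seven eight\n' > /tmp/llms-full.txt

# Count the words in the prompt file (tr strips wc's padding on some systems).
words=$(wc -w < /tmp/llms-full.txt | tr -d ' ')

# Rule of thumb: ~0.75 words per token, so tokens are roughly words * 4 / 3.
tokens=$(( words * 4 / 3 ))
echo "words=$words tokens~$tokens"
```

If the estimate is anywhere near the model's context window, a trimmed-down version of the docs is the safer bet.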
I just used a much less detailed version of the docs, which came to less than half the tokens, and the prompt sent. Maybe the full version didn't send properly that time, since I kept getting outdated and inaccurate answers that went directly against the contents of the docs.
Alright, I just added a new feature. Now the HTTP error message will be shown in the …
Do you have two …?
I cannot reproduce. 😢 |
provider: anthropic
I cannot reproduce with your file, but it raises another error.
And make sure you are referencing the right buffer number.

Are you sure 29 is the right bufnr?
It was when I sent it. I thought that I only needed to send it once for the AI to read it and store the information within the chat's context. Do I actually need to keep the llms-minimal.txt buffer open throughout the entire conversation?
Yes, you need to keep it open. The command is just a placeholder; it doesn't copy the content.
I have been getting the following error very often recently when trying to send messages to the llm. Clearing chat and restarting vim doesn't solve the issue.