Other LLMs? #3
Comments
What other LLM providers were you looking to use? Or do you want to be able to use your own locally hosted model? |
For tagging, ideally local (or remotely hosted). But Anthropic or Groq (the hardware company, not X) is always a consideration.
|
Try out Ollama; it runs locally and has a bunch of libraries. The Mistral and Meta 7B APIs via Hugging Face are pretty decent |
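As a rough illustration of the suggestion above, here is a minimal sketch of calling a locally running Ollama model through its OpenAI-compatible chat endpoint. The base URL, model name, and prompt are assumptions for illustration, not anything the plugin ships with:

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for a local Ollama server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Assumed defaults: Ollama listens on localhost:11434 out of the box.
req = build_chat_request("http://localhost:11434", "mistral", "Suggest tags for this note.")
# urllib.request.urlopen(req) would send it once a server is actually running.
```

Because the endpoint follows the OpenAI wire format, swapping in a hosted provider would only mean changing the base URL and adding an Authorization header.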
I've got that one on my list. I also like the LibreChat interface. My real problem is time. A 50-hour job and a baby = no time. I just watch all this AI stuff getting anxious, wanting to build stuff haha.
|
Going to try to add support for
Would like to look into supporting locally hosted models after that. Thank you for all the feedback! |
The latest release comes with support for MistralAI Small and Large models. Will hopefully be adding support for Groq and Anthropic soon |
This plugin would be perfect for locally hosted models. I don't feel like tagging is a complex GPT task: info in --> minimal summary out, a perfect task for it. Therefore, even weak models should do an okay job, and their output can easily be cleaned up afterward. That eliminates the cost and privacy barriers. |
Just added support for GPT-4o |
Can you please provide support for GPT-4o mini? It's super cheap and fast!! Thank you |
Just added support for GPT-4o mini :), sorry for the hold up |
I'm running Ollama locally and would really like to be able to use it as a back end. I can put the URL into the custom base URL field; it looks like all you would need to do is pull my list of models (Ollama is compatible with the OpenAI API). |
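Pulling the model list the comment above describes would just mean GETting `/v1/models` from the OpenAI-compatible server and reading its `data` array. A sketch of the parsing step, with an illustrative sample response (the model names are assumptions):

```python
def list_model_ids(models_response: dict) -> list[str]:
    """Extract model IDs from an OpenAI-style GET /v1/models response body."""
    return [m["id"] for m in models_response.get("data", [])]

# Shape of the JSON an OpenAI-compatible server returns from /v1/models:
sample = {
    "object": "list",
    "data": [
        {"id": "llama3:latest", "object": "model"},
        {"id": "mistral:latest", "object": "model"},
    ],
}
list_model_ids(sample)  # → ['llama3:latest', 'mistral:latest']
```

A plugin could populate its model dropdown from this list instead of hard-coding provider model names.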
New models were added in the most recent update and support for Ollama as well. Looks like there are some bugs with Ollama but I will get those all ironed out within the next few days |
Can you please provide support for Gemini models? They have a free tier. https://ai.google.dev/pricing |
Please add support for xAI's Grok; Grok-1 is self-hostable (can be downloaded off GitHub) and X subscribers get access to the newer Grok. |
@misdelivery Gemini models have been added in the latest release (1.2.4) |
Maybe consider compatibility with llama.cpp, kobold.cpp, and tabbyAPI as backends. |
Can we use other APIs and LLMs?
I'm trying to not get stuck in the OpenAI world.