
Other LLMs? #3

Open
LostFool opened this issue Mar 31, 2024 · 16 comments
Labels
enhancement New feature or request

Comments

@LostFool

Can we use other APIs and LLMs?
I'm trying not to get stuck in the OpenAI world.

@lucagrippa
Owner

What other LLM providers were you looking to use? Or do you want to be able to use your own locally hosted model?

@LostFool
Author

LostFool commented Apr 1, 2024 via email

@ShriyanshCode

Try out Ollama; it runs locally and has a bunch of libraries. The Mistral and Meta 7B APIs via Hugging Face are pretty decent.

@LostFool
Author

LostFool commented Apr 5, 2024 via email

@lucagrippa
Owner

lucagrippa commented Apr 5, 2024

Going to try to add support for:

  • Anthropic
  • Mistral AI
  • Groq

Would like to look into supporting locally hosted models after that.

Thank you for all the feedback!

@lucagrippa
Owner

The latest release comes with support for the Mistral AI Small and Large models. Will hopefully be adding support for Groq and Anthropic soon.

@lucagrippa lucagrippa added the enhancement New feature or request label Apr 30, 2024
@Dr-DW

Dr-DW commented Apr 30, 2024

This plugin would be perfect for locally hosted models. I don't feel like tagging is a complex GPT task: info in --> minimal summary out, a perfect task for a local model. Even weak models should do an okay job, and their output can easily be cleaned up afterwards. That would eliminate both the cost and the privacy concerns.

@lucagrippa
Owner

Just added support for GPT-4o

@sergioatp

Can you please provide support for GPT-4o mini? It's super cheap and fast!
https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/

Thank You

@lucagrippa
Owner

Just added support for GPT-4o mini :). Sorry for the hold-up!

@mdlmarkham

I'm running Ollama locally and would really like to be able to use it as a back end. I can put the URL into the custom base URL field; it looks like all you would need to do is pull my list of models (Ollama is compatible with the OpenAI API).
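The suggestion above can be sketched out. Ollama exposes an OpenAI-compatible API (by default on port 11434), so a plugin given a custom base URL could discover the locally available models from the standard `/v1/models` endpoint. This is a minimal sketch, not the plugin's actual code; the helper names and the default URL are assumptions for illustration:

```python
import json
from urllib.request import urlopen


def list_model_ids(models_json: str) -> list[str]:
    """Extract model IDs from an OpenAI-style /v1/models response body."""
    payload = json.loads(models_json)
    # The OpenAI list format wraps models in a "data" array of {"id": ...} objects.
    return [model["id"] for model in payload.get("data", [])]


def fetch_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    """Query a local Ollama server via its OpenAI-compatible endpoint."""
    with urlopen(f"{base_url}/v1/models") as response:
        return list_model_ids(response.read().decode("utf-8"))
```

A plugin could call `fetch_ollama_models()` with the user's custom base URL and populate its model dropdown from the result, rather than hard-coding a provider's model list.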

@lucagrippa
Owner

New models were added in the most recent update, along with support for Ollama. Looks like there are some bugs with Ollama, but I will get those ironed out within the next few days.

@misdelivery

Can you please provide support for Gemini models? They have a free tier. https://ai.google.dev/pricing

@zhinn0

zhinn0 commented Dec 2, 2024

Please add support for xAI's Grok; Grok-1 is self-hostable (the weights can be downloaded from GitHub), and X subscribers get access to the newer Grok.

@lucagrippa
Owner

@misdelivery Gemini models have been added in the latest release (1.2.4)

@Asherathe

Maybe consider compatibility with llama.cpp, kobold.cpp, and tabbyAPI as backends.

9 participants