Replacing OpenAI GPT-4 with Ollama as LLM-as-a-Judge and Replacing External API Calls with Local LLMs in Giskard (RAGET Toolkit) #2096

Open
1 task done
pds13193 opened this issue Jan 10, 2025 · 0 comments
Labels: question (Further information is requested)

Comments


pds13193 commented Jan 10, 2025

Checklist

  • I've searched the project's issues.

❓ Question

I am currently using Giskard, specifically the RAGET toolkit, to evaluate our chatbot. By default, Giskard uses OpenAI's GPT-4 to judge our model's output. However, I would like to replace GPT-4 with an open-source LLM-as-a-judge served locally through Ollama. I have already set up the Ollama client using the code below (the one mentioned in the Giskard documentation).

import giskard
api_base = "http://localhost:11434" # Default api_base for local Ollama
giskard.llm.set_llm_model("ollama/llama3.1", disable_structured_output=True, api_base=api_base)
giskard.llm.set_embedding_model("ollama/nomic-embed-text", api_base=api_base)
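
Before configuring Giskard, I verify that the local Ollama server is reachable and that the models are pulled. This is just a quick sanity check against Ollama's default REST endpoint (not part of Giskard itself):

import requests

# Assumes Ollama is running on its default port; /api/tags lists the locally pulled models.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
local_models = [m["name"] for m in resp.json().get("models", [])]
print("Models available locally:", local_models)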

Additionally, for confidentiality reasons, I want to replace the default LLM API calls (which go to remote LLMs) with local LLMs via Ollama. I have set up the Ollama client locally (as shown above) and would like to know whether this setup replaces all external LLM API calls with local ones, wherever Giskard relies on an external LLM.
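
As an extra safeguard, I am also thinking of removing any hosted-provider API keys from the environment, so that if some code path still tries to call an external API it fails immediately instead of sending data out. A minimal sketch (the variable names are the standard provider keys, not anything Giskard-specific):

import os

# Drop hosted-provider credentials so accidental remote calls fail loudly.
for var in ("OPENAI_API_KEY", "AZURE_OPENAI_API_KEY", "ANTHROPIC_API_KEY"):
    os.environ.pop(var, None)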

Below are my questions:

  1. Once the Ollama client is set up, does it automatically replace OpenAI GPT-4 as the LLM-as-a-judge, or is there additional configuration required?
  2. Will the Ollama client setup replace all external API calls and use the local LLM instead? If not, are there additional configurations needed to ensure only local LLMs are used for all relevant tasks?

I know the answer to the second question will also address the first one, but I would still like to ask the first one specifically 😄
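
For context, this is roughly how I plan to run RAGET once the local models are configured. It is only a sketch based on my reading of the RAGET docs; the CSV file, column name, agent description, and my_chatbot answer function are placeholders for my actual setup:

import pandas as pd
import giskard
from giskard.rag import KnowledgeBase, generate_testset, evaluate

# Point Giskard at the local Ollama models (same setup as above).
api_base = "http://localhost:11434"
giskard.llm.set_llm_model("ollama/llama3.1", disable_structured_output=True, api_base=api_base)
giskard.llm.set_embedding_model("ollama/nomic-embed-text", api_base=api_base)

# Placeholder knowledge base; replace with the real documents our chatbot retrieves from.
df = pd.read_csv("knowledge_base.csv")
knowledge_base = KnowledgeBase.from_pandas(df, columns=["text"])

# Test-set generation should now go through the local LLM and embedding model.
testset = generate_testset(
    knowledge_base,
    num_questions=10,
    agent_description="A customer support chatbot",  # placeholder description
)

def answer_fn(question, history=None):
    # Placeholder: call our own RAG chatbot and return its answer as a string.
    return my_chatbot.answer(question)

# Evaluation (including the LLM-as-a-judge calls) should also use the configured local model.
report = evaluate(answer_fn, testset=testset, knowledge_base=knowledge_base)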

pds13193 added the question (Further information is requested) label on Jan 10, 2025