Replacing OpenAI GPT-4 with Ollama as LLM-as-a-Judge and Replacing External API Calls with Local LLMs in Giskard (RAGET Toolkit) #2096
Labels: question (Further information is requested)
❓ Question
I am currently using Giskard, specifically the RAGET toolkit, to evaluate our chatbot. By default, Giskard uses OpenAI's GPT-4 to judge our model's output. However, I would like to replace GPT-4 with an open-source LLM-as-a-judge served by Ollama. I have already set up the Ollama client with the code below (the snippet given in the Giskard documentation):
```python
import giskard

api_base = "http://localhost:11434"  # default api_base for a local Ollama server
giskard.llm.set_llm_model("ollama/llama3.1", disable_structured_output=True, api_base=api_base)
giskard.llm.set_embedding_model("ollama/nomic-embed-text", api_base=api_base)
```
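For context, here is roughly how I then run RAGET after the override, adapted from the RAGET quickstart in the Giskard docs. The knowledge-base contents and the body of `answer_fn` are placeholders for our internal documents and chatbot, so treat this as a sketch rather than my exact pipeline:

```python
import pandas as pd
from giskard.rag import KnowledgeBase, generate_testset, evaluate

# Toy knowledge base; in practice this is built from our internal documentation.
df = pd.DataFrame({"text": [
    "Our chatbot can track order status.",
    "Refunds are processed within 5 business days.",
]})
knowledge_base = KnowledgeBase.from_pandas(df, columns=["text"])

# With the override above, test-set generation should be driven by the
# local llama3.1 judge instead of GPT-4.
testset = generate_testset(
    knowledge_base,
    num_questions=10,
    agent_description="A customer-support chatbot",
)

def answer_fn(question, history=None):
    # Placeholder: replace with a call to the RAG chatbot under evaluation.
    return "stub answer"

report = evaluate(answer_fn, testset=testset, knowledge_base=knowledge_base)
```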
Additionally, for confidentiality reasons, I want to replace every default LLM API call (which would otherwise go to a remote LLM) with a local Ollama call. Having set up the Ollama client locally as shown above, I would like to know whether this setup replaces all external LLM API calls with local ones, everywhere Giskard relies on an external LLM.
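To fail fast if anything is misconfigured, I also run a quick sanity check against the local server before each evaluation. This assumes `/api/tags` is Ollama's model-listing endpoint and that the judge model has already been pulled:

```python
import requests

api_base = "http://localhost:11434"

# Confirm the local Ollama server is reachable and the judge model is
# installed, so a misconfiguration surfaces here rather than mid-evaluation.
resp = requests.get(f"{api_base}/api/tags", timeout=5)
resp.raise_for_status()
local_models = {m["name"] for m in resp.json()["models"]}  # e.g. "llama3.1:latest"
assert any(name.startswith("llama3.1") for name in local_models), local_models
```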
Below are my questions:

1. Does the setup above replace GPT-4 with the local Ollama model as the LLM-as-a-judge in the RAGET toolkit?
2. Does it replace all external LLM API calls with local Ollama calls, wherever Giskard relies on an external LLM?

I know the answer to the second question will also address the first one, but I would still like to ask the first one specifically 😄