docs/integrations/llms/google_vertex_ai_palm/ #28473
Replies: 1 comment
I’ve developed a GenAI-based application that extracts insights from video clips uploaded to Google Cloud Storage. It retrieves the files via gsutil and processes them with Gemini 1.5 Flash as the LLM inside a complete LangChain prompt-to-parser chain. For the past two days, however, the generated responses have not been relevant to the actual content of the video files. This started recently; the code had been working fine since early September. I suspect the problem lies in my code or in LangChain’s built-in functions: the file is not being passed to the LLM.
docs/integrations/llms/google_vertex_ai_palm/
You are currently on a page documenting the use of Google Vertex text completion models. Many Google models are chat completion models.
https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm/
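Since the linked page covers text completion models while Gemini 1.5 Flash is a chat model, one likely fix is to use the chat interface and reference the video by its `gs://` URI instead of downloading it with gsutil. A minimal sketch of the multimodal content parts such a message would carry — the bucket path, question text, and helper function below are illustrative assumptions, not the original poster's code:

```python
# Sketch: referencing a GCS-hosted video in a chat-style Gemini prompt,
# rather than fetching the file locally with gsutil.
# The bucket path and question below are hypothetical examples.

def build_video_prompt(gcs_uri: str, question: str) -> list[dict]:
    """Build the multimodal content parts for a single human message."""
    return [
        {"type": "text", "text": question},
        {
            "type": "media",           # media content block for chat models
            "mime_type": "video/mp4",  # tell the model how to decode the file
            "file_uri": gcs_uri,       # file stays in GCS; no local download
        },
    ]

parts = build_video_prompt(
    "gs://example-bucket/clip.mp4",  # hypothetical bucket path
    "Summarize the key events in this video clip.",
)

# With langchain-google-vertexai installed and GCP credentials configured,
# these parts would be sent roughly like this (not run here):
#   from langchain_google_vertexai import ChatVertexAI
#   from langchain_core.messages import HumanMessage
#   llm = ChatVertexAI(model_name="gemini-1.5-flash")
#   response = llm.invoke([HumanMessage(content=parts)])
print(parts[1]["file_uri"])  # prints gs://example-bucket/clip.mp4
```

If the chain was silently dropping the file, inspecting the assembled message content like this is a quick way to confirm whether the `file_uri` part actually reaches the model call.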