docs/tutorials/chatbot/ #27304
Replies: 9 comments 6 replies
-
Very helpful explanation.
-
Explanation made it look easy! Thanks to the community.
-
Hey guys, is this safe to run on my Mac desktop?
-
Loved the easy-to-understand explanation.
-
Suppose I have a FastAPI backend where this chatbot code lives, the frontend is in React, and the chatbot uses the app.stream method for response streaming. In this case, do I use a WebSocket to stream the AI response from the FastAPI backend to the React frontend?
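A WebSocket works for this, but for one-way token streaming Server-Sent Events are often enough. A minimal sketch, assuming `graph` is the compiled LangGraph app from the tutorial (the endpoint path, query parameters, and SSE framing are illustrative, not from the tutorial):

```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain_core.messages import HumanMessage

api = FastAPI()

@api.get("/chat")
async def chat(q: str, thread_id: str):
    config = {"configurable": {"thread_id": thread_id}}

    async def token_stream():
        # stream_mode="messages" yields (message_chunk, metadata) pairs
        async for chunk, _ in graph.astream(
            {"messages": [HumanMessage(q)]}, config, stream_mode="messages"
        ):
            yield f"data: {chunk.content}\n\n"  # SSE event framing

    return StreamingResponse(token_stream(), media_type="text/event-stream")
```

On the React side, an EventSource (or fetch with a ReadableStream) can consume this without a WebSocket.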
-
Followed your exact tutorial, but when making the conversion to async it throws: NotImplementedError: Unsupported message type: <class 'coroutine'>
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE
Below is the example of the code that throws it:
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph
from langchain_core.messages import HumanMessage
# Define a new graph
workflow = StateGraph(state_schema=MessagesState)
# Define the function that calls the model
async def call_model(state: MessagesState):
    response = model.ainvoke(state["messages"])
    return {"messages": response}
# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)
# Add memory
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
config = {"configurable": {"thread_id": "abc123"}}
query = "Hi! I'm Bob."
input_messages = [HumanMessage(query)]
output = await app.ainvoke({"messages": input_messages}, config)
output["messages"][-1].pretty_print() # output contains all messages in state |
-
Awesome tutorial!!! Loved it.
It was adding the message at index 0 of the message list, shifting the system message to index 1; this constantly gave me an error. To solve it, I had to change the trimmer configuration, and then it started working. I guess either the system message needs to be added separately, or the HumanMessage() needs to be appended to the messages in some other way. Please take a look and let me know.
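For context, LangChain's trim_messages helper has an include_system flag that keeps the leading SystemMessage in place while trimming; a minimal sketch (the token budget and counter values are illustrative):

```python
from langchain_core.messages import trim_messages

trimmer = trim_messages(
    max_tokens=65,         # illustrative token budget
    strategy="last",       # keep the most recent messages
    token_counter=model,   # count tokens with the chat model itself
    include_system=True,   # never trim away the leading SystemMessage
    allow_partial=False,
    start_on="human",      # trimmed history must start on a human turn
)
```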
-
Very good documentation, but I've encountered a small problem. In the last example of Streaming, based on the code above, I cannot achieve the effect shown in the document, i.e. streaming output. This is because when defining the model generation above, invoke was used instead of stream. As a result, during the subsequent generation, the output arrives in whole blocks, and the "|" only appears once, at the end.
# Create a state graph for managing conversation flow
workflow = StateGraph(state_schema=State)
model_OpenAI = ChatOpenAI(model="gpt-4o", api_key=os.getenv("OPENAI_API_KEY"))
def call_model(state: State):
    """Function to generate model responses

    Args:
        state (State): Current state containing message history and language settings

    Returns:
        dict: Dictionary containing model response messages
    """
    # Use trimmer to trim message history to stay within token limits
    trimmed_messages = trimmer.invoke(state["messages"])
    # Generate prompt using template with trimmed messages and language setting
    prompt = prompt_template.invoke(
        {"messages": trimmed_messages, "language": state["language"]}
    )
    # Call language model to generate response
    response = model_OpenAI.invoke(prompt, config={"callbacks": [langfuse_handler]})
    # Return model response wrapped in messages list
    return {"messages": [response]}
# Add workflow edge connecting START node to model node
workflow.add_edge(START, "model")
# Add model node and bind call_model handler function
workflow.add_node("model", call_model)
# Create memory saver for storing conversation state
memory = MemorySaver()
# Compile workflow with checkpointer for state checkpoints
app = workflow.compile(checkpointer=memory)
config = {"configurable": {"thread_id": "abc789"}}
query = "What company are you a big model for?"
language = "English"
input_messages = [HumanMessage(query)]
for chunk, metadata in app.stream(
    {"messages": input_messages, "language": language},
    config,
    stream_mode="messages",
):
    if isinstance(chunk, AIMessage):  # Filter to just model responses
        print(chunk.content, end="|")
The output is:
> I am a language model created by OpenAI.|
But if I switch to:
# Call language model to generate response
response = model_OpenAI.stream(prompt, config={"callbacks": [langfuse_handler]})
then it fails with:
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[119], line 6
3 language = "English"
5 input_messages = [HumanMessage(query)]
----> 6 for chunk, metadata in app.stream(
7 {"messages": input_messages, "language": language},
8 config,
9 stream_mode="messages",
10 ):
11 if isinstance(chunk, AIMessage): # Filter to just model responses
12 print(chunk.content, end="|")
File ~/Library/Caches/pypoetry/virtualenvs/9091-ell-6QbIK8Ul-py3.10/lib/python3.10/site-packages/langgraph/pregel/__init__.py:1655, in Pregel.stream(self, input, config, stream_mode, output_keys, interrupt_before, interrupt_after, debug, subgraphs)
1649 get_waiter = None # type: ignore[assignment]
1650 # Similarly to Bulk Synchronous Parallel / Pregel model
1651 # computation proceeds in steps, while there are channel updates
1652 # channel updates from step N are only visible in step N+1
1653 # channels are guaranteed to be immutable for the duration of the step,
1654 # with channel updates applied only at the transition between steps
-> 1655 while loop.tick(input_keys=self.input_channels):
1656 for _ in runner.tick(
1657 loop.tasks.values(),
1658 timeout=self.step_timeout,
(...)
...
--> 333 raise NotImplementedError(msg)
335 return _message
NotImplementedError: Unsupported message type: <class 'generator'>
For troubleshooting, visit: https://python.langchain.com/docs/troubleshooting/errors/MESSAGE_COERCION_FAILURE
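A plausible reading of the traceback (an assumption, not a confirmed answer): model_OpenAI.stream(...) returns a generator, and putting that generator into "messages" fails message coercion. With stream_mode="messages", LangGraph surfaces the model's tokens through callbacks even when the node calls .invoke, so the node can stay as in the tutorial; whether tokens then arrive one by one depends on the model integration and its callbacks. A sketch of the node left on .invoke:

```python
def call_model(state: State):
    trimmed_messages = trimmer.invoke(state["messages"])
    prompt = prompt_template.invoke(
        {"messages": trimmed_messages, "language": state["language"]}
    )
    # Keep .invoke here: returning model_OpenAI.stream(...) hands the graph
    # a generator, which cannot be coerced into a message. Token-level
    # streaming comes from stream_mode="messages" on app.stream instead.
    response = model_OpenAI.invoke(prompt)
    return {"messages": [response]}
```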
-
This is my environment: the model is Qwen2.5, deployed through vLLM. When using the examples in the tutorial, the following error is reported. Can anyone who has encountered this problem help me solve it?
model = ChatOpenAI(model="Qwen2.5-32B-Instruct-AWQ",
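The error itself isn't included above, but a common way to point ChatOpenAI at vLLM's OpenAI-compatible server looks like this (the base_url and api_key values are placeholders for a local deployment):

```python
from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="Qwen2.5-32B-Instruct-AWQ",
    base_url="http://localhost:8000/v1",  # placeholder: vLLM OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM typically accepts any placeholder key
)
```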
-
docs/tutorials/chatbot/
This guide assumes familiarity with the following concepts:
https://python.langchain.com/docs/tutorials/chatbot/