langgraph checkpoint #29
Very nice work on the langgraph agents! One thing I noticed is that the checkpoint doesn't seem to work as expected: we lose the context between multiple questions. Does anybody else have the same problem?
Hi @wenliangz, thank you! I just want to make sure: are you sending the same thread_id for subsequent requests and keeping the server/container alive? I haven't had trouble with this recently (the deployed agent linked in the README maintains context within the same chat, and it uses the checkpoint mechanism). Feel free to post more info about your setup or repro steps if useful, and I'm happy to see if I can help.
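To illustrate why the same thread_id matters, here's a minimal sketch in plain Python (no LangGraph dependency — the `Checkpointer` class and tuple-based message format are simplified stand-ins, not the library's actual API) of how a checkpointer keys saved state by thread_id: reusing an id restores earlier messages, while a new id starts with an empty history.

```python
# Simplified stand-in for a checkpoint mechanism: state is persisted
# per thread_id, so reusing an id restores prior messages.
class Checkpointer:
    def __init__(self):
        self._store = {}  # thread_id -> list of messages

    def load(self, thread_id):
        # Pre-populate state["messages"] from the saved checkpoint, if any.
        return {"messages": list(self._store.get(thread_id, []))}

    def save(self, thread_id, state):
        self._store[thread_id] = list(state["messages"])

def run_turn(checkpointer, thread_id, user_input):
    state = checkpointer.load(thread_id)        # restore earlier context
    state["messages"].append(("user", user_input))
    reply = ("ai", f"echo: {user_input}")       # stand-in for the LLM call
    state["messages"].append(reply)
    checkpointer.save(thread_id, state)         # persist for the next turn
    return state["messages"]

cp = Checkpointer()
first = run_turn(cp, "thread-1", "hello")
second = run_turn(cp, "thread-1", "still there?")  # same id: history grows
fresh = run_turn(cp, "thread-2", "new chat")       # new id: empty history
```

The takeaway is that context survives only if every request carries the same thread_id and the process holding the store stays alive — which is why those are the first two things to check.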
Good question - the code you referenced (from here) is used to send a conversation summary to LlamaGuard only for content moderation BEFORE the actual agent evaluation, so it shouldn't be impacting the result. Arguably, it's overkill, and it would be fine to just send the last message here.
For the "main" agent evaluation, note that conversation history is stored in `state["messages"]`, and when you use an existing thread_id, this value is pre-populated with the earlier messages. You are still responsible for taking those messages and sending them to the LLM - this is how LangGraph works, not specific to this repo. In my example agent, that part happens here.
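To make that responsibility concrete, here's a hedged sketch of what an agent node might look like — `agent_node` and `call_llm` are hypothetical names invented for illustration (the stub just counts messages rather than calling a real model), not the actual code from this repo:

```python
# Sketch of a "main" agent node: the framework pre-populates
# state["messages"] for an existing thread_id, but the node itself
# must forward that full history to the model.
def call_llm(messages):
    # Stand-in for a real chat-model call; a real node would invoke
    # the model with the complete message list.
    return ("ai", f"responding with {len(messages)} messages of context")

def agent_node(state):
    # Send the WHOLE history, not just the latest message - otherwise
    # the model sees no prior context even though the checkpoint has it.
    history = state["messages"]
    reply = call_llm(history)
    return {"messages": history + [reply]}

state = {
    "messages": [
        ("user", "hi"),
        ("ai", "hello!"),
        ("user", "what did I say?"),
    ]
}
result = agent_node(state)
```

If a node only ever passed `state["messages"][-1]` to the model, the checkpoint would still be populated correctly, yet the agent would appear to "forget" earlier turns — which matches the symptom described in the question.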