Description
This issue seems to occur when outputs from tool functions are mapped by the LLMAgent's chat method, specifically while generating a chat template for the LLM and maintaining chat history between tool function calls.
As far as I can tell, the issue only occurs when using Gemini with LLMAgent.
For further reference, I found a possibly relevant discussion that has already been merged: in a comment on PR #822, parhammmm states that an issue occurred where "role = assistant was being mapped to Gemini's user, causing the entire history to be merged into one".
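To make that concrete, here is a minimal sketch of the history shape that comment describes; the role mapping and message contents are my assumption for illustration, not the actual LlamaIndex internals:

```ts
// Hypothetical Gemini-side history after a faulty mapping: if LlamaIndex's
// "assistant" role were translated to Gemini's "user" instead of "model",
// every turn would carry role "user", and Gemini would see the whole
// history collapsed into one merged user turn.
const mergedHistory = [
  { role: "user", parts: [{ text: "Which tables are in the database?" }] },
  // Should be role "model"; mapping it to "user" merges it with the turn above.
  { role: "user", parts: [{ text: "The tables are: online_charity, donors" }] },
];
```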
To Reproduce
A basic example involves using the following tools:
```ts
import { FunctionTool, type JSONValue } from "llamaindex";

// pool: DB connection pool configured using mysql2
// db_name: name of the target database
const viewAllTables = FunctionTool.from(
  async ({ dummy }: { dummy: string }): Promise<JSONValue> => {
    const query = "SHOW TABLES";
    return pool
      .promise()
      .query(query)
      .then(([rows]) => {
        if (Array.isArray(rows)) {
          const listString: string = rows
            .map((row: any): string => row[`Tables_in_${db_name}`] || "")
            .join(", ");
          return listString;
        } else {
          throw new Error("rows not in correct format");
        }
      })
      .catch((error: unknown) => {
        if (error instanceof Error) {
          return error.message;
        }
        return "Unknown Error Occurred";
      });
  },
  {
    name: "viewAllTables",
    description: "Use this function to get table names from database.",
    parameters: {
      type: "object",
      properties: {
        dummy: {
          type: "string",
          description: "This is a placeholder parameter and will be ignored",
        },
      },
      required: ["dummy"],
    },
  },
);

const applyQuery = FunctionTool.from(
  async ({ query }: { query: string }): Promise<JSONValue> => {
    console.log({ query });
    return pool
      .promise()
      .query(query)
      .then(([rows, fields]) => {
        console.log({ rows });
        console.log({ fields });
        console.log({ stringified: JSON.stringify({ rows, fields }) });
        return `1500`;
      })
      .catch((error: unknown) => {
        console.log(error);
        if (error instanceof Error) {
          return error.message;
        }
        return "Unknown Error Occurred";
      });
  },
  {
    name: "applyQuery",
    description:
      "Use this function to apply any SQL query on the pre-defined database.",
    parameters: {
      type: "object",
      properties: {
        query: {
          type: "string",
          description: "Enter Syntax-Correct SQL Query.",
        },
      },
      required: ["query"],
    },
  },
);
```
Next, I simply used the tools and Gemini API as per the docs:
```ts
import { Gemini, GEMINI_MODEL, GeminiSession, LLMAgent } from "llamaindex";

const gemini = new Gemini({
  model: GEMINI_MODEL.GEMINI_PRO_1_5_FLASH_LATEST,
  session: new GeminiSession({ apiKey: process.env.GOOGLE_API_KEY }),
});

const agent = new LLMAgent({
  llm: gemini,
  tools: [viewAllTables, applyQuery],
});

// chat with agent:
const response = await agent.chat({
  message:
    "Which tables are in the database? Then count the rows of online_charity table",
});
// error handling below ...
```
Expected behaviour
A message should be provided by the Agent, answering the relevant query, or giving feedback if it was not able to find sufficient information from the tools. Every time chat is invoked, the message output should be correctly present in the response.
Detailed Error
```
GoogleGenerativeAIError: [GoogleGenerativeAIError]: Content with role 'user' can't contain 'functionResponse' part
    at validateChatHistory (../node_modules/llamaindex/node_modules/@google/generative-ai/dist/index.js:1047:23)
    at new ChatSession (../node_modules/llamaindex/node_modules/@google/generative-ai/dist/index.js:1089:13)
    at GenerativeModel.startChat (/node_modules/llamaindex/node_modules/@google/generative-ai/dist/index.js:1337:16)
    at Gemini.nonStreamChat (/node_modules/llamaindex/dist/cjs/llm/gemini/base.js:570:29)
    at Gemini.chat (/node_modules/llamaindex/dist/cjs/llm/gemini/base.js:617:21)
    at Gemini.withLLMEvent (/node_modules/@llamaindex/core/decorator/dist/index.cjs:75:47)
    at defaultTaskHandler (/node_modules/@llamaindex/core/agent/dist/index.cjs:738:44)
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Object.pull (//node_modules/@llamaindex/core/agent/dist/index.cjs:634:13)
```
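The stack trace points at @google/generative-ai's history validation, and the same error can be triggered against the SDK directly. This is only a sketch under my assumption of how the agent lays out the tool output; the history contents are invented:

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash-latest" });

// startChat() runs validateChatHistory() and throws the same
// GoogleGenerativeAIError, because a 'functionResponse' part is only
// accepted under Gemini's 'function' role, not 'user'.
model.startChat({
  history: [
    { role: "user", parts: [{ text: "Which tables are in the database?" }] },
    {
      role: "model",
      parts: [{ functionCall: { name: "viewAllTables", args: { dummy: "" } } }],
    },
    {
      // Invalid: the tool output mapped to 'user' instead of 'function'.
      role: "user",
      parts: [
        {
          functionResponse: {
            name: "viewAllTables",
            response: { result: "online_charity" },
          },
        },
      ],
    },
  ],
});
```

If this matches what LLMAgent sends, the fix would presumably be to map tool outputs to Gemini's 'function' role rather than 'user'.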
Environment

- OS: macOS
- Llama-Index TS / Express (Node) TS
- Node.js version: 20.11.0
- llamaindex version in package.json: ^0.8.23
Additional Context
This issue occurs often, but not always, which leads me to believe that the chat history is not being consistently formatted into a chat template with the correct roles for Gemini.
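In the meantime, a workaround I am experimenting with is to retry with a fresh agent (and therefore a fresh chat history) when this specific error appears. chatWithRetry is a hypothetical helper of my own, reusing the gemini and tool variables from the reproduction above, not a fix for the underlying role mapping:

```ts
// Hypothetical workaround: retry with a brand-new LLMAgent so the
// rejected history is discarded rather than re-sent.
async function chatWithRetry(message: string, attempts = 3) {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    const agent = new LLMAgent({
      llm: gemini,
      tools: [viewAllTables, applyQuery],
    });
    try {
      return await agent.chat({ message });
    } catch (error) {
      // Only retry on Gemini's history-validation failure.
      if (
        !(error instanceof Error) ||
        !error.message.includes("functionResponse")
      ) {
        throw error;
      }
      lastError = error;
    }
  }
  throw lastError;
}
```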