Hi,
I wonder if anyone here has encountered a similar problem or can give a hint in the direction of a solution:
For an agent workflow, I wanted to define a Perplexity skill, i.e. give a research agent the skill to use the Perplexity API to obtain answers to user questions. This led to the following baffling result:
One can simulate Perplexity's answer to an API call on their API documentation website: https://docs.perplexity.ai/api-reference/chat-completions
The following code:
import requests

url = "https://api.perplexity.ai/chat/completions"

payload = {
    "model": "llama-3.1-sonar-large-128k-online",
    "messages": [
        {
            "role": "system",
            "content": "Be precise and concise."
        },
        {
            "role": "user",
            "content": "When were town privileges granted to the village of Düsseldorf and why ?"
        }
    ],
    "return_citations": True,
    "return_related_questions": True
}

headers = {
    "Authorization": "Bearer pplx-whatever_PERPLEXITY_API_KEY",
    "Content-Type": "application/json"
}

response = requests.request("POST", url, json=payload, headers=headers)
print(response.text)
produced this API response:
{
    "id": "d1988dfd-e67b-4ef6-9ac8-4209b88a9628",
    "model": "llama-3.1-sonar-large-128k-online",
    "created": 1725820758,
    "usage": {
        "prompt_tokens": 21,
        "completion_tokens": 77,
        "total_tokens": 98
    },
    "object": "chat.completion",
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {
                "role": "assistant",
                "content": "Town privileges were granted to the village of D\u00fcsseldorf on **August 14, 1288**. This was a result of the victory of Count Adolf VIII of Berg over the Archbishop of Cologne and his allies in the Battle of Worringen. The Count's victory allowed him to award town privileges to D\u00fcsseldorf, elevating it to city status."
            },
            "delta": {
                "role": "assistant",
                "content": ""
            }
        }
    ]
}
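For reference, based on the response structure shown above, the answer text can be pulled out of the parsed response like this (data is simply the dictionary returned by response.json()):

data = response.json()  # parse the JSON body shown above
answer = data["choices"][0]["message"]["content"]
print(answer)  # "Town privileges were granted to the village of Düsseldorf on **August 14, 1288** ..."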
Which means: Perplexity's answer to the API call is factually correct and spot-on.
However, when defining this API call as a skill for an AutoGen agent and calling it with the exact same question, the answer is a prime example of catastrophic hallucination.
The skill was defined to mirror the API call exactly:
import requests

def perplexity_chat_completion(query: str) -> dict:
    """
    Function to have the agent use the Perplexity API to obtain information
    regarding a specific search query via chat completion.

    Args:
        query (str): The search term the agent wants to chat about.

    Returns:
        dict: The search results returned by the Perplexity API as a Python dictionary.
    """
    base_url = "https://api.perplexity.ai/chat/completions"
    payload = {
        "model": "llama-3.1-sonar-large-128k-online",
        "messages": [
            {
                "role": "system",
                "content": "Be precise and concise."
            },
            {
                "role": "user",
                "content": query
            }
        ],
        "temperature": 0,
        "return_images": False,
    }
    headers = {
        "Authorization": "Bearer pplx-whatever_PERPLEXITY_API_KEY",  # Replace with your actual API key
        "Content-Type": "application/json"
    }
    try:
        # Make the request to the Perplexity API
        response = requests.request("POST", base_url, json=payload, headers=headers)
        # Check if the request was successful
        if response.status_code == 200:
            return response.json()  # Return the JSON response as a dictionary
        else:
            response.raise_for_status()  # Raise an error for unsuccessful requests
    except requests.exceptions.RequestException as e:
        print(f"An error occurred: {e}")
        return {}
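For readers who want to reproduce the setup: a function like this is typically exposed to an AutoGen agent pair roughly as follows. This is only a minimal sketch assuming the pyautogen 0.2 register_function API; the agent names, system message and llm_config are placeholders, not necessarily my exact configuration.

import autogen

# Placeholder LLM configuration for the agents themselves (not for the Perplexity call).
llm_config = {"config_list": [{"model": "gpt-4o", "api_key": "YOUR_OPENAI_API_KEY"}]}

researcher = autogen.AssistantAgent(
    name="researcher",
    system_message="Answer user questions. Use perplexity_chat_completion for web research.",
    llm_config=llm_config,
)

executor = autogen.UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",
    code_execution_config=False,
)

# Expose the skill: the researcher may suggest the tool call, the executor runs it.
autogen.register_function(
    perplexity_chat_completion,
    caller=researcher,
    executor=executor,
    name="perplexity_chat_completion",
    description="Query the Perplexity API for up-to-date factual information.",
)

executor.initiate_chat(
    researcher,
    message="When were town privileges granted to the village of Düsseldorf and why ?",
)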
Using this skill with the exact same question, however, led to the following catastrophically wrong output. You can see from the screenshots that EXACTLY the same user query string was used as in the direct API call: "When were town privileges granted to the village of Düsseldorf and why ?"
Is there any explanation for this phenomenon, where the individual API call to perplexity.ai works perfectly, but the same API call integrated into an agent skill returns hallucinated data?