Phidata is a toolkit for building AI Assistants using function calling.
Function calling enables LLMs to achieve tasks by calling functions and intelligently choosing their next step based on the response, just like how humans solve problems.
- Step 1: Create an Assistant
- Step 2: Add Tools (functions), Knowledge (vectordb) and Storage (database)
- Step 3: Serve using Streamlit, FastApi or Django to build your AI application
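The function-calling loop behind these steps can be sketched in plain Python. This is an illustrative mock, not phidata's API: `get_weather`, `TOOLS`, and `run_tool_call` are hypothetical names showing how a model-emitted tool call maps onto an ordinary function, whose result is fed back to the LLM for the next step.

```python
import json

# Toy tool: a plain function the LLM is allowed to call.
def get_weather(city: str) -> str:
    """Return a canned weather report as a JSON string."""
    return json.dumps({"city": city, "forecast": "sunny"})

# Registry mapping tool names to functions.
TOOLS = {"get_weather": get_weather}

def run_tool_call(tool_call: dict) -> str:
    """Dispatch a model-requested tool call to the matching Python function."""
    func = TOOLS[tool_call["name"]]
    return func(**tool_call["arguments"])

# The LLM would emit a structured call like this; we execute it and
# return the result so the model can decide its next step.
result = run_tool_call({"name": "get_weather", "arguments": {"city": "Paris"}})
print(result)
```

In a real Assistant the loop of emitting calls and reading results is driven by the LLM API; phidata wires your functions into that loop for you.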
pip install -U phidata
Create a file assistant.py and install openai using pip install openai
from phi.assistant import Assistant
assistant = Assistant(description="You help people with their health and fitness goals.")
assistant.print_response("Share a quick healthy breakfast recipe.", markdown=True)
Run the Assistant
python assistant.py
Let it search the web
from phi.assistant import Assistant
from phi.tools.duckduckgo import DuckDuckGo
assistant = Assistant(tools=[DuckDuckGo()], show_tool_calls=True)
assistant.print_response("What's happening in France?", markdown=True)
Install duckduckgo-search and run the Assistant
pip install duckduckgo-search
python assistant.py
The PythonAssistant can perform virtually any task using Python code.
Create a file python_assistant.py and install pandas using pip install pandas
from phi.assistant.python import PythonAssistant
from phi.file.local.csv import CsvFile
python_assistant = PythonAssistant(
    files=[
        CsvFile(
            path="https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            description="Contains information about movies from IMDB.",
        )
    ],
    pip_install=True,
    show_tool_calls=True,
)
python_assistant.print_response("What is the average rating of movies?", markdown=True)
Run the python_assistant.py file
python python_assistant.py
The DuckDbAssistant can perform data analysis using SQL.
Create a file data_assistant.py and install duckdb using pip install duckdb
import json
from phi.assistant.duckdb import DuckDbAssistant
duckdb_assistant = DuckDbAssistant(
    semantic_model=json.dumps({
        "tables": [
            {
                "name": "movies",
                "description": "Contains information about movies from IMDB.",
                "path": "https://phidata-public.s3.amazonaws.com/demo_data/IMDB-Movie-Data.csv",
            }
        ]
    }),
)
duckdb_assistant.print_response("What is the average rating of movies? Show me the SQL.", markdown=True)
Run the data_assistant.py file
python data_assistant.py
Check out these AI apps showcasing the advantage of function calling:
- PDF AI that summarizes and answers questions from PDFs.
- ArXiv AI that answers questions about ArXiv papers using the ArXiv API.
- HackerNews AI that interacts with the HN API to summarize stories, users, and trending topics.
- Demo Streamlit App serving a PDF, Image and Website Assistant (password: admin)
- Demo FastApi serving a PDF Assistant.
- Create a file api_assistant.py that can call the HackerNews API to get top stories.
import json
import httpx
from phi.assistant import Assistant
def get_top_hackernews_stories(num_stories: int = 10) -> str:
    """Use this function to get top stories from Hacker News.

    Args:
        num_stories (int): Number of stories to return. Defaults to 10.

    Returns:
        str: JSON string of top stories.
    """
    # Fetch top story IDs
    response = httpx.get('https://hacker-news.firebaseio.com/v0/topstories.json')
    story_ids = response.json()

    # Fetch story details
    stories = []
    for story_id in story_ids[:num_stories]:
        story_response = httpx.get(f'https://hacker-news.firebaseio.com/v0/item/{story_id}.json')
        story = story_response.json()
        if "text" in story:
            story.pop("text", None)
        stories.append(story)
    return json.dumps(stories)
assistant = Assistant(tools=[get_top_hackernews_stories], show_tool_calls=True)
assistant.print_response("Summarize the top stories on hackernews?", markdown=True)
- Run the api_assistant.py file
python api_assistant.py
- See it work through the problem
╭──────────┬───────────────────────────────────────────────────────────────────╮
│ Message │ Summarize the top stories on hackernews? │
├──────────┼───────────────────────────────────────────────────────────────────┤
│ Response │ │
│ (51.1s) │ • Running: get_top_hackernews_stories(num_stories=5) │
│ │ │
│ │ Here's a summary of the top stories on Hacker News: │
│ │ │
│ │ 1 Boeing Whistleblower: Max 9 Production Line Has "Enormous │
│ │ Volume of Defects" A whistleblower has revealed that Boeing's │
│ │ Max 9 production line is riddled with an "enormous volume of │
│ │ defects," with instances where bolts were not installed. The │
│ │ story has garnered attention with a score of 140. Read more │
│ │ 2 Arno A. Penzias, 90, Dies; Nobel Physicist Confirmed Big Bang │
│ │ Theory Arno A. Penzias, a Nobel Prize-winning physicist known │
│ │ for his work that confirmed the Big Bang Theory, has passed │
│ │ away at the age of 90. His contributions to science have been │
│ │ significant, leading to discussions and tributes in the │
│ │ scientific community. The news has a score of 207. Read more │
│ │ 3 Why the fuck are we templating YAML? (2019) This provocative │
│ │ article from 2019 questions the proliferation of YAML │
│ │ templating in software, sparking a larger conversation about │
│ │ the complexities and potential pitfalls of this practice. With │
│ │ a substantial score of 149, it remains a hot topic of debate. │
│ │ Read more │
│ │ 4 Forging signed commits on GitHub Researchers have discovered a │
│ │ method for forging signed commits on GitHub which is causing │
│ │ concern within the tech community about the implications for │
│ │ code security and integrity. The story has a current score of │
│ │ 94. Read more │
│ │ 5 Qdrant, the Vector Search Database, raised $28M in a Series A │
│ │ round Qdrant, a company specializing in vector search │
│ │ databases, has successfully raised $28 million in a Series A │
│ │ funding round. This financial milestone indicates growing │
│ │ interest and confidence in their technology. The story has │
│ │ attracted attention with a score of 55. Read more │
╰──────────┴───────────────────────────────────────────────────────────────────╯
One of our favorite features is generating structured data (i.e. a pydantic model) from sparse information.
This means we can use Assistants to return pydantic models and generate structured content in ways that previously weren't possible.
In this example, our movie assistant generates an object of the MovieScript class.
- Create a file pydantic_assistant.py
from typing import List
from pydantic import BaseModel, Field
from rich.pretty import pprint
from phi.assistant import Assistant
class MovieScript(BaseModel):
    setting: str = Field(..., description="Provide a nice setting for a blockbuster movie.")
    ending: str = Field(..., description="Ending of the movie. If not available, provide a happy ending.")
    genre: str = Field(..., description="Genre of the movie. If not available, select action, thriller or romantic comedy.")
    name: str = Field(..., description="Give a name to this movie")
    characters: List[str] = Field(..., description="Name of characters for this movie.")
    storyline: str = Field(..., description="3 sentence storyline for the movie. Make it exciting!")

movie_assistant = Assistant(
    description="You help people write movie ideas.",
    output_model=MovieScript,
)
pprint(movie_assistant.run("New York"))
- Run the pydantic_assistant.py file
python pydantic_assistant.py
- See how the assistant generates a structured output
MovieScript(
│ setting='A bustling and vibrant New York City',
│ ending='The protagonist saves the city and reconciles with their estranged family.',
│ genre='action',
│ name='City Pulse',
│ characters=['Alex Mercer', 'Nina Castillo', 'Detective Mike Johnson'],
│ storyline='In the heart of New York City, a former cop turned vigilante, Alex Mercer, teams up with a street-smart activist, Nina Castillo, to take down a corrupt political figure who threatens to destroy the city. As they navigate through the intricate web of power and deception, they uncover shocking truths that push them to the brink of their abilities. With time running out, they must race against the clock to save New York and confront their own demons.'
)
Let's create a PDF Assistant that can answer questions from a PDF. We'll use PgVector for knowledge and storage.
Knowledge Base: information that the Assistant can search to improve its responses (uses a vector db).
Storage: provides long term memory for Assistants (uses a database).
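Under the hood, a knowledge base search is a nearest-neighbor lookup over embeddings. The sketch below is purely illustrative (it is not phidata or PgVector code): a toy word-count vector stands in for a real embedding model, and cosine similarity picks the closest document.

```python
import math
from collections import Counter

# Toy "knowledge base": a couple of documents we can search over.
DOCS = [
    "Pad thai is a stir-fried rice noodle dish.",
    "Green curry uses coconut milk and green chilies.",
]

def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector (a real system uses an embedding model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str) -> str:
    """Return the document most similar to the query."""
    return max(DOCS, key=lambda d: cosine(embed(query), embed(d)))

print(search("how do I make pad thai noodles?"))
```

PgVector does the same thing at scale: documents are embedded into vectors, stored in Postgres, and retrieved by similarity so the Assistant can ground its responses.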
- Run PgVector
- Install Docker Desktop for running PgVector in a container.
- Create a file resources.py with the following contents
from phi.docker.app.postgres import PgVectorDb
from phi.docker.resources import DockerResources
# -*- PgVector running on port 5432:5432
vector_db = PgVectorDb(
    pg_user="ai",
    pg_password="ai",
    pg_database="ai",
    debug_mode=True,
)
# -*- DockerResources
dev_docker_resources = DockerResources(apps=[vector_db])
- Start PgVector using
phi start resources.py -y
- Create PDF Assistant
- Create a file pdf_assistant.py
import typer
from rich.prompt import Prompt
from typing import Optional, List
from phi.assistant import Assistant
from phi.storage.assistant.postgres import PgAssistantStorage
from phi.knowledge.pdf import PDFUrlKnowledgeBase
from phi.vectordb.pgvector import PgVector2
from resources import vector_db
knowledge_base = PDFUrlKnowledgeBase(
    urls=["https://phi-public.s3.amazonaws.com/recipes/ThaiRecipes.pdf"],
    vector_db=PgVector2(
        collection="recipes",
        db_url=vector_db.get_db_connection_local(),
    ),
)
# Comment out after first run
knowledge_base.load(recreate=False)

storage = PgAssistantStorage(
    table_name="pdf_assistant",
    db_url=vector_db.get_db_connection_local(),
)

def pdf_assistant(new: bool = False, user: str = "user"):
    run_id: Optional[str] = None

    if not new:
        existing_run_ids: List[str] = storage.get_all_run_ids(user)
        if len(existing_run_ids) > 0:
            run_id = existing_run_ids[0]

    assistant = Assistant(
        run_id=run_id,
        user_id=user,
        knowledge_base=knowledge_base,
        storage=storage,
        # use_tools=True adds functions to
        # search the knowledge base and chat history
        use_tools=True,
        show_tool_calls=True,
        # Uncomment the following line to use traditional RAG
        # add_references_to_prompt=True,
    )
    if run_id is None:
        run_id = assistant.run_id
        print(f"Started Run: {run_id}\n")
    else:
        print(f"Continuing Run: {run_id}\n")

    while True:
        message = Prompt.ask(f"[bold] :sunglasses: {user} [/bold]")
        if message in ("exit", "bye"):
            break
        assistant.print_response(message, markdown=True)

if __name__ == "__main__":
    typer.run(pdf_assistant)
- Install libraries
pip install -U pgvector pypdf psycopg sqlalchemy
- Run PDF Assistant
python pdf_assistant.py
- Ask a question:
How do I make pad thai?
- See how the Assistant searches the knowledge base and returns a response.
Started Run: d28478ea-75ed-4710-8191-22564ebfb140
INFO Loading knowledge base
INFO Reading:
https://www.family-action.org.uk/content/uploads/2019/07/meals-more-recipes.pdf
INFO Loaded 82 documents to knowledge base
😎 user : How do I make chicken tikka salad?
╭──────────┬─────────────────────────────────────────────────────────────────────────────────╮
│ Message │ How do I make chicken tikka salad? │
├──────────┼─────────────────────────────────────────────────────────────────────────────────┤
│ Response │ │
│ (7.2s) │ • Running: search_knowledge_base(query=chicken tikka salad) │
│ │ │
│ │ I found a recipe for Chicken Tikka Salad that serves 2. Here are the │
│ │ ingredients and steps: │
│ │ │
│ │ Ingredients: │
...
- Message bye to exit, then start the assistant again using python pdf_assistant.py and ask:
What was my last message?
See how the assistant now maintains storage across sessions.
- Run the pdf_assistant.py file with the --new flag to start a new run.
python pdf_assistant.py --new
- Stop PgVector
Play around and then stop PgVector using:
phi stop resources.py -y
Let's build an AI App using GPT-4 as the LLM, Streamlit as the chat interface, FastApi as the API and PgVector for knowledge and storage. Read the full tutorial here.
Create your codebase using the ai-app template
phi ws create -t ai-app -n ai-app
This will create a folder ai-app with a pre-built AI App that you can customize and make your own.
Streamlit allows us to build micro front-ends and is extremely useful for building basic applications in pure Python. Start the app group using:
phi ws up --group app
Press Enter to confirm and give a few minutes for the image to download.
- Open localhost:8501 to view streamlit apps that you can customize and make your own.
- Click on PDF Assistant in the sidebar
- Enter a username and wait for the knowledge base to load.
- Choose either the RAG or Autonomous Assistant type.
- Ask "How do I make pad thai?"
- Upload PDFs and ask questions
We provide a default PDF of ThaiRecipes that you can clear using the Clear Knowledge Base button. The PDF is only for testing.
Streamlit is great for building micro front-ends, but any production application will be built using a front-end framework like next.js, backed by a REST API built using a framework like FastApi.
Your AI App comes ready-to-use with FastApi endpoints.
- Update the workspace/settings.py file and set dev_api_enabled=True
...
ws_settings = WorkspaceSettings(
    ...
    # Uncomment the following line
    dev_api_enabled=True,
    ...
- Start the api group using:
phi ws up --group api
Press Enter to confirm and give a few minutes for the image to download.
- View API Endpoints
- Open localhost:8000/docs to view the API Endpoints.
- Load the knowledge base using /v1/assitants/load-knowledge-base
- Test the /v1/assitants/chat endpoint with {"message": "How do I make chicken curry?"}
- The API comes pre-built with endpoints that you can integrate with your front-end.
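Any HTTP client can hit these endpoints. The helper below is hypothetical (not part of phidata) and only shows the request shape for the chat endpoint, assuming the api group is running on localhost:8000:

```python
import json
import urllib.request

# Hypothetical helper (not part of phidata): build the POST request
# for the chat endpoint exposed by the api group.
def build_chat_request(message: str, base_url: str = "http://localhost:8000") -> urllib.request.Request:
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/assitants/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("How do I make chicken curry?")
print(req.full_url)

# Sending it requires the api group to be running:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```

The same request shape works from any language or from curl; the front-end simply POSTs a JSON body with a message field.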
A Jupyter notebook is a must-have for AI development, and your ai-app comes with a notebook pre-installed with the required dependencies. Enable it by updating the workspace/settings.py file:
...
ws_settings = WorkspaceSettings(
    ...
    # Uncomment the following line
    dev_jupyter_enabled=True,
    ...
Start jupyter using:
phi ws up --group jupyter
Press Enter to confirm and give a few minutes for the image to download (only the first time). Verify container status and view logs on the docker dashboard.
- Open localhost:8888 to view the JupyterLab UI. Password: admin
- Play around with cookbooks in the notebooks folder.
- Delete local resources
Play around and stop the workspace using:
phi ws down
Read how to run your AI App on AWS.
- You can find the full documentation here
- You can also chat with us on discord
- Or email us at [email protected]
After building an Assistant, serve it using Streamlit, FastApi or Django to build your AI application. Instead of wiring tools manually, phidata provides pre-built templates for AI Apps that you can run locally or deploy to AWS with 1 command. Here's how they work:
- Create your AI App using a template:
phi ws create
- Run your app locally:
phi ws up
- Run your app on AWS:
phi ws up prd:aws
We've helped many companies build AI for their products; the general workflow is:
- Train an assistant with proprietary data to perform tasks specific to your product.
- Connect your product to the assistant via an API.
- Customize, Monitor and Improve the AI.
We provide dedicated support and development for AI products. Book a call to get started.
We're an open-source project and welcome contributions, please read the contributing guide for more information.
- If you have a feature request, please open an issue or make a pull request.
- If you have ideas on how we can improve, please create a discussion.
Our roadmap is available here.