Added Llama v2 Model to Textbase with Replicate API, created example bot using Llama, and updated Docs accordingly #86

Status: Open. Wants to merge 44 commits into base: main.

Commits (44):
- 6e3f2d3 Added support for google palm API (jahanvir, Sep 2, 2023)
- 3072aea added palm API key in .env.example (jahanvir, Sep 2, 2023)
- 9797c82 added Google Palm AI example (jahanvir, Sep 2, 2023)
- b7debab Added Meta's Llama 2 API to textbase, updated docs accordingly (Ayan-Banerjee-918, Sep 3, 2023)
- dd8e110 Updated Readme and docs (Ayan-Banerjee-918, Sep 3, 2023)
- 428b3f6 Removed replicate API key (Ayan-Banerjee-918, Sep 3, 2023)
- bc28bcf Updated docs (Ayan-Banerjee-918, Sep 3, 2023)
- 3e2a688 Make path absolute instead of cwd (sammyCofactory, Sep 3, 2023)
- f6ba210 Merge pull request #88 from cofactoryai/make-path-absolute (sammyCofactory, Sep 3, 2023)
- cb565ae Merge branch 'main' of https://github.com/cofactoryai/textbase into c… (jahanvir, Sep 3, 2023)
- 6ddaec7 merged changes (jahanvir, Sep 3, 2023)
- 1ff289d Merge branch 'cofactoryai-main' (jahanvir, Sep 3, 2023)
- f6d73d9 add mac zip software to docs (#96) (kaustubh-cf, Sep 3, 2023)
- a88e18e added script to zip the requirement and main file for deploy. (jahanvir, Sep 3, 2023)
- 5cb2eb4 Delete zip.py (jahanvir, Sep 3, 2023)
- 056edb1 added zip function in textbase cli tool (jahanvir, Sep 3, 2023)
- 877fd15 Update llama-bot.md (Ayan-Banerjee-918, Sep 3, 2023)
- 9e431ae Update main.py (Ayan-Banerjee-918, Sep 3, 2023)
- 2caa2a0 Custom port functionality (#107) (thenakulgupta-backup, Sep 4, 2023)
- 8d80bdc Update README: Fix Broken Link (#142) (Haleshot, Sep 5, 2023)
- 658bf00 Fix typo in README.md (#141) (eltociear, Sep 5, 2023)
- 52506a9 Error handling (#104) (thenakulgupta-backup, Sep 6, 2023)
- 854102c Added file check such that deploy.zip won't be created if either of m… (jahanvir, Sep 6, 2023)
- c6e4b58 Merge branch 'main' of https://github.com/cofactoryai/textbase into p… (jahanvir, Sep 6, 2023)
- 995812c resolved merge conflicts (jahanvir, Sep 6, 2023)
- b0bf5fb removed unused code. (jahanvir, Sep 6, 2023)
- b0e0323 Merge branch 'main' of https://github.com/cofactoryai/textbase (jahanvir, Sep 6, 2023)
- 28316fa added documentation for paLM AI (jahanvir, Sep 6, 2023)
- 891bdb6 Replaced the extract_user_content_values with extract_content_values (jahanvir, Sep 6, 2023)
- c6c7d73 Merge pull request #70 from jahanvir/main (sammyCofactory, Sep 7, 2023)
- e54e0ad modified compress command to take directory path, check if main.py an… (jahanvir, Sep 7, 2023)
- ffaab83 added the condition to avoid deploy.zip being created inside deploy.zip (jahanvir, Sep 7, 2023)
- 58fa7f3 piping out logs in the cli (#143) (kaustubh-cf, Sep 7, 2023)
- f15c0a5 Merge branch 'main' into package-zip (sammyCofactory, Sep 8, 2023)
- ba237aa Merge pull request #102 from jahanvir/package-zip (sammyCofactory, Sep 8, 2023)
- 2ac3f35 Updated repo to latest build (Ayan-Banerjee-918, Sep 8, 2023)
- a021720 Added Meta's Llama 2 API to textbase, updated docs accordingly (Ayan-Banerjee-918, Sep 3, 2023)
- d80de50 Updated Readme and docs (Ayan-Banerjee-918, Sep 3, 2023)
- 71171fc Removed replicate API key (Ayan-Banerjee-918, Sep 3, 2023)
- 03fda2c Updated docs (Ayan-Banerjee-918, Sep 3, 2023)
- 1d54f5b Update llama-bot.md (Ayan-Banerjee-918, Sep 3, 2023)
- 8de7761 Update main.py (Ayan-Banerjee-918, Sep 3, 2023)
- e1d3ae9 Updated repo to latest build (Ayan-Banerjee-918, Sep 8, 2023)
- b290921 Merge branch 'main' of https://github.com/Ayan-Banerjee-918/textbase (Ayan-Banerjee-918, Sep 8, 2023)
Files changed:
3 changes: 3 additions & 0 deletions .env.example
@@ -0,0 +1,3 @@
OPENAI_API_KEY=
HUGGINGFACEHUB_API_TOKEN=
PALM_API_KEY=
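For reference, these variables are typically read back at runtime with `os.getenv`, as the PaLM example later in this diff does; a minimal sketch:

```py
import os

# Sketch: read the keys listed in .env.example from the process environment.
openai_key = os.getenv("OPENAI_API_KEY")
hf_token = os.getenv("HUGGINGFACEHUB_API_TOKEN")
palm_key = os.getenv("PALM_API_KEY")
```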
3 changes: 2 additions & 1 deletion .gitignore
@@ -125,4 +125,5 @@ cpcli-env/
dmypy.json

# Pyre type checker
.pyre/
.pyre/
.DS_STORE
8 changes: 4 additions & 4 deletions README.md
@@ -26,9 +26,9 @@ Since it is just Python you can use whatever models, libraries, vector databases

Coming soon:
- [x] [PyPI package](https://pypi.org/project/textbase-client/)
- [x] Easy web deployment via [textbase deploy](/docs/deployment/deploy-from-cli)
- [x] Easy web deployment via [textbase-client deploy](docs/docs/deployment/deploy-from-cli.md)
- [ ] SMS integration
- [ ] Native integration of other models (Claude, Llama, ...)
- [ ] Native integration of other models (Claude, ...)

![Demo Deploy GIF](assets/textbase-deploy.gif)

@@ -71,9 +71,9 @@ Path to the main.py file: examples/openai-bot/main.py # You can create a main.py
Now go to the link in blue color which is shown on the CLI and you will be able to chat with your bot!
![Local UI](assets/test_command.png)

### `Other commands have been mentioned in the documentaion website.` [Have a look](https://docs.textbase.ai/usage) 😃!
### `Other commands have been mentioned in the documentation website.` [Have a look](https://docs.textbase.ai/usage) 😃!


## Contributions

Contributions are welcome! Please open an issue or create a pull request.
Contributions are welcome! Please open an issue or create a pull request.
Binary file added docs/assets/mac_zip.png
3 changes: 2 additions & 1 deletion docs/docs/deployment/prerequisites.md
@@ -12,7 +12,8 @@ Before using any method, you need to ensure that:
3. Zip these two (or more) files into a `.zip` archive. It's important that it's a **.zip** archive and not anything else. If you are using MacOS, please read the note below.

## Important note for MacOS users
Please use [this](https://www.ezyzip.com/) website for creating archives as MacOS creates an extra `__MACOSX` folder when compressing using the native compress utility which causes some issues with our backend.
Please download `RAR Extractor MAX` from the App Store (![Mac Zip Software](../../assets/mac_zip.png)) to create archives, because macOS adds an extra `__MACOSX` folder when compressing with the native compress utility, which causes issues with our backend.


## Folder structure
When you decide to archive the files, please **MAKE SURE** that main.py and requirements.txt are available in the **root** of the archive itself. As in if the zip is extracted, it will produce two (or more) files/folders.
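For illustration, a minimal sketch of building a compliant archive with Python's standard `zipfile` module, which places both files at the archive root and, unlike the Finder utility, adds no `__MACOSX` folder; the file set is assumed to be just `main.py` and `requirements.txt`:

```py
import zipfile

# Sketch: create deploy.zip with main.py and requirements.txt at the archive root.
# Run from the directory that contains both files.
with zipfile.ZipFile("deploy.zip", "w", zipfile.ZIP_DEFLATED) as archive:
    archive.write("main.py")
    archive.write("requirements.txt")
```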
51 changes: 51 additions & 0 deletions docs/docs/examples/llama-bot.md
@@ -0,0 +1,51 @@
---
sidebar_position: 3
---

# Llama bot

This bot calls Meta's Llama 2 model through the Replicate API and processes the user input. It uses the Llama-2-7B chat model.

```py
from textbase import bot, Message
from textbase.models import Llama
from typing import List

# Load your Replicate API key
Llama.replicate_api_key = ""

# Default prompt for Llama-2-7B. States how the model is supposed to act.
SYSTEM_PROMPT = """\
You are a helpful assistant"""

@bot()
def on_message(message_history: List[Message], state: dict = None):

    # Generate a Llama-2-7B response
    bot_response = Llama.generate(
        system_prompt=SYSTEM_PROMPT,
        message_history=message_history,  # Assuming history is the list of user messages
    )

    response = {
        "data": {
            "messages": [
                {
                    "data_type": "STRING",
                    "value": bot_response
                }
            ],
            "state": state
        },
        "errors": [
            {
                "message": ""
            }
        ]
    }

    return {
        "status_code": 200,
        "response": response
    }
```
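The key can also be read from an environment variable instead of being hard-coded, mirroring the PaLM example below; a minimal sketch, assuming a `REPLICATE_API_KEY` variable is set (this name is illustrative and not part of `.env.example`):

```py
import os
from textbase.models import Llama

# Hypothetical: REPLICATE_API_KEY is an illustrative variable name,
# not one defined in .env.example.
Llama.replicate_api_key = os.getenv("REPLICATE_API_KEY")
```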
49 changes: 49 additions & 0 deletions docs/docs/examples/palmai-bot.md
@@ -0,0 +1,49 @@
---
sidebar_position: 2
---

# Google PaLM AI bot

This bot makes an API call to Google PaLM and processes the user input. It uses the PaLM chat API.

```py
import os
from textbase import bot, Message
from textbase.models import PalmAI
from typing import List

# Load your PaLM API key directly
# PalmAI.api_key = ""
# or from an environment variable:
PalmAI.api_key = os.getenv("PALM_API_KEY")

@bot()
def on_message(message_history: List[Message], state: dict = None):

    bot_response = PalmAI.generate(
        message_history=message_history,  # Assuming history is the list of user messages
    )

    response = {
        "data": {
            "messages": [
                {
                    "data_type": "STRING",
                    "value": bot_response
                }
            ],
            "state": state
        },
        "errors": [
            {
                "message": ""
            }
        ]
    }

    return {
        "status_code": 200,
        "response": response
    }
```
2 changes: 1 addition & 1 deletion docs/docs/intro.md
@@ -14,4 +14,4 @@ Coming soon:
- [x] [PyPI package](https://pypi.org/project/textbase-client/)
- [x] Easy web deployment via [textbase-client deploy](../docs/deployment/deploy-from-cli.md)
- [ ] SMS integration
- [ ] Native integration of other models (Claude, Llama, ...)
- [ ] Native integration of other models (Claude, ...)
7 changes: 6 additions & 1 deletion docs/docs/usage.md
@@ -14,12 +14,17 @@ This will start a local server and will give you a link which you can navigate t
```bash
textbase-client test
```
If you wish to run this in one go, you can make use of the `--path` flag
If you wish to run this in one go, you can make use of the `--path` and `--port` flags
```bash
textbase-client test --path=<path_to_main.py>
```
**If you wish to use the `--path` flag, make sure you have your path inside quotes.**

```bash
textbase-client test --port=8080
```
**Port 8080 is the default, but note that it is a commonly used port. If another application is already using it, this flag lets you change the backend server's port to avoid conflicts.**
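Both flags can also be combined in one invocation; a sketch, using the same placeholder path as above and assuming port 8081 is free:

```bash
textbase-client test --path="<path_to_main.py>" --port=8081
```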

### deploy
Before executing this command, make sure that
1. You have a `.zip` file which is made according to the instructions and folder structure given in the
41 changes: 41 additions & 0 deletions examples/llama-bot/main.py
@@ -0,0 +1,41 @@
from textbase import bot, Message
from textbase.models import Llama
from typing import List

# Load your Replicate API key
Llama.replicate_api_key = ""

# Prompt for Llama7b. Llama gets offensive with larger complicated system prompts. This works just fine
SYSTEM_PROMPT = """\
You are a helpful assistant"""

@bot()
def on_message(message_history: List[Message], state: dict = None):

    # Generate Llama7b response by default
    bot_response = Llama.generate(
        system_prompt=SYSTEM_PROMPT,
        message_history=message_history,  # Assuming history is the list of user messages
    )

    response = {
        "data": {
            "messages": [
                {
                    "data_type": "STRING",
                    "value": bot_response
                }
            ],
            "state": state
        },
        "errors": [
            {
                "message": ""
            }
        ]
    }

    return {
        "status_code": 200,
        "response": response
    }
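To try this example locally, serve it with the test command described in the usage docs; a sketch, assuming the repository root as the working directory:

```bash
textbase-client test --path="examples/llama-bot/main.py"
```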
28 changes: 23 additions & 5 deletions examples/openai-bot/main.py
@@ -1,6 +1,7 @@
from textbase import bot, Message
from textbase.models import OpenAI
from typing import List
import click

# Load your OpenAI API key
OpenAI.api_key = ""
@@ -15,11 +16,28 @@
def on_message(message_history: List[Message], state: dict = None):

    # Generate GPT-3.5 Turbo response
    bot_response = OpenAI.generate(
        system_prompt=SYSTEM_PROMPT,
        message_history=message_history,  # Assuming history is the list of user messages
        model="gpt-3.5-turbo",
    )
    try:
        bot_response = OpenAI.generate(
            system_prompt=SYSTEM_PROMPT,
            message_history=message_history,  # Assuming history is the list of user messages
            model="gpt-3.5-turbo",
        )
    except Exception as e:
        click.secho(str(e), fg='red')
        return {
            "status_code": 500,
            "response": {
                "data": {
                    "messages": [],
                    "state": state
                },
                "errors": [
                    {
                        "message": str(e)
                    }
                ]
            }
        }

    response = {
        "data": {
Expand Down
39 changes: 39 additions & 0 deletions examples/palmai-bot/main.py
@@ -0,0 +1,39 @@
import os
from textbase import bot, Message
from textbase.models import PalmAI
from typing import List

# Load your PaLM API key directly
# PalmAI.api_key = ""
# or from an environment variable:
PalmAI.api_key = os.getenv("PALM_API_KEY")


@bot()
def on_message(message_history: List[Message], state: dict = None):

    bot_response = PalmAI.generate(
        message_history=message_history,  # Assuming history is the list of user messages
    )

    response = {
        "data": {
            "messages": [
                {
                    "data_type": "STRING",
                    "value": bot_response
                }
            ],
            "state": state
        },
        "errors": [
            {
                "message": ""
            }
        ]
    }

    return {
        "status_code": 200,
        "response": response
    }
3 changes: 3 additions & 0 deletions pyproject.toml
@@ -15,6 +15,9 @@ tabulate = "^0.9.0"
functions-framework = "^3.4.0"
yaspin = "^3.0.0"
pydantic = "^2.3.0"
google-generativeai = "^0.1.0"
rich = "^13.5.2"
replicate = "^0.11.0"

[build-system]
requires = ["poetry-core"]
65 changes: 64 additions & 1 deletion textbase/models.py
@@ -4,6 +4,8 @@
import time
import typing
import traceback
import replicate
import google.generativeai as palm

from textbase import Message

@@ -143,4 +145,65 @@ def generate(
        data = json.loads(response.text)  # parse the JSON data into a dictionary
        message = data['message']

        return message
        return message

class PalmAI:
    api_key = None

    @classmethod
    def generate(
        cls,
        message_history: list[Message],
    ):
        assert cls.api_key is not None, "Palm API key is not set."
        palm.configure(api_key=cls.api_key)

        filtered_messages = []

        for message in message_history:
            # list of all the contents inside a single message
            contents = extract_content_values(message)
            if contents:
                filtered_messages.extend(contents)

        # send the request to the Google PaLM chat API
        response = palm.chat(messages=filtered_messages)

        print(response)
        return response.last

class Llama:
    replicate_api_key = None

    @classmethod
    def generate(
        cls,
        system_prompt: str,
        message_history: list[Message],
        model = "a16z-infra/llama7b-v2-chat:4f0a4744c7295c024a1de15e1a63c880d3da035fa1f49bfd344fe076074c8eea",
        temperature = 0.81,  # seems to give the best responses
        top_p = 0.95,
        max_length = 3008  # same reasoning as temperature
    ):
        try:
            assert cls.replicate_api_key is not None, "Replicate API key is not set."
            client = replicate.Client(api_token=cls.replicate_api_key)  # create the client for the model

            past_conversation = ""

            for message in message_history:
                if message["role"] == "user":
                    # The Llama 2 paper wraps user messages in [INST] ... [/INST];
                    # without the delimiter the responses degrade over time.
                    past_conversation += "[INST] " + " ".join(extract_content_values(message)) + " [/INST]\n"
                else:
                    past_conversation += " ".join(extract_content_values(message)) + "\n"  # response messages don't need delimiters

            response = client.run(
                model,
                input={
                    "prompt": past_conversation,
                    "system_prompt": system_prompt,
                    "temperature": temperature,
                    "top_p": top_p,
                    "max_length": max_length,
                    "repetition_penalty": 1
                }
            )

            resp = ""
            for word in response:
                resp += word

            return resp

        except Exception:
            print(f"An exception occurred while using this model; please try another model.\nException: {traceback.format_exc()}.")
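For intuition, a standalone sketch of the prompt string the message-history loop in `Llama.generate` assembles, assuming each message carries a single string content:

```py
# Sketch: reproduce the prompt format Llama.generate builds from a history.
history = [
    ("user", "Hi"),
    ("assistant", "Hello!"),
    ("user", "Tell me a joke"),
]

past_conversation = ""
for role, text in history:
    if role == "user":
        past_conversation += "[INST] " + text + " [/INST]\n"
    else:
        past_conversation += text + "\n"

print(past_conversation)
# [INST] Hi [/INST]
# Hello!
# [INST] Tell me a joke [/INST]
```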