beta 2 build changes #3

Open

wants to merge 25 commits into base: mainline

Commits (25)
a51fc05
WIP - basic openai instrumenting to whylabs oltp exporter
jamie256 Feb 1, 2024
1f289cd
Rename llm-traceguard to openllmtelemetry, openai instrumentation, ad…
jamie256 Feb 27, 2024
f1a75ce
Fix _instruments versions, add simple test.ipynb calling openai
jamie256 Mar 22, 2024
3216a13
Merge branch 'mainline' into dev/jamie/instrument_openai
jamie256 Mar 22, 2024
98d6332
Add ruff
Mar 25, 2024
98cd8b9
Merge from mainline and cleanup
jamie256 Mar 26, 2024
0dc1732
Formatting fixes, types, add langkit as optional for tests
jamie256 Mar 28, 2024
1bd626f
Update version to beta build
jamie256 Apr 16, 2024
cd7e630
Add bedrock and langkit guard
Apr 16, 2024
ffaa8b0
Clean up bedrock example
Apr 16, 2024
d31d49a
Rename to SecureApi instead of Guard
Apr 16, 2024
1c6b4aa
Rename Guard -> Secure for bedrock
Apr 16, 2024
426c181
Merge branch 'dev-openllmtelemetry' into dev/release_0.0.1.b2
jamie256 Apr 16, 2024
a19f461
Update version to b2 build, regen poetry.lock
jamie256 Apr 16, 2024
30a0890
Fix client parameters
jamie256 Apr 16, 2024
4007d24
Minor updates to diagnosting logging output and verbosity
jamie256 Apr 16, 2024
a242cfa
Remove errant warning and simplify return path for debug span exporter
jamie256 Apr 16, 2024
4bd4e35
Test fixes
jamie256 Apr 16, 2024
e21fe75
Add an OpenAI integration section with code snippet to README.md
jamie256 Apr 16, 2024
92c3b90
update .bumpversion.cfg
jamie256 Apr 16, 2024
f547665
Add docs link to README
jamie256 Apr 16, 2024
1f228a4
Add bedrock example to README and make WhyLabsSecureAPI optional to c…
jamie256 Apr 16, 2024
0b2c691
Cleanup optional WhyLabsSecureAPI and add bedrock span attributes, mi…
jamie256 Apr 16, 2024
dcbe5ff
set span type on Titan models
jamie256 Apr 16, 2024
82c9bf6
Update version to b3 build
jamie256 Apr 17, 2024
2 changes: 1 addition & 1 deletion .bumpversion.cfg
@@ -1,5 +1,5 @@
[bumpversion]
current_version = 0.0.1-dev0
current_version = 0.0.1.b3
tag = False
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(\-(?P<release>[a-z]+)(?P<build>\d+))?
serialize =
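Note: the `parse` pattern above only recognizes a hyphen-separated suffix (e.g. `0.0.1-dev0`) as a release/build; a dot-separated version like the new `0.0.1.b3` matches just the `major.minor.patch` portion. A quick stdlib sketch of that behavior:

```python
import re

# parse pattern copied from .bumpversion.cfg above
pattern = re.compile(
    r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"(\-(?P<release>[a-z]+)(?P<build>\d+))?"
)

print(pattern.match("0.0.1-dev0").groupdict())
# {'major': '0', 'minor': '0', 'patch': '1', 'release': 'dev', 'build': '0'}
print(pattern.match("0.0.1.b3").groupdict())
# {'major': '0', 'minor': '0', 'patch': '1', 'release': None, 'build': None}
```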
4 changes: 4 additions & 0 deletions .gitignore
@@ -19,6 +19,7 @@ lib64/
parts/
sdist/
var/
venv/
wheels/
share/python-wheels/
*.egg-info/
@@ -77,4 +78,7 @@ target/

# pyenv
.python-version

# VSCode
.vscode/
.ruff_cache
5 changes: 4 additions & 1 deletion Makefile
@@ -17,7 +17,7 @@ format-fix: ## Fix formatting issues
fix: lint-fix format-fix ## Fix all linting and formatting issues

install: ## Install dependencies with poetry
poetry install
poetry install -E "openai" -E "bedrock"

test: ## Run unit tests
poetry run pytest -vvv -s -o log_level=INFO -o log_cli=true tests/
@@ -28,6 +28,9 @@ integ: ## Run integration tests
dist: ## Build the distribution
poetry build

clean: ## remove build artifacts
rm -rf ./dist/*

bump-patch: ## Bump the patch version (_._.X) everywhere it appears in the project
@$(call i, Bumping the patch number)
poetry run bumpversion patch --allow-dirty
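The `-E` flags install the project's optional extras. For consumers installing the published package with pip rather than developing with poetry, the equivalent (assuming the wheel on PyPI exposes the same `openai` and `bedrock` extras) would be:

```bash
pip install "openllmtelemetry[openai,bedrock]"
```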
103 changes: 94 additions & 9 deletions README.md
@@ -1,6 +1,6 @@
# llm-traceguard
# OpenLLMTelemetry

**llm-traceguard** is an open-source Python library that provides Open Telemetry integration with Large Language Models (LLMs). It is designed to facilitate tracing and safeguarding applications that leverage LLMs, ensuring better monitoring and reliability.
`openllmtelemetry` is an open-source Python library that provides OpenTelemetry integration with Large Language Models (LLMs). It is designed to facilitate tracing applications that leverage LLMs and Generative AI, ensuring better observability and monitoring.

## Features

@@ -10,20 +10,102 @@

## Installation

To install **llm-traceguard**, simply use pip:
To install `openllmtelemetry`, simply use pip:

```bash
pip install llm-traceguard
pip install openllmtelemetry
```

## Usage 🚀

Here's a basic example of how to use **llm-traceguard** in your project:
Here's a basic example of how to use **OpenLLMTelemetry** in your project:

First, you need to set up a few environment variables to specify where your LLM telemetry should be sent, and make sure any API keys are set for interacting with your LLM and for sending the telemetry to [WhyLabs](https://whylabs.ai/free?utm_source=openllmtelemetry-Github&utm_medium=openllmtelemetry-readme&utm_campaign=WhyLabs_Secure).

```python
import os

os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "your-model-id" # e.g. model-1
os.environ["WHYLABS_API_KEY"] = "replace-with-your-whylabs-api-key"

```
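
Equivalently, you can export these variables in the shell before starting your application (same variable names as above; the values are placeholders):

```bash
export WHYLABS_DEFAULT_DATASET_ID="your-model-id"  # e.g. model-1
export WHYLABS_API_KEY="replace-with-your-whylabs-api-key"
```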

After you verify your environment variables are set, you can instrument your app by running the following:

```python
import openllmtelemetry

openllmtelemetry.instrument()
```

This will automatically instrument your LLM calls to gather OpenTelemetry traces and send them to WhyLabs.

## Integration: OpenAI
Integration with an OpenAI application is straightforward with the `openllmtelemetry` package.

First, you need to set a few environment variables. This can be done via your container setup or via code.

```python
import os

import openllmtelemetry

os.environ["WHYLABS_API_KEY"] = "<your-whylabs-api-key>"
os.environ["WHYLABS_DEFAULT_DATASET_ID"] = "<your-llm-resource-id>"
os.environ["WHYLABS_GUARD_ENDPOINT"] = "<your container endpoint>"
os.environ["WHYLABS_GUARD_API_KEY"] = "internal-secret-for-whylabs-Secure"

openllmtelemetry.instrument()
```

Once this is done, all of your OpenAI interactions will be automatically traced. If you have rulesets enabled for blocking in your WhyLabs Secure policy, the library will block requests accordingly.

```python
from openai import OpenAI
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a helpful chatbot. "
        },
        {
            "role": "user",
            "content": "Aren't noodles amazing?"
        }
    ],
    temperature=0.7,
    max_tokens=64,
    top_p=1
)
```
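
How a blocked request surfaces to the caller (an exception versus an altered response) is not spelled out in this README, so it can be worth wrapping calls defensively. A hypothetical sketch, not the library's documented contract:

```python
from openai import OpenAI

client = OpenAI()

try:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Aren't noodles amazing?"}],
    )
    print(response.choices[0].message.content)
except Exception as error:
    # the exact exception type raised on a blocked request is an assumption here
    print(f"Request did not complete: {error}")
```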

## Integration: Amazon Bedrock

One of the nice things about `openllmtelemetry` is that a single call to instrument your app works across various LLM providers. Using the same `instrument()` call above, you can also invoke models via the boto3 `bedrock-runtime` client and interact with LLMs such as Titan, and you get the same level of telemetry extracted and sent to WhyLabs.

Note: you may need to verify that your boto3 credentials are working before running the example below. For details, see the [boto3 documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html).

```python
import json
import logging

import boto3

logger = logging.getLogger(__name__)


def bedrock_titan(prompt: str):
    response_body = None
    try:
        model_id = "amazon.titan-text-express-v1"
        brt = boto3.client(service_name="bedrock-runtime")
        response = brt.invoke_model(body=json.dumps({"inputText": prompt}), modelId=model_id)
        response_body = json.loads(response.get("body").read())
    except Exception as error:
        logger.error(f"A client error occurred: {error}")

    return response_body


response = bedrock_titan("What is your name and what is the origin and reason for that name?")
print(response)
```
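
Titan text models return their generations in a `results` list, with the text under `outputText` (this follows the standard Titan response shape; confirm against the Bedrock docs for your model). A minimal sketch for pulling out the generated text:

```python
# response is the dict returned by bedrock_titan() above
if response:
    for result in response.get("results", []):
        print(result.get("outputText"))
```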

## Requirements 📋
@@ -38,8 +120,11 @@ Contributions are welcome! For major changes, please open an issue first to disc

## License 📄

**llm-traceguard** is licensed under the Apache-2.0 License. See [LICENSE](LICENSE) for more details.
**OpenLLMTelemetry** is licensed under the Apache-2.0 License. See [LICENSE](LICENSE) for more details.

## Contact 📧

For support or any questions, feel free to contact us at [email protected].
For support or any questions, feel free to contact us at [email protected].

## Documentation
More documentation can be found on the WhyLabs site: https://whylabs.ai/docs/
4 changes: 0 additions & 4 deletions llm_traceguard/__init__.py

This file was deleted.

2 changes: 0 additions & 2 deletions llm_traceguard/instrument.py

This file was deleted.

1 change: 0 additions & 1 deletion llm_traceguard/version.py

This file was deleted.

113 changes: 113 additions & 0 deletions notebooks/test.ipynb
@@ -0,0 +1,113 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"%pip install openai"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"os.environ['TRACE_PROMPT_AND_RESPONSE'] = 'true'\n",
"os.environ[\"WHYLABS_DEFAULT_ORG_ID\"] = \"\" # set your org id\n",
"os.environ[\"WHYLABS_API_KEY\"] = \"\"\n",
"os.environ[\"WHYLABS_DEFAULT_DATASET_ID\"] = \"\" #\n",
"# os.environ[\"WHYLABS_API_ENDPOINT\"] = \"\"\n",
"# os.environ[\"WHYLABS_TRACES_ENDPOINT\"] =\"\"\n",
"# os.environ[\"OPENAI_API_KEY\"] = \"\""
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<opentelemetry.trace.ProxyTracer at 0x7fd920137400>"
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import openllmtelemetry\n",
"\n",
"openllmtelemetry.instrument()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"ChatCompletion(id='chatcmpl-95PI3UrvG8c5bbHLDf197AcdaPLK8', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='Positive', role='assistant', function_call=None, tool_calls=None))], created=1711075883, model='gpt-3.5-turbo-0125', object='chat.completion', system_fingerprint='fp_3bc1b5746c', usage=CompletionUsage(completion_tokens=1, prompt_tokens=40, total_tokens=41))"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"from openai import OpenAI\n",
"client = OpenAI()\n",
"\n",
"response = client.chat.completions.create(\n",
" model=\"gpt-3.5-turbo\",\n",
" messages=[\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"You will be provided with a tweet, and your task is to classify its sentiment as positive, neutral, or negative.\"\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": \"I love lasagna!\"\n",
" }\n",
" ],\n",
" temperature=0.7,\n",
" max_tokens=64,\n",
" top_p=1\n",
")\n",
"\n",
"response"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "openllmtelemetry-D7f_qCE2-py3.9",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.18"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
3 changes: 3 additions & 0 deletions openllmtelemetry/__init__.py
@@ -0,0 +1,3 @@
from openllmtelemetry.instrument import instrument

__ALL__ = [instrument]