Blog post for OpenTelemetry Generative AI updates #5575
base: main
Conversation
Co-authored-by: Liudmila Molkova <[email protected]>
content/en/blog/2024/otel-generative-ai/aspire_dashboard_trace.png
(review thread resolved)
Co-authored-by: Liudmila Molkova <[email protected]>
Co-authored-by: Liudmila Molkova <[email protected]>
Cool one! Added some comments.
The below is about whether we can make this zero-code or not, and what's remaining to do so: I have a similar example I've tried locally, and I don't see a way to implicitly configure the logging provider yet. I'm not sure if we want to make a hybrid to reduce the amount of code, or just leave the explicit tracing and logging setup in until logging can be configured via env vars. cc @anuraaga and @xrmx in case I got the below wrong. (Also attached, collapsed: `requirements`, `env`.)

Best I could manage was to add hooks only for the log/event setup:

```python
import os

from openai import OpenAI

# NOTE: OpenTelemetry Python Logs and Events APIs are in beta
from opentelemetry import _events, _logs
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk._events import EventLoggerProvider
from opentelemetry.sdk._logs import LoggerProvider
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

# Explicitly wire up the logging/events pipeline, since it can't yet be
# configured via environment variables alone.
_logs.set_logger_provider(LoggerProvider())
_logs.get_logger_provider().add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter())
)
_events.set_event_logger_provider(EventLoggerProvider())


def main():
    client = OpenAI()
    messages = [
        {
            "role": "user",
            "content": "Answer in up to 3 words: Which ocean contains the Falkland Islands?",
        },
    ]
    model = os.getenv("CHAT_MODEL", "gpt-4o-mini")
    chat_completion = client.chat.completions.create(model=model, messages=messages)
    print(chat_completion.choices[0].message.content)


if __name__ == "__main__":
    main()
```

Then I get a warning about overriding the event provider, but at least the events do show up:

```console
$ dotenv run -- opentelemetry-instrument python main.py
Overriding of current EventLoggerProvider is not allowed
Indian Ocean
```
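For context on the zero-code path being discussed here: with `opentelemetry-instrument`, tracing and export settings can already be driven purely by environment variables. A hedged sketch of the kind of `.env` settings involved — these are standard OTel Python variable names used for illustration, not the actual contents of the collapsed `env` attachment:

```shell
# Illustration only — assumed values, not the attached env file.
OPENAI_API_KEY=sk-...             # OpenAI credentials
CHAT_MODEL=gpt-4o-mini

OTEL_SERVICE_NAME=openai-example
OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
OTEL_TRACES_EXPORTER=otlp
OTEL_LOGS_EXPORTER=otlp           # does not configure the Events API yet, hence the explicit hooks
# Opt in to recording prompt/completion content on events
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true
```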
I guess what's out of scope is metrics, as they aren't implemented yet. One concern is that folks follow this, then later, when metrics are implemented, they need more instructions, or people need to remember to go back and change the docs, etc.

Since recent developments like asyncapi etc. are going in fairly quickly, if metrics were also added quickly, would it make sense to hold the blog until they are released? Or would it make more sense to do a second blog and revisit the setup instructions once that's supported?

Metrics are discussed as part of the semantic conventions, but correct, they are not yet implemented in the library. I think it's worth getting an article out there sooner rather than later. We might even attract some contributors for the metrics implementation.

Good point!
Great read, thanks for writing this!
content/en/blog/2024/otel-generative-ai/aspire-dashboard-content-capture.png
(review thread resolved)
content/en/blog/2024/otel-generative-ai/aspire-dashboard-trace.png
(review thread resolved)
```yaml
---
title: OpenTelemetry for Generative AI
linkTitle: OpenTelemetry for Generative AI
date: 2024-11-09
```
Putting this comment here to keep an eye on setting the date right when we finally publish. Do not resolve.
```diff
-date: 2024-11-09
+date: 2024-12-02
```
I would like @chalin to re-review the PR, and we shouldn't publish this week anyway due to Thanksgiving.
Initial drive-by comments, with a suggestion for fixing the build:

```console
> hugo --cleanDestinationDir -e dev --buildDrafts --baseURL https://deploy-preview-5575--opentelemetry.netlify.app/ --minify
...
Error: error building site: process: readAndProcessContent: "/opt/build/repo/content/en/blog/2024/otel-generative-ai/index.md:8:1": failed to unmarshal YAML: yaml: line 8: did not find expected ',' or ']'
Failed during stage 'building site': Build script returned non-zero exit code: 2 (https://ntl.fyi/exit-code-2)
...
```
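For what it's worth, this class of YAML error usually points at an unquoted frontmatter value that begins with `[` — Markdown links in author-style fields are a common culprit. A hedged illustration with made-up field values, not the actual line 8 of the post:

```yaml
# Breaks: YAML parses the leading "[" as the start of a flow sequence
author: [Jane Doe](https://example.com/janedoe)

# Works: quote the value so it is read as a plain string
author: '[Jane Doe](https://example.com/janedoe)'
```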
/fix:refcache

You triggered fix:refcache action run at https://github.com/open-telemetry/opentelemetry.io/actions/runs/11987016837

IMPORTANT: (RE-)RUN

Up to date now.
Title: OpenTelemetry for Generative AI
This blog post introduces enhancements to OpenTelemetry specifically tailored for generative AI technologies, focusing on the development of Semantic Conventions and the Python Instrumentation Library.
Samples are in Python
SIG: GenAI Observability
Sponsors: @tedsuo @lmolkova
Closes: #5581