Clarify the mapping between Google Cloud Logging and OpenTelemetry #3775

Closed (wants to merge 2 commits)
71 changes: 71 additions & 0 deletions specification/compatibility/googlecloud_logging.md
<!--- Hugo front matter used to generate the website version of this page:
linkTitle: Google Cloud Logging
--->

# Google Cloud Logging

**Status**: [Experimental](../document-status.md)

Google Cloud Logging uses the [LogEntry](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry) to
carry log information. This section uses the JSON representation of `LogEntry` to map its fields to OpenTelemetry
fields and attributes.

## Semantic Mapping

Some `LogEntry` fields can be mapped directly to existing OpenTelemetry Semantic Conventions attributes.

| Field | Type | Description | Maps to Unified Model Field |
|-------------|-----------|----------------------------------------|--------------------------------|
| `insertId` | `string` | A unique identifier for the log entry. | `Attributes["log.record.uid"]` |

## Severity Mapping

Mapping from [Google Cloud Log Severity](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity)
to the OpenTelemetry Severity Number

| CloudLog | Severity Number | CloudLog Description |
|------------------|------------------|----------------------------------------------------------------------------------------|
| `DEFAULT`(0) | `UNSPECIFIED`(0) | The log entry has no assigned severity level. |
| `DEBUG`(100) | `DEBUG`(5) | Debug or trace information. |
| `INFO`(200) | `INFO`(9) | Routine information, such as ongoing status or performance. |
| `NOTICE`(300) | `INFO2`(10) | Normal but significant events, such as start up, shut down, or a configuration change. |
| `WARNING`(400) | `WARN`(13) | Warning events might cause problems. |
| `ERROR`(500) | `ERROR`(17) | Error events are likely to cause problems. |
| `CRITICAL`(600) | `FATAL`(21) | Critical events cause more severe problems or outages. |
| `ALERT`(700) | `FATAL2`(22) | A person must take an action immediately. |
| `EMERGENCY`(800) | `FATAL4`(24) | One or more systems are unusable. |
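As a non-normative illustration, the table above is a simple lookup keyed by the `LogSeverity` enum value (the constant name here is made up, not part of the spec):

```python
# Sketch of the Cloud Logging -> OpenTelemetry severity mapping table.
# Keys are LogSeverity enum values, values are OTel SeverityNumbers.
CLOUD_TO_OTEL_SEVERITY = {
    0: 0,     # DEFAULT    -> UNSPECIFIED
    100: 5,   # DEBUG      -> DEBUG
    200: 9,   # INFO       -> INFO
    300: 10,  # NOTICE     -> INFO2
    400: 13,  # WARNING    -> WARN
    500: 17,  # ERROR      -> ERROR
    600: 21,  # CRITICAL   -> FATAL
    700: 22,  # ALERT      -> FATAL2
    800: 24,  # EMERGENCY  -> FATAL4
}
```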

Mapping from OpenTelemetry Severity Number to a
[Google Cloud Log Severity](https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#LogSeverity)

| Severity Number | CloudLog |
|-----------------------------|------------------|
| `UNSPECIFIED`(0) | `DEFAULT`(0) |
| `TRACE`(1) - `DEBUG4`(8) | `DEBUG`(100) |
| `INFO`(9) | `INFO`(200) |
| `INFO2`(10) - `INFO4`(12) | `NOTICE`(300) |
| `WARN`(13) - `WARN4`(16) | `WARNING`(400) |
| `ERROR`(17) - `ERROR4`(20) | `ERROR`(500)     |
| `FATAL`(21) | `CRITICAL`(600) |
| `FATAL2`(22) - `FATAL3`(23) | `ALERT`(700) |
| `FATAL4`(24) | `EMERGENCY`(800) |
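Because the reverse direction collapses ranges of severity numbers, it is easier to express as comparisons than as a lookup. A non-normative sketch (the function name is hypothetical):

```python
def otel_to_cloud_severity(severity_number: int) -> int:
    """Map an OpenTelemetry SeverityNumber (0-24) to a Cloud Logging
    LogSeverity enum value, per the range table above."""
    if severity_number == 0:
        return 0      # UNSPECIFIED -> DEFAULT
    if severity_number <= 8:
        return 100    # TRACE..DEBUG4 -> DEBUG
    if severity_number == 9:
        return 200    # INFO -> INFO
    if severity_number <= 12:
        return 300    # INFO2..INFO4 -> NOTICE
    if severity_number <= 16:
        return 400    # WARN..WARN4 -> WARNING
    if severity_number <= 20:
        return 500    # ERROR..ERROR4 -> ERROR
    if severity_number == 21:
        return 600    # FATAL -> CRITICAL
    if severity_number <= 23:
        return 700    # FATAL2..FATAL3 -> ALERT
    return 800        # FATAL4 -> EMERGENCY
```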

## Miscellaneous

| Field | Type | Description | Maps to Unified Model Field |
|--------------------|--------------------------|------------------------------------------------------------------------------------------------------------------|-----------------------------------------------|
| `timestamp` | `string` | The time the event described by the log entry occurred. | `Timestamp` |
| `receiveTimestamp` | `string` | The time the log entry was received. | `ObservedTimestamp` |
| `logName` | `string` | The URL-encoded log ID suffix of the log_name field identifies which log stream this entry belongs to. | `Attributes["gcp.log_name"]` (string) |
| `jsonPayload` | `google.protobuf.Struct` | The log entry payload, represented as a structure that is expressed as a JSON object. | `Body` (KVList) |
| `protoPayload` | `google.protobuf.Any` | The log entry payload, represented as a protocol buffer. | `Body` (KVList, key from JSON representation) |
| `textPayload` | `string` | The log entry payload, represented as a Unicode string (UTF-8). | `Body` (string) |
| `trace` | `string` | The trace associated with the log entry, if any. | `TraceId` |
| `spanId` | `string` | The span ID within the trace associated with the log entry. | `SpanId` |
| `traceSampled` | `boolean` | The sampling decision of the trace associated with the log entry. | `TraceFlags.SAMPLED` |
| `labels` | `map<string,string>` | A set of user-defined (key, value) data that provides additional information about the log entry. | `Attributes` |
| `resource` | `MonitoredResource` | The monitored resource that produced this log entry. | `Resource["gcp.*"]` |
| `httpRequest` | `HttpRequest` | The HTTP request associated with the log entry, if any. | `Attributes["gcp.http_request"]` (KVList) |

Could be worth attempting to map these to HTTP semconv? Not sure why I didn't do it initially.

  • requestMethod => http.request.method
  • requestUrl => url.full
  • requestSize => see [0] below
  • status => http.response.status_code
  • responseSize => see [0] below
  • userAgent => user_agent.original
  • remoteIp => client.address
  • serverIp => server.address
  • referer => http.request.header.referer[0]
  • protocol => net.protocol.name (lower case), net.protocol.version

The remaining ones can be mapped to gcp.*:

  • latency
  • cacheLookup
  • cacheHit
  • cacheValidatedWithOriginServer
  • cacheFillBytes

[0] I had previously opened open-telemetry/semantic-conventions#84 to add "whole-request" size attributes, i.e. headers + body. I will rebase / update the PR this week to see if it gets any traction.
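A rough sketch of this suggestion (the function name and the `gcp.http_request.*` fallback keys are hypothetical, and the attribute names are the reviewer's proposal, not the spec):

```python
def map_http_request(http_request: dict) -> dict:
    """Sketch: map a LogEntry httpRequest object to the HTTP
    semantic-convention attributes suggested above."""
    direct = {
        "requestMethod": "http.request.method",
        "requestUrl": "url.full",
        "status": "http.response.status_code",
        "userAgent": "user_agent.original",
        "remoteIp": "client.address",
        "serverIp": "server.address",
    }
    attrs = {}
    for src, dest in direct.items():
        if src in http_request:
            attrs[dest] = http_request[src]
    if "referer" in http_request:
        # HTTP header attributes are list-valued in semconv.
        attrs["http.request.header.referer"] = [http_request["referer"]]
    if "protocol" in http_request:
        # e.g. "HTTP/1.1" -> name "http", version "1.1"
        name, _, version = http_request["protocol"].partition("/")
        attrs["net.protocol.name"] = name.lower()
        if version:
            attrs["net.protocol.version"] = version
    # The remaining fields fall back to gcp.* attributes.
    for src in ("latency", "cacheLookup", "cacheHit",
                "cacheValidatedWithOriginServer", "cacheFillBytes"):
        if src in http_request:
            attrs["gcp.http_request." + src] = http_request[src]
    return attrs
```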

| `operation` | `LogEntryOperation` | Information about an operation associated with the log entry. | `Attributes["gcp.operation"]` (KVList) |
| `sourceLocation` | `LogEntrySourceLocation` | Source code location information associated with the log entry. | `Attributes["gcp.source_location"]` (KVList) |

Looking at the docs for LogEntrySourceLocation, I think this ought to be mapped to the equivalent semconv `code.*` attributes as follows:

  • sourceLocation.file => code.filepath
  • sourceLocation.line => code.lineno

Less sure of what to do with sourceLocation.function, which is described as

Human-readable name of the function or method being invoked, with optional context such as the class or package name. This information may be used in contexts such as the logs viewer, where a file and line number are less meaningful. The format can vary by language. For example: qual.if.ied.Class.method (Java), dir/package.func (Go), function (Python).

It ought to go into the `code.namespace` and `code.function` attributes. If you happen to have a sample of data across the different languages for which GCP has a logging integration, perhaps the right thing to do can be figured out based on that? Failing that, a reasonable best effort might be (Python-style split / indexing):

  • code.function gets sourceLocation.function.split(".")[-1]
  • code.namespace gets sourceLocation.function.split(".")[:-1]
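That best-effort split could look like this (hypothetical helper implementing the suggestion above, not a spec-defined mapping):

```python
def split_function(qualified: str) -> tuple:
    """Best-effort split of sourceLocation.function into
    (code.namespace, code.function): everything after the last dot
    is the function, everything before it is the namespace."""
    parts = qualified.split(".")
    return ".".join(parts[:-1]), parts[-1]
```

Note this yields an empty namespace for a bare Python `function`, and a `dir/package`-style namespace for Go, which may or may not be what consumers expect.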

| `split` | `LogSplit` | Information indicating this LogEntry is part of a sequence of multiple log entries split from a single LogEntry. | `Attributes["gcp.log_split"]` (KVList) |
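Putting the table together, a non-normative sketch of the overall `LogEntry` (JSON representation) to log-record mapping; the helper name is hypothetical and payload/resource handling is simplified for illustration:

```python
def log_entry_to_otel(entry: dict) -> dict:
    """Sketch of the field mapping table above: convert a LogEntry
    (as a JSON-decoded dict) to a dict of OTel log-record fields."""
    record = {
        "Timestamp": entry.get("timestamp"),
        "ObservedTimestamp": entry.get("receiveTimestamp"),
        "Attributes": dict(entry.get("labels", {})),
    }
    if "logName" in entry:
        record["Attributes"]["gcp.log_name"] = entry["logName"]
    if "insertId" in entry:
        record["Attributes"]["log.record.uid"] = entry["insertId"]
    # At most one of the three payload fields is set.
    for payload in ("jsonPayload", "protoPayload", "textPayload"):
        if payload in entry:
            record["Body"] = entry[payload]
            break
    if "trace" in entry:
        # trace has the form "projects/<project-id>/traces/<trace-id>";
        # keep only the trace-id suffix.
        record["TraceId"] = entry["trace"].rsplit("/", 1)[-1]
    if "spanId" in entry:
        record["SpanId"] = entry["spanId"]
    if entry.get("traceSampled"):
        record["TraceFlags"] = 0x01  # SAMPLED
    return record
```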
19 changes: 0 additions & 19 deletions specification/logs/data-model-appendix.md
@@ -16,7 +16,6 @@ the respective exporter documentation if exact details are required.
* [Zap](#zap)
* [Apache HTTP Server access log](#apache-http-server-access-log)
* [CloudTrail Log Event](#cloudtrail-log-event)
* [Google Cloud Logging](#google-cloud-logging)
* [Elastic Common Schema](#elastic-common-schema)
- [Appendix B: `SeverityNumber` example mappings](#appendix-b-severitynumber-example-mappings)

@@ -480,24 +479,6 @@ When mapping from the unified model to HEC, we apply this additional mapping:
</tr>
</table>

### Google Cloud Logging

Field | Type | Description | Maps to Unified Model Field
-----------------|--------------------| ------------------------------------------------------- | ---------------------------
timestamp | string | The time the event described by the log entry occurred. | Timestamp
resource | MonitoredResource | The monitored resource that produced this log entry. | Resource
log_name | string | The URL-encoded LOG_ID suffix of the log_name field identifies which log stream this entry belongs to. | Attributes["gcp.log_name"]
json_payload | google.protobuf.Struct | The log entry payload, represented as a structure that is expressed as a JSON object. | Body
proto_payload | google.protobuf.Any | The log entry payload, represented as a protocol buffer. | Body
text_payload | string | The log entry payload, represented as a Unicode string (UTF-8). | Body
severity | LogSeverity | The severity of the log entry. | Severity
trace | string | The trace associated with the log entry, if any. | TraceId
span_id | string | The span ID within the trace associated with the log entry. | SpanId
labels | map<string,string> | A set of user-defined (key, value) data that provides additional information about the log entry. | Attributes
http_request | HttpRequest | The HTTP request associated with the log entry, if any. | Attributes["gcp.http_request"]
trace_sampled | boolean | The sampling decision of the trace associated with the log entry. | TraceFlags.SAMPLED
All other fields | | | Attributes["gcp.*"]

### Elastic Common Schema

<table>