Status: Experimental
# Log Data Model
This document defines a data model and semantic conventions for representing logs from various sources: application log files, machine-generated events, system logs, etc. Existing log formats can be unambiguously mapped to this data model. Reverse mapping from this data model is also possible to the extent that the target log format has equivalent capabilities.
The purpose of the data model is to have a common understanding of what a log record is, what data needs to be recorded, transferred, stored and interpreted by a logging system.
This proposal defines a data model for Standalone Logs.
The Data Model was designed to satisfy the following requirements:
- It should be possible to unambiguously map existing log formats to this Data Model. Translating log data from an arbitrary log format to this Data Model and back should ideally result in identical data.

- Mappings of other log formats to this Data Model should be semantically meaningful. The Data Model must preserve the semantics of particular elements of existing log formats.

- Translating log data from an arbitrary log format A to this Data Model and then translating from the Data Model to another log format B should ideally result in a meaningful translation of the log data that is no worse than a reasonable direct translation from log format A to log format B.

- It should be possible to efficiently represent the Data Model in concrete implementations that require the data to be stored or transmitted. We primarily care about 2 aspects of efficiency: CPU usage for serialization/deserialization and space requirements in serialized form. This is an indirect requirement that is affected by the specific representation of the Data Model rather than by the Data Model itself, but it is still useful to keep in mind.
The Data Model aims to successfully represent 3 sorts of logs and events:
- System Formats. These are logs and events generated by the operating system, over which we have no control - we cannot change the format or affect what information is included (unless the data is generated by an application that we can modify). An example of a system format is Syslog.

- Third-party Applications. These are logs generated by third-party applications. We may have certain control over what information is included, e.g. we may be able to customize the format. An example is an Apache log file.

- First-party Applications. These are applications that we develop, so we have some control over how the logs and events are generated and what information we include in them. We can likely modify the source code of the application if needed.
In this document we refer to the types `any` and `map<string, any>`, defined as follows.

A value of type `any` can be one of the following:

- A scalar value: number, string or boolean,
- A byte array,
- An array (a list) of `any` values,
- A `map<string, any>`.

A value of type `map<string, any>` is a map of string keys to `any` values. The keys in the map are unique (duplicate keys are not allowed). The representation of the map is language-dependent.

Arbitrarily deep nesting of values for arrays and maps is allowed (essentially allowing the representation of an equivalent of a JSON object).
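For illustration only, here is one possible way to model these two types in a statically typed language. This sketch assumes Go; the package name `model` and the type name `AnyValue` are invented for the example and are not part of the data model:

```go
// A minimal, illustrative sketch of the "any" and "map<string, any>" types.
// The names used here are assumptions, not definitions of the data model.
package model

// AnyValue models the "any" type: exactly one of the fields below is set.
// Arbitrary nesting is possible through ArrayValue and MapValue.
type AnyValue struct {
	StringValue *string             // scalar: string
	NumberValue *float64            // scalar: number
	BoolValue   *bool               // scalar: boolean
	BytesValue  []byte              // byte array
	ArrayValue  []AnyValue          // an array (list) of any values
	MapValue    map[string]AnyValue // a map<string, any>; Go map keys are unique
}
```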
This Data Model defines a logical model for a log record (irrespective of the physical format and encoding of the record). Each record contains 2 kinds of fields:
- Named top-level fields of specific type and meaning.

- Fields stored as `map<string, any>`, which can contain arbitrary values of different types. The keys and values for well-known fields follow semantic conventions for key names and possible values that allow all parties that work with the field to have the same interpretation of the data. See the references to semantic conventions for the `Resource` and `Attributes` fields and the examples in Appendix A.
The reasons for having these 2 kinds of fields are:
- Ability to efficiently represent named top-level fields, which are almost always present (e.g. when using encodings like Protocol Buffers where fields are enumerated but not named on the wire).

- Ability to enforce types of named fields, which is very useful for compiled languages with type checks.

- Flexibility to represent less frequent data as `map<string, any>`. This includes well-known data that has standardized semantics as well as arbitrary custom data that the application may want to include in the logs.
When designing this data model, we used the following reasoning to decide when to use a top-level named field:
- The field needs to be either mandatory for all records, frequently present in well-known log and event formats (such as `Timestamp`), or expected to be often present in log records in upcoming logging systems (such as `TraceId`).

- The field's semantics must be the same for all known log and event formats, and the field can be mapped directly and unambiguously to this data model.
Both of the above conditions were required to give the field a place in the top-level structure of the record.
Appendix A contains many examples that show how existing log formats map to the fields defined below. If there are questions about the meaning of a field, reviewing the examples may be helpful.
Here is the list of fields in a log record:
Field Name | Description |
---|---|
Timestamp | Time when the event occurred. |
ObservedTimestamp | Time when the event was observed. |
TraceId | Request trace id. |
SpanId | Request span id. |
TraceFlags | W3C trace flag. |
SeverityText | The severity text (also known as log level). |
SeverityNumber | Numerical value of the severity. |
Name | Short low cardinality event type. |
Body | The body of the log record. |
Resource | Describes the source of the log. |
Attributes | Additional information about the event. |
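To make the two kinds of fields concrete, the whole record could be sketched as a struct, continuing the illustrative Go package from the type sketch above. The struct mirrors the detailed field descriptions that follow; it is a sketch, not a normative definition:

```go
// LogRecord is an illustrative sketch of the logical record structure.
// Named top-level fields have fixed types; Resource and Attributes are
// map<string, any> fields that carry well-known or custom data.
type LogRecord struct {
	Timestamp         uint64              // ns since Unix epoch, origin clock (optional)
	ObservedTimestamp uint64              // ns since Unix epoch, observer clock
	TraceId           []byte              // W3C trace id (optional)
	SpanId            []byte              // W3C span id (optional)
	TraceFlags        byte                // W3C trace flags (optional)
	SeverityText      string              // original severity string (optional)
	SeverityNumber    int32               // normalized severity 1..24, 0 = unset
	Name              string              // short, low-cardinality event type (optional)
	Body              AnyValue            // string message or structured data
	Resource          map[string]AnyValue // describes the source of the log
	Attributes        map[string]AnyValue // information about this occurrence
}
```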
Below is the detailed description of each field.
### Field: `Timestamp`

Type: Timestamp, uint64 nanoseconds since Unix epoch.
Description: Time when the event occurred measured by the origin clock, i.e. the time at the source. This field is optional, it may be missing if the source timestamp is unknown.
### Field: `ObservedTimestamp`

Type: Timestamp, uint64 nanoseconds since Unix epoch.
Description: Time when the event was observed by the collection system. For events that originate in OpenTelemetry (e.g. using OpenTelemetry Logging SDK) this timestamp is typically set at the generation time and is equal to Timestamp. For events originating externally and collected by OpenTelemetry (e.g. using Collector) this is the time when OpenTelemetry's code observed the event measured by the clock of the OpenTelemetry code. This field SHOULD be set once the event is observed by OpenTelemetry.
For converting OpenTelemetry log data to formats that support only one timestamp, or when OpenTelemetry log data is received by recipients that internally support only one timestamp, the following logic is recommended:

- Use `Timestamp` if it is present, otherwise use `ObservedTimestamp`.
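A minimal sketch of this fallback, using the illustrative `LogRecord` struct above and treating a zero value as "not present":

```go
// effectiveTimestamp implements the recommended fallback for recipients that
// support only one timestamp: prefer Timestamp and fall back to
// ObservedTimestamp when Timestamp is not set.
func effectiveTimestamp(r LogRecord) uint64 {
	if r.Timestamp != 0 {
		return r.Timestamp
	}
	return r.ObservedTimestamp
}
```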
### Field: `TraceId`

Type: byte sequence.
Description: Request trace id as defined in W3C Trace Context. Can be set for logs that are part of request processing and have an assigned trace id. This field is optional.
### Field: `SpanId`

Type: byte sequence.

Description: Span id. Can be set for logs that are part of a particular processing span. If `SpanId` is present, `TraceId` SHOULD also be present. This field is optional.
### Field: `TraceFlags`

Type: byte.

Description: Trace flag as defined in the W3C Trace Context specification. At the time of writing the specification defines one flag - the SAMPLED flag. This field is optional.
### Field: `SeverityText`

Type: string.

Description: severity text (also known as log level). This is the original string representation of the severity as it is known at the source. If this field is missing and `SeverityNumber` is present, then the short name that corresponds to the `SeverityNumber` may be used as a substitution. This field is optional.
### Field: `SeverityNumber`

Type: number.

Description: numerical value of the severity, normalized to the values described in this document. This field is optional.

`SeverityNumber` is an integer number. Smaller numerical values correspond to less severe events (such as debug events), larger numerical values correspond to more severe events (such as errors and critical events). The following table defines the meaning of each `SeverityNumber` value:
SeverityNumber range | Range name | Meaning |
---|---|---|
1-4 | TRACE | A fine-grained debugging event. Typically disabled in default configurations. |
5-8 | DEBUG | A debugging event. |
9-12 | INFO | An informational event. Indicates that an event happened. |
13-16 | WARN | A warning event. Not an error but is likely more important than an informational event. |
17-20 | ERROR | An error event. Something went wrong. |
21-24 | FATAL | A fatal error such as application or system crash. |
Smaller numerical values in each range represent less important (less severe) events. Larger numerical values in each range represent more important (more severe) events. For example, `SeverityNumber=17` describes an error that is less critical than an error with `SeverityNumber=20`.
Mappings from existing logging systems and formats (or source formats for short) must define how the severity (or log level) of that particular format corresponds to the `SeverityNumber` of this data model, based on the meaning given for each range in the above table.
If the source format has more than one severity that matches a single range in this table, then the severities of the source format must be assigned numerical values from that range according to how severe (important) each source severity is.
For example, if the source format defines "Error" and "Critical" as error events, and "Critical" is the more important and more severe situation, then we can choose the following `SeverityNumber` values for the mapping: "Error"->17, "Critical"->18.
If the source format has only a single severity that matches the meaning of a range, then it is recommended to assign that severity the smallest value of the range.

For example, if the source format has an "Informational" log level and no other log levels with a similar meaning, then it is recommended to use `SeverityNumber=9` for "Informational".
Source formats that do not define a concept of severity or log level MAY omit the `SeverityNumber` and `SeverityText` fields. Backends and UIs may represent log records with missing severity information distinctly, or may interpret log records with missing `SeverityNumber` and `SeverityText` fields as if the `SeverityNumber` were set equal to INFO (numeric value of 9).
When performing a reverse mapping from `SeverityNumber` to a specific format, and the `SeverityNumber` has no corresponding mapping entry for that format, it is recommended to choose the target severity that is in the same severity range and is closest numerically.
For example, Zap has only one severity in the INFO range, called "Info". When doing the reverse mapping, all `SeverityNumber` values in the INFO range (numeric 9-12) will be mapped to Zap's "Info" level.
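A sketch of that reverse mapping for Zap's level names. The choices for ranges Zap does not define (such as TRACE, or the upper values of the ERROR range) are assumptions based on the "closest numerically" rule above:

```go
// zapLevelFor reverse-maps a SeverityNumber to a Zap level name. Every value
// in a range maps to the numerically closest Zap level defined in that range,
// e.g. the whole INFO range (9-12) maps to "Info".
func zapLevelFor(n int32) string {
	switch {
	case n < 1 || n > 24:
		return "Info" // missing or out-of-range severity, treated as INFO
	case n <= 8:
		return "Debug" // TRACE and DEBUG ranges: Zap's lowest level is Debug
	case n <= 12:
		return "Info"
	case n <= 16:
		return "Warn"
	case n == 17:
		return "Error"
	case n == 18:
		return "DPanic"
	case n <= 20:
		return "Panic"
	default:
		return "Fatal"
	}
}
```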
If `SeverityNumber` is present and has a value of ERROR (numeric 17) or higher, then it is an indication that the log record represents an erroneous situation. It is up to the reader of this value to make a decision on how to use this fact (e.g. UIs may display such errors in a different color or have a feature to find all erroneous log records).
If the log record represents an erroneous event and the source format does not define a severity or log level concept, then it is recommended to set `SeverityNumber` to ERROR (numeric 17) during the mapping process. If the log record represents a non-erroneous event, the `SeverityNumber` field may be omitted or may be set to any numeric value less than ERROR (numeric 17). The recommended value in this case is INFO (numeric 9). See Appendix B for more mapping examples.
The following table defines the recommended short name for each `SeverityNumber` value. The short name can be used, for example, for representing the `SeverityNumber` in the UI:
SeverityNumber | Short Name |
---|---|
1 | TRACE |
2 | TRACE2 |
3 | TRACE3 |
4 | TRACE4 |
5 | DEBUG |
6 | DEBUG2 |
7 | DEBUG3 |
8 | DEBUG4 |
9 | INFO |
10 | INFO2 |
11 | INFO3 |
12 | INFO4 |
13 | WARN |
14 | WARN2 |
15 | WARN3 |
16 | WARN4 |
17 | ERROR |
18 | ERROR2 |
19 | ERROR3 |
20 | ERROR4 |
21 | FATAL |
22 | FATAL2 |
23 | FATAL3 |
24 | FATAL4 |
When an individual log record is displayed, it is recommended to show both the `SeverityText` and `SeverityNumber` values. A recommended combined string in this case begins with the short name followed by the `SeverityText` in parentheses. For example, an "Informational" Syslog record will be displayed as INFO (Informational). When for a particular log record the `SeverityNumber` is defined but the `SeverityText` is missing, it is recommended to show only the short name, e.g. INFO.
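The short-name table and the display recommendation above can be implemented with a small helper, continuing the illustrative Go sketch (the function names are assumptions for the example):

```go
// shortNameTable lists the recommended short name for each SeverityNumber;
// index 0 corresponds to SeverityNumber 1.
var shortNameTable = [24]string{
	"TRACE", "TRACE2", "TRACE3", "TRACE4",
	"DEBUG", "DEBUG2", "DEBUG3", "DEBUG4",
	"INFO", "INFO2", "INFO3", "INFO4",
	"WARN", "WARN2", "WARN3", "WARN4",
	"ERROR", "ERROR2", "ERROR3", "ERROR4",
	"FATAL", "FATAL2", "FATAL3", "FATAL4",
}

// shortName returns the recommended short name for a SeverityNumber in the
// 1..24 range, or an empty string otherwise.
func shortName(n int32) string {
	if n < 1 || n > 24 {
		return ""
	}
	return shortNameTable[n-1]
}

// displaySeverity builds the recommended display string: the short name
// followed by SeverityText in parentheses when SeverityText is present,
// e.g. "INFO (Informational)"; just the short name, e.g. "INFO", otherwise.
func displaySeverity(n int32, severityText string) string {
	if severityText == "" {
		return shortName(n)
	}
	return shortName(n) + " (" + severityText + ")"
}
```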
When drop-down lists (or other UI elements that are intended to represent the possible set of values) are used for representing the severity, it is preferable to display the short name in such UI elements.
For example, a drop-down list of severities that allows filtering log records by severity is likely to be more usable if it contains the short names of `SeverityNumber` (and thus has a limited upper bound of elements) than one which lists all distinct `SeverityText` values known to the system (which can be a large number of elements, often differing only in capitalization or abbreviation, e.g. "Info" vs "Information").
In contexts where severity participates in less-than / greater-than comparisons, the `SeverityNumber` field should be used. `SeverityNumber` can be compared to another `SeverityNumber` or to numbers in the 1..24 range (or to the corresponding short names).
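For example, a filter that selects erroneous records only needs a numeric comparison (a sketch using the illustrative `LogRecord` struct from above):

```go
// isErroneous reports whether a record represents an erroneous situation,
// i.e. its SeverityNumber is ERROR (numeric 17) or higher.
func isErroneous(r LogRecord) bool {
	return r.SeverityNumber >= 17
}
```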
### Field: `Name`

Type: string.

Description: Short, low-cardinality event type that does not contain varying parts. `Name` describes what happened (e.g. "ProcessStarted"). It is recommended to be no longer than 50 characters. Typically used for filtering and grouping purposes in backends. This field is optional.
### Field: `Body`

Type: any.

Description: A value containing the body of the log record (see the description of the `any` type above). It can be, for example, a human-readable string message (including multi-line) describing the event in free form, or it can be structured data composed of arrays and maps of other values. First-party Applications SHOULD use a string message. However, a structured body may be necessary to preserve the semantics of some existing log formats. Can vary for each occurrence of the event coming from the same source. This field is optional.
### Field: `Resource`

Type: `map<string, any>`.

Description: Describes the source of the log, a.k.a. the resource. Multiple occurrences of events coming from the same event source can happen across time and they all have the same value of `Resource`. Can contain, for example, information about the application that emits the record or about the infrastructure where the application runs. Data formats that represent this data model may be designed in a manner that allows the `Resource` field to be recorded only once per batch of log records that come from the same source. SHOULD follow the OpenTelemetry semantic conventions for Resources. This field is optional.
### Field: `Attributes`

Type: `map<string, any>`.

Description: Additional information about the specific event occurrence. Unlike the `Resource` field, which is fixed for a particular source, `Attributes` can vary for each occurrence of the event coming from the same source. Can contain information about the request context (other than TraceId/SpanId). SHOULD follow the OpenTelemetry semantic conventions for Log Attributes or the semantic conventions for Span Attributes. This field is optional.
Additional information about errors and/or exceptions that are associated with a log record MAY be included in the structured data in the `Attributes` section of the record. If included, it MUST follow the OpenTelemetry semantic conventions for exception-related attributes.
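For illustration, a sketch that attaches exception information to a record using the illustrative types above. The attribute keys are the `exception.*` keys defined by the OpenTelemetry exception semantic conventions; the helper itself is hypothetical:

```go
// recordException stores exception details in the record's Attributes using
// the exception semantic convention keys.
func recordException(r *LogRecord, excType, message, stacktrace string) {
	if r.Attributes == nil {
		r.Attributes = map[string]AnyValue{}
	}
	r.Attributes["exception.type"] = AnyValue{StringValue: &excType}
	r.Attributes["exception.message"] = AnyValue{StringValue: &message}
	r.Attributes["exception.stacktrace"] = AnyValue{StringValue: &stacktrace}
}
```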
Below are examples that show one possible representation of log records in JSON. These are just examples to help understand the data model. Don’t treat the examples as the way to represent this data model in JSON.
This document does not define the actual encoding and format of the log record representation. Format definitions will be done in separate OTEPs (e.g. the log records may be represented as msgpack, JSON, Protocol Buffer messages, etc).
Example 1
{
  "Timestamp": "1586960586000000000",
  "Attributes": {
    "http.status_code": 500,
    "http.url": "http://example.com",
    "my.custom.application.tag": "hello"
  },
  "Resource": {
    "service.name": "donut_shop",
    "service.version": "2.0.0",
    "k8s.pod.uid": "1138528c-c36e-11e9-a1a7-42010a800198"
  },
  "TraceId": "f4dbb3edd765f620",  // this is a byte sequence
                                  // (hex-encoded in JSON)
  "SpanId": "43222c2d51a7abe3",
  "SeverityText": "INFO",
  "SeverityNumber": 9,
  "Body": "20200415T072306-0700 INFO I like donuts"
}
Example 2
{
  "Timestamp": "1586960586000000000",
  ...
  "Body": {
    "i": "am",
    "an": "event",
    "of": {
      "some": "complexity"
    }
  }
}
Example 3
{
  "Timestamp": "1586960586000000000",
  "Attributes": {
    "http.scheme": "https",
    "http.host": "donut.mycie.com",
    "http.target": "/order",
    "http.method": "post",
    "http.status_code": 500,
    "http.flavor": "1.1",
    "http.user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
  }
}
## Appendix A. Example Mappings

This section contains examples of mappings from other event and log formats to this data model.
### RFC5424 Syslog

Property | Type | Description | Maps to Unified Model Field |
---|---|---|---|
TIMESTAMP | Timestamp | Time when an event occurred measured by the origin clock. | Timestamp |
SEVERITY | enum | Defines the importance of the event. Example: `Debug` | Severity |
FACILITY | enum | Describes where the event originated. A predefined list of Unix processes. Part of event source identity. Example: `mail system` | Attributes["syslog.facility"] |
VERSION | number | Meta: protocol version, orthogonal to the event. | Attributes["syslog.version"] |
HOSTNAME | string | Describes the location where the event originated. Possible values are FQDN, IP address, etc. | Resource["host.hostname"] |
APP-NAME | string | User-defined app name. Part of event source identity. | Resource["service.name"] |
PROCID | string | Not well defined. May be used as a meta field for protocol operation purposes or may be part of event source identity. | Attributes["syslog.procid"] |
MSGID | string | Defines the type of the event. Part of event source identity. Example: "TCPIN" | Name |
STRUCTURED-DATA | array of maps of string to string | A variety of use cases depending on the SDID: can describe event source identity, can include data that describes a particular occurrence of the event, can be meta-information, e.g. quality of the timestamp value. | SDID origin.swVersion maps to Resource["service.version"]. SDID origin.ip maps to Attributes["net.host.ip"]. Rest of SDIDs -> Attributes["syslog.*"] |
MSG | string | Free-form text message about the event. Typically human readable. | Body |
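As a sketch of how this table could be applied in code, assume a syslog message has already been parsed into the hypothetical `syslogMessage` struct below (this is not a real parser API). The severity mapping is omitted here and follows the `SeverityNumber` rules described earlier:

```go
// syslogMessage is a hypothetical, already-parsed RFC5424 message used only
// to illustrate the mapping table above.
type syslogMessage struct {
	Timestamp uint64 // origin timestamp, ns since Unix epoch
	Facility  string
	Hostname  string
	AppName   string
	ProcID    string
	MsgID     string
	Msg       string
}

// fromSyslog maps a parsed syslog message onto the data model following the
// table above. SeverityText/SeverityNumber mapping is omitted for brevity.
func fromSyslog(m syslogMessage) LogRecord {
	return LogRecord{
		Timestamp: m.Timestamp,
		Name:      m.MsgID,
		Body:      AnyValue{StringValue: &m.Msg},
		Resource: map[string]AnyValue{
			"host.hostname": {StringValue: &m.Hostname},
			"service.name":  {StringValue: &m.AppName},
		},
		Attributes: map[string]AnyValue{
			"syslog.facility": {StringValue: &m.Facility},
			"syslog.procid":   {StringValue: &m.ProcID},
		},
	}
}
```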
### Windows Event Log

Property | Type | Description | Maps to Unified Model Field |
---|---|---|---|
TimeCreated | Timestamp | The time stamp that identifies when the event was logged. | Timestamp |
Level | enum | Contains the severity level of the event. | Severity |
Computer | string | The name of the computer on which the event occurred. | Resource["host.hostname"] |
EventID | uint | The identifier that the provider used to identify the event. | Name |
Message | string | The message string. | Body |
Rest of the fields. | any | All other fields in the event. | Attributes["winlog.*"] |
### SignalFx Events

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
Timestamp | Timestamp | Time when the event occurred measured by the origin clock. | Timestamp |
EventType | string | Short machine understandable string describing the event type. SignalFx specific concept. Non-namespaced. Example: k8s Event Reason field. | Name |
Category | enum | Describes where the event originated and why. SignalFx specific concept. Example: AGENT. | Attributes["com.splunk.signalfx.event_category"] |
Dimensions | map<string, string> | Helps to define the identity of the event source together with EventType and Category. Multiple occurrences of events coming from the same event source can happen across time and they all have the same value of Dimensions. | Resource |
Properties | map<string, any> | Additional information about the specific event occurrence. Unlike Dimensions which are fixed for a particular event source, Properties can have different values for each occurrence of the event coming from the same event source. | Attributes |
### Splunk HEC

We apply this mapping from HEC to the unified model:

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
time | numeric, string | The event time in epoch time format, in seconds. | Timestamp |
host | string | The host value to assign to the event data. This is typically the host name of the client that you are sending data from. | Resource["host.name"] |
source | string | The source value to assign to the event data. For example, if you are sending data from an app you are developing, you could set this key to the name of the app. | Resource["com.splunk.source"] |
sourcetype | string | The sourcetype value to assign to the event data. | Resource["com.splunk.sourcetype"] |
event | any | The JSON representation of the raw body of the event. It can be a string, number, string array, number array, JSON object, or a JSON array. | Body |
fields | map<string, any> | Specifies a JSON object that contains explicit custom fields. | Attributes |
index | string | The name of the index by which the event data is to be indexed. The index you specify here must be within the list of allowed indexes if the token has the indexes parameter set. | Attributes["com.splunk.index"] |
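A sketch of this direction of the mapping, assuming the HEC payload has already been decoded into the hypothetical `hecEvent` struct below (the struct is an assumption for the example); note that HEC `time` is in seconds and is converted to nanoseconds:

```go
// hecEvent is a hypothetical decoded Splunk HEC payload used only to
// illustrate the mapping table above.
type hecEvent struct {
	Time       float64             // epoch time in seconds
	Host       string
	Source     string
	Sourcetype string
	Index      string
	Event      AnyValue            // raw body of the event
	Fields     map[string]AnyValue // explicit custom fields
}

// fromHEC maps a decoded HEC event onto the data model following the table
// above, converting the timestamp from seconds to nanoseconds.
func fromHEC(e hecEvent) LogRecord {
	attrs := map[string]AnyValue{
		"com.splunk.index": {StringValue: &e.Index},
	}
	for k, v := range e.Fields {
		attrs[k] = v
	}
	return LogRecord{
		Timestamp: uint64(e.Time * 1e9),
		Body:      e.Event,
		Resource: map[string]AnyValue{
			"host.name":             {StringValue: &e.Host},
			"com.splunk.source":     {StringValue: &e.Source},
			"com.splunk.sourcetype": {StringValue: &e.Sourcetype},
		},
		Attributes: attrs,
	}
}
```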
When mapping from the unified model to HEC, we apply this additional mapping:
Unified model element | Type | Description | Maps to HEC |
---|---|---|---|
SeverityText | string | The severity of the event as a human-readable string. | fields['otel.log.severity.text'] |
SeverityNumber | string | The severity of the event as a number. | fields['otel.log.severity.number'] |
Name | string | Short event identifier that does not contain varying parts. | fields['otel.log.name'] |
TraceId | string | Request trace id. | fields['trace_id'] |
SpanId | string | Request span id. | fields['span_id'] |
TraceFlags | string | W3C trace flags. | fields['trace_flags'] |
### Log4j

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
Instant | Timestamp | Time when an event occurred measured by the origin clock. | Timestamp |
Level | enum | Log level. | Severity |
Message | string | Human readable message. | Body |
All other fields | any | Structured data. | Attributes |
### Zap

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
ts | Timestamp | Time when an event occurred measured by the origin clock. | Timestamp |
level | enum | Logging level. | Severity |
caller | string | Calling function's filename and line number. | Attributes, key=TBD |
msg | string | Human readable message. | Body |
All other fields | any | Structured data. | Attributes |
### Apache HTTP Server access log

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
%t | Timestamp | Time when an event occurred measured by the origin clock. | Timestamp |
%a | string | Client IP | Attributes["net.peer.ip"] |
%A | string | Server IP | Attributes["net.host.ip"] |
%h | string | Remote hostname. | Attributes["net.peer.name"] |
%m | string | The request method. | Attributes["http.method"] |
%v,%p,%U,%q | string | Multiple fields that can be composed into URL. | Attributes["http.url"] |
%>s | string | Response status. | Attributes["http.status_code"] |
All other fields | any | Structured data. | Attributes, key=TBD |
### CloudTrail Log Event

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
eventTime | string | The date and time the request was made, in coordinated universal time (UTC). | Timestamp |
eventSource | string | The service that the request was made to. This name is typically a short form of the service name without spaces plus .amazonaws.com. | Resource["service.name"]? |
awsRegion | string | The AWS region that the request was made to, such as us-east-2. | Resource["cloud.region"] |
sourceIPAddress | string | The IP address that the request was made from. | Resource["net.peer.ip"] or Resource["net.host.ip"]? TBD |
errorCode | string | The AWS service error if the request returns an error. | Name |
errorMessage | string | If the request returns an error, the description of the error. | Body |
All other fields | * | | Attributes["cloudtrail.*"] |
### Google Cloud Logging

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
timestamp | string | The time the event described by the log entry occurred. | Timestamp |
resource | MonitoredResource | The monitored resource that produced this log entry. | Resource |
log_name | string | The URL-encoded LOG_ID suffix of the log_name field identifies which log stream this entry belongs to. | Attributes["com.google.log_name"] |
json_payload | google.protobuf.Struct | The log entry payload, represented as a structure that is expressed as a JSON object. | Body |
proto_payload | google.protobuf.Any | The log entry payload, represented as a protocol buffer. | Body |
text_payload | string | The log entry payload, represented as a Unicode string (UTF-8). | Body |
severity | LogSeverity | The severity of the log entry. | Severity |
trace | string | The trace associated with the log entry, if any. | TraceId |
span_id | string | The span ID within the trace associated with the log entry. | SpanId |
labels | map<string,string> | A set of user-defined (key, value) data that provides additional information about the log entry. | Attributes |
http_request | HttpRequest | The HTTP request associated with the log entry, if any. | Attributes["google.httpRequest"] |
All other fields | | | Attributes["google.*"] |
### Elastic Common Schema

Field | Type | Description | Maps to Unified Model Field |
---|---|---|---|
@timestamp | datetime | Time the event was recorded | Timestamp |
message | string | Any type of message | Body |
labels | key/value | Arbitrary labels related to the event | Attributes[*] |
tags | array of string | List of values related to the event | ? |
trace.id | string | Trace ID | TraceId |
span.id* | string | Span ID | SpanId |
agent.ephemeral_id | string | Ephemeral ID created by agent | **Resource |
agent.id | string | Unique identifier of this agent | **Resource |
agent.name | string | Name given to the agent | Resource["telemetry.sdk.name"] |
agent.type | string | Type of agent | Resource["telemetry.sdk.language"] |
agent.version | string | Version of agent | Resource["telemetry.sdk.version"] |
source.ip, client.ip | string | The IP address that the request was made from. | Attributes["net.peer.ip"] or Attributes["net.host.ip"] |
cloud.account.id | string | ID of the account in the given cloud | Resource["cloud.account.id"] |
cloud.availability_zone | string | Availability zone in which this host is running. | Resource["cloud.zone"] |
cloud.instance.id | string | Instance ID of the host machine. | **Resource |
cloud.instance.name | string | Instance name of the host machine. | **Resource |
cloud.machine.type | string | Machine type of the host machine. | **Resource |
cloud.provider | string | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | Resource["cloud.provider"] |
cloud.region | string | Region in which this host is running. | Resource["cloud.region"] |
cloud.image.id* | string | | Resource["host.image.name"] |
container.id | string | Unique container id | Resource["container.id"] |
container.image.name | string | Name of the image the container was built on. | Resource["container.image.name"] |
container.image.tag | Array of string | Container image tags. | **Resource |
container.labels | key/value | Image labels. | Attributes[*] |
container.name | string | Container name. | Resource["container.name"] |
container.runtime | string | Runtime managing this container. Example: "docker" | **Resource |
destination.address | string | Destination address for the event | Attributes["destination.address"] |
error.code | string | Error code describing the error. | Attributes["error.code"] |
error.id | string | Unique identifier for the error. | Attributes["error.id"] |
error.message | string | Error message. | Attributes["error.message"] |
error.stack_trace | string | The stack trace of this error in plain text. | Attributes["error.stack_trace"] |
host.architecture | string | Operating system architecture | **Resource |
host.domain | string | Name of the domain of which the host is a member. For example, on Windows this could be the host's Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host's LDAP provider. | **Resource |
host.hostname | string | Hostname of the host. It normally contains what the hostname command returns on the host machine. | Resource["host.hostname"] |
host.id | string | Unique host id. | Resource["host.id"] |
host.ip | Array of string | Host IP | Resource["host.ip"] |
host.mac | array of string | MAC addresses of the host | Resource["host.mac"] |
host.name | string | Name of the host. It may contain what hostname returns on Unix systems, the fully qualified domain name, or a name specified by the user. | Resource["host.name"] |
host.type | string | Type of host. | Resource["host.type"] |
host.uptime | string | Seconds the host has been up. | ? |
service.ephemeral_id | string | Ephemeral identifier of this service | **Resource |
service.id | string | Unique identifier of the running service. If the service is comprised of many nodes, the service.id should be the same for all nodes. | **Resource |
service.name | string | Name of the service data is collected from. | Resource["service.name"] |
service.node.name | string | Specific node serving that service | Resource["service.instance.id"] |
service.state | string | Current state of the service. | Attributes["service.state"] |
service.type | string | The type of the service data is collected from. | **Resource |
service.version | string | Version of the service the data was collected from. | Resource["service.version"] |
* Not yet formalized into ECS.
** A resource that doesn’t exist in the OpenTelemetry resource semantic convention.
This is a selection of the most relevant ECS fields; see the full ECS reference for an exhaustive list.
## Appendix B. SeverityNumber Example Mappings

| Syslog | WinEvtLog | Log4j | Zap | java.util.logging | SeverityNumber |
|---|---|---|---|---|---|
| | | TRACE | | FINEST | TRACE |
| Debug | Verbose | DEBUG | Debug | FINER | DEBUG |
| | | | | FINE | DEBUG2 |
| | | | | CONFIG | DEBUG3 |
| Informational | Information | INFO | Info | INFO | INFO |
| Notice | | | | | INFO2 |
| Warning | Warning | WARN | Warn | WARNING | WARN |
| Error | Error | ERROR | Error | SEVERE | ERROR |
| Critical | Critical | | DPanic | | ERROR2 |
| Alert | | | Panic | | ERROR3 |
| Emergency | | FATAL | Fatal | | FATAL |
## References

- Log Data Model OTEP 0097