Structured Logging is a library that complements SLF4J and its MDC to

- augment log messages with information about the context in which they happen
- manage
  - the lifetime of that context information
  - the serialization of that context information
Imagine that...

- you do a `log.info("A new order has come in.");`
- and automatically get the order object attached to that message - the order has a `total`, a `customerId` and an `id` field
Now you can

- trace the order via its `id` and see what log messages it produced
- alert based on the `customerId` field because the same customer suspiciously placed 50 orders within one day
- get an overview of the median `total` of orders, or the distribution of `total`s, in the visualization tool of your choice
all without writing more code, because you already have all the information you need in the context of your log messages.
If there are log messages that happen in the context of an order, you may want to attach information about that order to these log messages.
```java
mdc(incomingOrder, () -> {
    log.info("A new order has come in.");
    if (isValid(incomingOrder)) {
        prepareForDelivery(incomingOrder);
    }
});
```
The `incomingOrder` will be attached to the log messages generated in the supplied lambda, including

- the message from `log.info("A new order has come in.")`
- all messages logged by `prepareForDelivery(...)` and `isValid(...)`
- all messages logged by methods called indirectly by the methods above
In Kibana, such a log message shows up with the `incomingOrder` attached as structured fields.
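Schematically, the JSON behind such a message looks roughly like this (a sketch; the envelope fields like `@timestamp` come from the LogstashEncoder configuration shown below, and the order's field values are illustrative):

```json
{
  "@timestamp": "2020-01-01T12:00:00.000+01:00",
  "level": "INFO",
  "message": "A new order has come in.",
  "incomingOrder": {
    "id": 1234,
    "customerId": 5678,
    "total": 99.95
  }
}
```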
This approach has various advantages over both plain logging as well as using MDC directly.
- If you log context information, you can easily trace the order from the example above by filtering by `incomingOrder.id : 1234`
- Searching is easier if the log message itself does not vary that much (just search for `message : "A new order has come in."`)
- Because the type of fields can be inferred, you can for example search by `incomingOrder.paymentDate > 2020-01-01`
- You don't have to remember to attach context information to every log message: context information is attached automatically while the context is active.
- You can do alerting and monitoring based on specific MDC values. Want to know the total monetary value of those `incomingOrder`s in the example above? You can now do that.
Structured Logging adds crucial features missing from plain MDC, even when using MDC's `putCloseable` (for comparison, see the plain-MDC sketch after this list):

- Proper serialization of complex objects and their fields is taken care of. MDC itself does not support a hierarchical structure, although most log processors like Logstash do.
- MDC is properly reset after the `try` block if it contained another value before.
- If a task is started in another thread, context information is retained because it is still relevant. This is solved by providing a custom task decorator (see below).
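Here is roughly what the same logging looks like with plain MDC and `putCloseable` - a sketch that assumes `incomingOrder` has a handwritten `toString()`:

```java
import org.slf4j.MDC;

// plain MDC: a single flat string value, serialized by hand -
// nested, typed fields like incomingOrder.total are not possible this way
try (MDC.MDCCloseable ignored = MDC.putCloseable("incomingOrder", incomingOrder.toString())) {
    log.info("A new order has come in.");
}
```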
For creating logs:

- You need to use SLF4J as your logging facade.
- You need to use Logback as your logging implementation.
- Optionally, you can use Spring, which uses both of these by default and already lets you register task decorators. This library does not depend on Spring, however. Other frameworks (or no framework at all) work just as well.
For consuming logs:

- Works with log consumers that
  - can digest JSON logs
  - support hierarchical data in a log message
- For example:
  - the ELK stack and Datadog are well supported
  - Graylog can consume JSON logs, but does not support hierarchical data
If you use Maven, add this to your pom.xml:

```xml
<dependency>
    <groupId>de.dm.infrastructure</groupId>
    <artifactId>structured-logging</artifactId>
    <version>3.0.0</version>
</dependency>
```
To log the whole log message as JSON, including the object you put into MDC, you need to configure Logback to use the `LogstashEncoder` and the `StructuredMdcJsonProvider`.
The following configuration can be used as an example:
```xml
<configuration>
    <include resource="org/springframework/boot/logging/logback/defaults.xml"/>
    <include resource="org/springframework/boot/logging/logback/console-appender.xml"/>

    <appender name="JSON_FILE" class="ch.qos.logback.core.FileAppender">
        <!-- use LogstashEncoder to log as JSON -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <!-- use StructuredMdcJsonProvider to log structured data as JSON in MDC fields -->
            <provider class="de.dm.prom.structuredlogging.StructuredMdcJsonProvider"/>
        </encoder>
        <file>${LOG_LOCATION}</file>
    </appender>

    <root level="INFO">
        <appender-ref ref="JSON_FILE"/>
        <appender-ref ref="CONSOLE"/>
    </root>
</configuration>
```
Use `MdcContext.mdc(...)` to define the scope in which context information should be set:
```java
import static de.dm.prom.structuredlogging.MdcContext.mdc;

...

log.info("a message without context");

TimeMachine timeMachine = TimeMachineBuilder.build();

mdc(timeMachine, () -> {
    log.info("time machine found. Trying to use it");
    timeMachine.setFluxFactor(42);
    MdcContext.update(timeMachine);
    travelSomewhenWith(timeMachine);
});

log.info("another message without context");
```
You can also return values:
```java
var currentTime = mdc(timeMachine, () -> {
    timeMachine.setFluxFactor(42);
    return travelSomewhenWith(timeMachine);
});
```
log.info("another message without context");
When you start another thread, MDC will be empty, although the things happening there usually happen in the same context. So if you want to retain the context information in the new thread, there's a decorator for your `Runnable` that solves this problem by copying the current MDC contents to the new thread in which your `Runnable` runs and resetting them when it's done:
```java
Runnable decoratedRunnable = MdcTaskDecorator.decorate(
    () -> doSomethingThatLogs(),
    OverwriteStrategy.LOG_OVERWRITE
);
```
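You can then hand the decorated `Runnable` to any executor - a minimal sketch, where the executor setup is just for illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

ExecutorService executor = Executors.newSingleThreadExecutor();
// the current MDC contents are copied into the worker thread
// and reset again when the Runnable is done
executor.submit(decoratedRunnable);
```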
If you use Spring and start the other thread by calling an `@Async` method, use the `SpringMdcTaskDecorator` for decorating the threads. You just need to configure an Executor:
```java
@Configuration
public class AsyncThreadConfiguration {
    ...

    @Bean
    public Executor asyncExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setTaskDecorator(new SpringMdcTaskDecorator(OverwriteStrategy.LOG_OVERWRITE));
        executor.initialize();
        return executor;
    }

    ...
}
```
If you already configured Executors for other reasons, just add `.setTaskDecorator(new SpringMdcTaskDecorator(...))` to them.
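With an executor configured like this, logs from `@Async` methods carry the caller's context. A sketch, where `DeliveryService`, `Order` and the log message are hypothetical:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

@Service
public class DeliveryService {

    private static final Logger log = LoggerFactory.getLogger(DeliveryService.class);

    @Async // runs on the executor configured above
    public void prepareForDelivery(Order order) {
        // still sees the MDC contents copied over by SpringMdcTaskDecorator,
        // so this message gets the caller's context attached
        log.info("Preparing order for delivery.");
    }
}
```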
There are three `OverwriteStrategy`s that tell the decorator what to do when it encounters MDC contents in the target thread:

- If you expect MDC content in the target thread, use `JUST_OVERWRITE`.
- If you don't expect MDC content in the target thread, use `LOG_OVERWRITE` or `PREVENT_OVERWRITE`.

Datadog's APM, for example, does give new threads an MDC context for tracing, so in that case use `JUST_OVERWRITE` so you don't get inappropriate warnings.
If you use your logs for monitoring, alerting or visualization, they are a functional requirement and should be tested.
The tests for structured-logging itself are implemented with log-capture, which can be easily used from JUnit or Cucumber. Give it a try.
There are three ways to define the MDC key for the objects you put into the context:

1. Use the default: the class's shortName, as seen in Step 3.

2. Define it manually:

```java
mdc("de_lorean", timeMachine, () -> {
    ...
    MdcContext.update("de_lorean", timeMachine);
    ...
});
```
3. Provide an `MdcKeySupplier`.

This is the recommended approach if you want to make sure that the same MDC key is always used for the same type, even if it differs from the type's shortName or the type is renamed.
First, define the MDC key for the type:

```java
// must be public and have a public, non-parameterized constructor
public final class TimeMachineKey implements MdcKeySupplier<TimeMachine> {
    @Override
    public String getMdcKey() {
        return "de_lorean";
    }
}
```
Then use it:

```java
// MdcContext.of(...) is defined so that you can only use TimeMachineKey with a TimeMachine.
mdc(TimeMachineKey.class, timeMachine, () -> {
    ...
    MdcContext.update(TimeMachineKey.class, timeMachine);
    ...
});
```
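The same pattern scales to further types. A sketch of a second key, binding a hypothetical `Order` type to its own MDC key:

```java
// hypothetical: a key that can only be used with an Order
public final class OrderKey implements MdcKeySupplier<Order> {
    @Override
    public String getMdcKey() {
        return "incoming_order";
    }
}
```

Because the interface is generic, using `OrderKey` with anything but an `Order` will not compile.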
You can also use try-with-resources to manage your MDC context if you prefer that or need to. For example, the following code

- would not be possible with the callback API because `availableSeats` is not effectively final
- would be inconvenient because there are multiple values in the MDC context, which would require nested callbacks
- may not be possible with the callback API if `seat.isAvailable()` can throw multiple checked exceptions (the callback API works with one checked exception, though)
```java
int availableSeats = 0;

for (Seat seat : seats) {
    try (var s = MdcContext.of(seat); var c = MdcContext.of(cinema)) {
        if (seat.isAvailable()) { // isAvailable() also logs
            availableSeats++;
        }
    }
}
```
JSON serialization is done with Jackson, so you can use the `com.fasterxml.jackson.annotation` annotations to fine-tune serialization. Here's an example that excludes a field from serialization:
```java
public class TimeMachine {
    @JsonIgnore
    public String getIrrelevantProperty() {
        ...
    }
}
```
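Other Jackson annotations work the same way. For example, to rename a field in the serialized output - a sketch, where `fluxFactor` is an assumed field:

```java
public class TimeMachine {
    // assumed field; serialized as "flux_factor" instead of "fluxFactor"
    @JsonProperty("flux_factor")
    private int fluxFactor;
}
```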
If you want to use your own ObjectMapper for serialization, you can exchange the default ObjectMapper by calling `setGlobalObjectMapper`. To reset to the default ObjectMapper, you can use `resetGlobalObjectMapper` at any time.
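A minimal sketch, assuming both are the static `MdcContext` methods implied above:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

ObjectMapper customMapper = new ObjectMapper()
        .findAndRegisterModules(); // e.g. to register additional Jackson modules

MdcContext.setGlobalObjectMapper(customMapper); // used for context serialization from now on
// ...
MdcContext.resetGlobalObjectMapper(); // back to the built-in default
```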
- Breaking Change: Structured Logging now requires Java 17
- New Feature: Convenience methods as an alternative to try-with-resources
- Updated dependencies
- New Feature: Made the used ObjectMapper configurable
- Updated dependencies
- New Feature: Added correct serialization of remaining java.time types (Year, YearMonth and MonthDay)
- New Feature: Added correct serialization of Duration
- Bugfix: The Task Decorator threw a NullPointerException when resetting MDC contents if the target thread's MDC was empty
- New Feature: The Task Decorator now allows setting an OverwriteStrategy
- Added proper serialization for missing JSR310 type LocalTime
- Breaking Changes
  - Renamed MdcContextId to MdcKeySupplier to make its purpose clear
    - migration: change your code references from `MdcContextId` to `MdcKeySupplier`
  - Removed generic type information from MdcContext because it is not necessary anymore
    - migration: change every use of `MdcContext<Foo, Bar>` to just `MdcContext`
  - The task decorator to be used with Spring is now named `SpringMdcTaskDecorator` instead of `MdcTaskDecorator`
    - migration: use `SpringMdcTaskDecorator` when creating your Executor bean
- New Features
  - Added convenience methods for creating MdcContexts without having an MdcKeySupplier
  - MdcContext can now be updated
  - Runnables can now be decorated directly via `MdcTaskDecorator` to retain MDC information even when not using Spring
- Fixes
  - Do not create an ObjectMapper every time an MdcContext is created
  - Do not overwrite MDC information in threads that already have a context in MdcTaskDecorator - a WARNing is logged instead because this indicates incorrect usage
- Added proper serialization for further JSR310 types. Now properly serializes
  - Instant (new)
  - LocalDate (new)
  - LocalDateTime
  - OffsetDateTime
  - OffsetTime (new)
  - Period (new)
  - ZonedDateTime (new)
Your dependencies include Spring and Lombok. What can I do if I don't want to use Spring and Lombok?
Actually, Spring and Lombok are not included if you just want to use Structured Logging. Spring and Lombok are only needed to build/develop Structured Logging.
So if you add Structured Logging to your dependencies, only three dependencies will be added to your project transitively:

- slf4j, the logging facade you probably use already
- logback, a concrete implementation of the slf4j API
- logstash-logback-encoder, needed to write logs as JSON
Is there anything to watch out for when logging that many fields?
You should be careful when changing the type of a field that you put into your context. If the indexed type does not match the type that is expected, your log messages may be thrown away. We've seen this happen in ELK.
For example: your order.id was numeric, now it's alphanumeric, but it's still indexed as a number. In that case, you have two options:

- contact your ELK admin and tell her about it so she can start re-indexing
- rename the field (either order or id) so that the types don't collide
Note that this also applies if you just use MDC directly. However, with Structured Logging it is more likely to happen because you usually log more fields.
Since I'm logging that much now, I also want to test my logging. How can I do that?
Actually, no one has asked that so far. We just want to advertise log-capture, which solves this problem.