# secondary sampling
Secondary sampling means participants choose trace data from a request even when it is not sampled with B3. This is particularly important for customer support or triage in large deployments. For example:
- I want to see 10% of gateway requests with a path expression `/play/*`. However, I only want data from the gateway and playback services.
- I want to see 15% of `authUser()` gRPC requests, but only data between the auth service and its cache.
This design allows multiple participants to perform investigations that may overlap, while incurring overhead at most once. For example, if B3 is sampled, all investigations reuse its data. If B3 is not sampled, investigations record data only at trigger points in the request.
The fundamentals of this design are the following:
- A function of the request creates zero or more "sampling keys". This function can trigger anywhere in the service graph.
- A header `sampling-keys` is co-propagated with B3, carrying these labels.
- A delimited span tag `sampled-keys` is added to all recorded spans. `sampled-keys` is the subset of `sampling-keys` relevant for this hop. Notably, it may include the keyword 'b3' if the span was B3 sampled.
- A Zipkin collector routes data to relevant participants by parsing the `sampled-keys` tag (a routing sketch follows this list).
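As a sketch of that collector-side routing, assuming a comma-delimited `sampled-keys` value; the `SampledKeysRouter` and `Participant` names here are hypothetical, not part of Zipkin's actual collector API:

```java
import java.util.Map;

// Hypothetical sketch: route a recorded span to participants based on the
// "sampled-keys" tag. Participants are keyed by sampling key, e.g. "gatewayplay".
final class SampledKeysRouter {
  interface Participant {
    void accept(byte[] encodedSpan);
  }

  final Map<String, Participant> participants;

  SampledKeysRouter(Map<String, Participant> participants) {
    this.participants = participants;
  }

  void routeSpan(Map<String, String> tags, byte[] encodedSpan) {
    String sampledKeys = tags.get("sampled-keys");
    if (sampledKeys == null) return; // no secondary participants for this span
    for (String key : sampledKeys.split(",", -1)) {
      if (key.equals("b3")) continue; // 'b3' marks normal B3 sampling, not a participant
      Participant participant = participants.get(key);
      if (participant != null) participant.accept(encodedSpan);
    }
  }
}
```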
Typically, a Zipkin trace is sampled up front, before any activity is recorded. B3 propagation conveys the sampling decision downwards consistently. In other words, the decision never changes from unsampled to sampled on the same request.
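As a concrete illustration, the single-header B3 encoding carries the sampling state alongside the trace identifiers, so every downstream hop inherits the caller's decision (identifier values below are the examples from the B3 propagation spec):

```
b3: 80f198ee56343ba864fe8b2a57d3eff7-e457b5a2e4d86bd1-1-05e3ac9a4f6e3b90
```

Here the third field (`1`) is the sampling state; a `0` would propagate "unsampled" just as consistently.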
Many large sites use random sampling to ensure that only a small percentage (<5%) of requests result in a trace. While nuanced, it is important to note that even with random sampling, sites often have blacklists which prevent instrumentation from triggering at all. A prime example is health checks, which are usually never recorded even if everything else is randomly sampled.
Many conflate Zipkin and B3 with pure random sampling, because initially that was the only choice. However, times have changed. Sites often use conditions, such as properties of an HTTP request, to choose data. For example, record 100% of traffic at a specific endpoint while randomly sampling other traffic. Choosing what to record based on context, including request and node-specific state, is called conditional sampling.
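A minimal sketch of conditional sampling under these assumptions; the `samplePrimary` function, paths, and rates below are illustrative, not a real Zipkin or Brave API. It also folds in the health-check blacklist mentioned above:

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative conditional sampler: health checks never trigger instrumentation,
// one endpoint is recorded at 100%, and everything else is randomly sampled.
final class ConditionalSampler {
  static final float DEFAULT_RATE = 0.01f; // small random percentage, e.g. 1%

  static boolean samplePrimary(String httpPath) {
    if (httpPath.startsWith("/health")) return false;  // blacklist: never record
    if (httpPath.equals("/api/checkout")) return true; // condition: 100% of this endpoint
    return ThreadLocalRandom.current().nextFloat() < DEFAULT_RATE; // random otherwise
  }
}
```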
In either case, random or conditional, there are other guards as well. For example, decisions are subject to a rate limit: up to 1000 traces per second for an endpoint means effectively 100% sampling until/unless that cap is reached. Further concepts are available in William Louth's Scaling Distributed Tracing talk.
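For example, Brave (Zipkin's Java tracer) includes a rate-limiting sampler; a minimal sketch, assuming a Brave version that ships `brave.sampler.RateLimitingSampler`:

```java
import brave.Tracing;
import brave.sampler.RateLimitingSampler;

// Cap trace volume: sample at most 1000 traces per second, which behaves
// like 100% sampling until/unless the cap is reached.
public class RateLimitedTracing {
  public static void main(String[] args) {
    Tracing tracing = Tracing.newBuilder()
        .localServiceName("gateway")
        .sampler(RateLimitingSampler.create(1000))
        .build();
    // ... instrument requests with this Tracing instance ...
    tracing.close();
  }
}
```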
The important takeaway is that existing Zipkin sites select traces based on criteria visible at the beginning of the request. Once selected, this data is expected to be recorded into Zipkin consistently even if the request crosses 300 services.
For the rest of this document, we'll call this up-front, consistent decision the "primary sampling decision". We'll understand that this primary decision is propagated in-process in a trace context and across nodes using B3 propagation.