Indicators in Practice

What Do You and Your Users Care About?

  • Too many indicators become noise; too few may leave significant behaviors of your system unexamined

Broad Categories

  • User-facing serving systems
    • Availability - Could we respond to the request?
    • Latency - How long did it take to respond?
    • Throughput - How many requests could be handled?
  • Storage Systems
    • Latency - How long did it take to read or write?
    • Availability - Can we access the data on demand?
    • Durability - Is the data still there when we need it?
  • Big data systems (data processing pipelines)
    • Throughput - How much data is being processed?
    • End-to-end latency - How long does it take the data to progress from ingestion to completion?
  • All systems: correctness
    • Was the right answer returned?
    • Was the right data retrieved?
    • Was the right analysis done?
    • Correctness is important to track as an indicator of system health, even though it’s often a property of the data in the system rather than the infrastructure per se, and so usually not an SRE responsibility to meet.
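
To make the serving-system category concrete, here is a minimal sketch of computing availability, latency, and throughput from a request log. The log format, field names, and sample values are assumptions for illustration, not anything from the source.

```python
# Hypothetical request log: one entry per request, with HTTP status
# and how long the request took. Format assumed for illustration.
requests = [
    {"status": 200, "duration_ms": 45},
    {"status": 200, "duration_ms": 120},
    {"status": 500, "duration_ms": 8},
    {"status": 200, "duration_ms": 60},
]
window_seconds = 60  # the period of time the log covers

# Availability: could we respond to the request?
ok = [r for r in requests if r["status"] < 500]
availability = len(ok) / len(requests)

# Latency: how long did successful requests take? (median here)
durations = sorted(r["duration_ms"] for r in ok)
latency_p50_ms = durations[len(durations) // 2]

# Throughput: how many requests could be handled per second?
throughput_qps = len(requests) / window_seconds

print(f"availability={availability:.0%}  "
      f"p50 latency={latency_p50_ms}ms  "
      f"throughput={throughput_qps:.2f} qps")
```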

Aggregation

  • Using averages can hide long tails or large instantaneous loads
  • Use distributions/percentiles
    • High-order percentile (99th or 99.9th) shows you a plausible worst-case value
    • Using median (50th percentile) emphasizes the typical case

User studies have shown that people typically prefer a slightly slower system to one with high variance in response time, so some SRE teams focus only on high percentile values, on the grounds that if the 99.9th percentile behavior is good, then the typical experience is certainly going to be.
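
A toy illustration of the point about averages: the latency values below are made up, but the effect is general. The mean looks healthy, the median shows the typical case, and only the 99th percentile exposes the tail.

```python
import statistics

# 98 fast requests and 2 pathologically slow ones (made-up numbers).
latencies_ms = [20] * 98 + [2000, 5000]

mean = statistics.mean(latencies_ms)
percentiles = statistics.quantiles(latencies_ms, n=100)
p50, p99 = percentiles[49], percentiles[98]

# The mean (~90 ms) suggests a healthy service; p50 shows the typical
# request takes 20 ms; p99 reveals a plausible worst case near 5 s
# that the average completely hides.
print(f"mean={mean:.0f}ms  p50={p50:.0f}ms  p99={p99:.0f}ms")
```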

Standardize Indicators

Standardize on common definitions for SLIs so that you don’t have to reason about them from first principles each time. Any feature that conforms to the standard definition templates can be omitted from the specification of an individual SLI.

  • Aggregation intervals: "Averaged over 1 minute"
  • Aggregation regions: "All the tasks in a cluster"
  • How frequently measurements are made: "Every 10 seconds"
  • Which requests are included: "HTTP GETs from black-box monitoring jobs"
  • How the data is acquired: "Through our monitoring, measured at the server"
  • Data-access latency: "Time to last byte"

To save effort, build a set of reusable SLI templates for each common metric; these also make it simpler for everyone to understand what a specific SLI means.
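
As a sketch of what such a template might look like (the structure and field names here are illustrative assumptions, not a standard format):

```python
# Illustrative template: defaults for the details listed above.
LATENCY_SLI_TEMPLATE = {
    "aggregation_interval": "averaged over 1 minute",
    "aggregation_region": "all the tasks in a cluster",
    "measurement_frequency": "every 10 seconds",
    "included_requests": "HTTP GETs from black-box monitoring jobs",
    "data_acquisition": "through our monitoring, measured at the server",
    "latency_definition": "time to last byte",
}

# A specific SLI then only states what deviates from the template.
checkout_latency_sli = {
    **LATENCY_SLI_TEMPLATE,
    "included_requests": "HTTP POSTs to /checkout",  # hypothetical endpoint
}
```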

Golden Signals

The four golden signals of monitoring are latency, traffic, errors, and saturation. If you can only measure four metrics of your user-facing system, focus on these four. If you measure all four golden signals and page a human when one signal is problematic (or, in the case of saturation, nearly problematic), your service will be at least decently covered by monitoring.

Latency

The time it takes to service a request. It’s important to distinguish between the latency of successful requests and the latency of failed requests. For example, an HTTP 500 error triggered due to loss of connection to a database or other critical backend might be served very quickly; however, as an HTTP 500 error indicates a failed request, factoring 500s into your overall latency might result in misleading calculations. On the other hand, a slow error is even worse than a fast error! Therefore, it’s important to track error latency, as opposed to just filtering out errors.
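
A minimal sketch of tracking error latency separately from success latency, assuming a simple (status, duration) log format:

```python
from statistics import median

# Hypothetical (status, duration_ms) samples, including two fast 500s.
samples = [(200, 120), (200, 95), (500, 4), (500, 6), (200, 110)]

ok_ms = [ms for status, ms in samples if status < 500]
err_ms = [ms for status, ms in samples if status >= 500]

# Blending the populations lets the fast errors drag the number down.
blended_p50 = median(ms for _, ms in samples)

print(f"blended p50={blended_p50}ms  "   # misleadingly low
      f"ok p50={median(ok_ms)}ms  "      # what successful users see
      f"error p50={median(err_ms)}ms")   # watch for slow errors too
```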

Traffic

A measure of how much demand is being placed on your system, measured in a high-level system-specific metric. For a web service, this measurement is usually HTTP requests per second, perhaps broken out by the nature of the requests (e.g., static versus dynamic content). For an audio streaming system, this measurement might focus on network I/O rate or concurrent sessions. For a key-value storage system, this measurement might be transactions and retrievals per second.
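
As a sketch, one simple way to measure requests per second is a trailing window of request timestamps. The class and its interface here are illustrative, not from the source.

```python
import time
from collections import deque

class TrafficMeter:
    """Counts requests in a trailing window to report requests/second."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.timestamps = deque()

    def record_request(self):
        self.timestamps.append(time.monotonic())

    def qps(self):
        # Evict requests older than the window, then average over it.
        cutoff = time.monotonic() - self.window
        while self.timestamps and self.timestamps[0] < cutoff:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window
```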

Errors

The rate of requests that fail, either explicitly (e.g., HTTP 500s), implicitly (for example, an HTTP 200 success response, but coupled with the wrong content), or by policy (for example, "If you committed to one-second response times, any request over one second is an error"). Where protocol response codes are insufficient to express all failure conditions, secondary (internal) protocols may be necessary to track partial failure modes. Monitoring these cases can be drastically different: catching HTTP 500s at your load balancer can do a decent job of catching all completely failed requests, while only end-to-end system tests can detect that you’re serving the wrong content.
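
A sketch of classifying requests under all three error definitions above. The one-second budget is the policy example from the text; the function and field names are assumptions.

```python
POLICY_BUDGET_MS = 1000  # "any request over one second is an error"

def is_error(status, duration_ms, content_ok=True):
    if status >= 500:                  # explicit failure (HTTP 500s)
        return True
    if not content_ok:                 # implicit failure: 200 but wrong
        return True                    # content (needs end-to-end checks)
    return duration_ms > POLICY_BUDGET_MS  # failure by policy: too slow

requests = [(200, 40, True), (500, 5, True),
            (200, 30, False), (200, 1800, True)]
error_rate = sum(is_error(*r) for r in requests) / len(requests)
print(f"error rate: {error_rate:.0%}")  # 3 of 4 requests fail -> 75%
```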

Saturation

How "full" your service is. A measure of your system fraction, emphasizing the resources that are most constrained (e.g., in a memory-constrained system, show memory; in an I/O-constrained system, show I/O). Note that many systems degrade in performance before they achieve 100% utilization, so having a utilization target is essential.

In complex systems, saturation can be supplemented with higher-level load measurement: can your service properly handle double the traffic, handle only 10% more traffic, or handle even less traffic than it currently receives? For very simple services that have no parameters that alter the complexity of the request (e.g., "Give me a nonce" or "I need a globally unique monotonic integer") that rarely change configuration, a static value from a load test might be adequate. As discussed in the previous paragraph, however, most services need to use indirect signals like CPU utilization or network bandwidth that have a known upper bound. Latency increases are often a leading indicator of saturation. Measuring your 99th percentile response time over some small window (e.g., one minute) can give a very early signal of saturation.

Finally, saturation is also concerned with predictions of impending saturation, such as "It looks like your database will fill its hard drive in 4 hours."
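
A sketch of that kind of prediction: fit a least-squares line to recent disk-usage samples and extrapolate to when the trend crosses capacity. The sample data and numbers are made up for illustration.

```python
def hours_until_full(samples, capacity_bytes):
    """Least-squares line through (t_hours, used_bytes) samples,
    extrapolated to when usage crosses capacity."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_u = sum(u for _, u in samples) / n
    slope = (sum((t - mean_t) * (u - mean_u) for t, u in samples)
             / sum((t - mean_t) ** 2 for t, _ in samples))
    if slope <= 0:
        return None  # usage flat or shrinking: no fill time predicted
    _, last_u = samples[-1]
    return (capacity_bytes - last_u) / slope

# Made-up samples: usage grows ~25 GB/hour, ~100 GB left on a 1 TB disk.
samples = [(0, 850e9), (1, 875e9), (2, 900e9)]
print(f"disk full in ~{hours_until_full(samples, 1e12):.1f} hours")  # ~4.0
```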

Source: Site Reliability Engineering (the Google SRE book), "Service Level Objectives" and "Monitoring Distributed Systems" chapters.
