PagerDuty Blog

Outage Post-Mortem – March 25, 2014

On March 25th, PagerDuty suffered intermittent service degradation over a three-hour span, which affected our customers in a variety of ways. During the service degradation, PagerDuty was unable to accept 2.5% of attempts to send events to our integrations endpoints, and 11.0% of notifications experienced delayed delivery: instead of arriving within five minutes of the triggering event, they were sent up to twenty-five minutes after it.

We take reliability seriously; an outage of this magnitude, along with the impact it has on our customers, is unacceptable. We apologize to all customers who were affected and are working to ensure the underlying causes never affect PagerDuty customers again.

What Happened?

Much of the PagerDuty notifications pipeline is built around Cassandra, a distributed NoSQL datastore. We use Cassandra for its excellent durability and availability characteristics, and it works extremely well for us. As we have moved more of the notifications pipeline over to Cassandra, the workload applied to our cluster has increased, including both steady-state load and a variety of bursty, batch-style scheduled jobs.

On March 25, the Cassandra cluster was subjected to a higher-than-typical workload from several separate back-end services, but was still within capacity. However, some scheduled jobs then applied significant bursty workloads against the Cassandra cluster, which put multiple cluster nodes into an overload state. The overloaded nodes reacted by cancelling queued requests, which caused internal clients to experience processing failures.

Request failures are not unexpected; many of our internal clients have retry-upon-fail logic to power through transient failures. However, these retries were counterproductive in the face of Cassandra overload: many of the cancelled requests were immediately retried, which extended the overload period well beyond the initial burst, until the retries finally subsided.
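To illustrate the failure mode (a minimal, hypothetical sketch in Python; the function and parameter names are ours, not PagerDuty's internal code), a naive retry loop like the one below re-submits every failed request immediately, so an overloaded cluster sees a multiple of the original request volume at exactly the moment it has the least capacity to spare:

```python
def query_with_naive_retries(session, statement, max_attempts=5):
    """Hypothetical client-side retry loop (illustrative only).

    Every failure is retried immediately, so during a sustained overload
    each cancelled request is re-offered to the cluster right away,
    amplifying the load instead of letting it drain.
    """
    for attempt in range(max_attempts):
        try:
            return session.execute(statement)
        except Exception:
            # No delay between attempts: the cluster can see up to
            # max_attempts times the original request volume.
            continue
    raise RuntimeError("query failed after %d attempts" % max_attempts)
```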

In summary, significant fluctuations in our Cassandra workload surpassed the cluster’s processing capacity, and failures occurred as a result. In addition, client retry logic resulted in the workload taking much longer to dissipate, extending the service interruption period.

What We Are Doing About This

Even with excellent monitoring and alerting in place, bursty workloads are dangerous: by the time their impact can be measured, the damage may already be done. Instead, an overall workload that has low variability should be the goal. With that in mind, we have re-balanced our scheduled jobs so that they are temporally distributed to minimize their overlap. In addition, we are flattening the intensity of our scheduled jobs so that each has a much more consistent and predictable load, albeit applied over a longer period of time.
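As a rough sketch of what flattening a job looks like (hypothetical names; our actual scheduler is internal), a batch job can be paced so that it applies a steady, predictable load over its window instead of a single burst:

```python
import time

def run_flattened_batch(items, process, target_rate_per_sec=50):
    """Hypothetical batch runner that spreads work out over time.

    Instead of issuing all requests as fast as possible, it paces them
    so the job presents a consistent load to the datastore.
    """
    interval = 1.0 / target_rate_per_sec
    for item in items:
        started = time.monotonic()
        process(item)
        # Sleep off any remaining time in this item's slot so the
        # overall rate stays close to target_rate_per_sec.
        elapsed = time.monotonic() - started
        if elapsed < interval:
            time.sleep(interval - elapsed)
```

Combined with staggered start times, this keeps the sum of all scheduled jobs close to a flat line rather than a series of spikes.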

Also, although our datasets are logically separated already, having a single shared Cassandra cluster for the entire notifications pipeline is still problematic. In addition to the combined workload from multiple systems being hard to model and accurately predict, it also means that when overload occurs it can impact multiple systems. To reduce this overload ripple effect, we will be isolating related systems to use separate Cassandra clusters, eliminating the ability for systems to interfere with each other via Cassandra.
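In practice, this kind of isolation can be as simple as pointing each system at its own set of contact points. A minimal sketch using the DataStax Python driver (the hostnames and keyspaces are placeholders, not our real topology):

```python
from cassandra.cluster import Cluster

# Each pipeline component connects to its own dedicated cluster, so an
# overload in one system cannot queue requests behind another system's work.
notifications_cluster = Cluster(contact_points=["notif-cass-1", "notif-cass-2"])
scheduling_cluster = Cluster(contact_points=["sched-cass-1", "sched-cass-2"])

notifications_session = notifications_cluster.connect("notifications")
scheduling_session = scheduling_cluster.connect("scheduling")
```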

Our failure detection and retry policies also need tuning, so that they better account for overload scenarios and allow load to dissipate through back-off.
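One common way to get that behaviour (a sketch only, not our production retry policy) is exponential back-off with jitter, which spreads retries out over time and gives an overloaded cluster room to recover:

```python
import random
import time

def execute_with_backoff(session, statement, max_attempts=5,
                         base_delay=0.1, max_delay=5.0):
    """Retry with exponential back-off and full jitter (illustrative only)."""
    for attempt in range(max_attempts):
        try:
            return session.execute(statement)
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Back off for a random amount of time up to an exponentially
            # growing cap, so retries from many clients do not arrive in sync.
            cap = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, cap))
```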

Finally, we need to extend our failure testing regime to include overload scenarios, both within our Failure Friday sessions and beyond.

We take every customer-affecting outage seriously, and will be taking the above steps (and several more) to make PagerDuty even more reliable. If you have any questions or comments, please let us know.