Posts tagged with Chaos Engineering

When we build applications, one of our aims should be resilience: a good application keeps operating in the face of many kinds of failure. The real test does not begin until the application is deployed to production, where we can predict neither the failures it will encounter nor their consequences. An alternative approach is to change our perspective on errors in software systems: instead of trying to prevent every fault, we trigger faults under controlled conditions, learn from the application's behavior, and then improve its resilience. To this end, we design a chaos agent whose first version focuses on the verification and analysis of error-handling in the JVM.
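
As a rough illustration of that idea, here is a minimal Java sketch of injecting an exception inside a try block so we can observe whether the surrounding handler keeps the application operating. The names (PerturbationDemo, perturb, readConfig) are illustrative placeholders, not the agent's actual API, and a real agent would inject the fault via bytecode instrumentation rather than source edits.

```java
import java.util.Random;

// Minimal sketch: a perturbation point that occasionally throws an exception
// inside a try block so the behavior of the surrounding catch can be observed.
// All names here are illustrative, not the chaos agent's real API.
public class PerturbationDemo {

    static final Random RANDOM = new Random();

    // Throws with the given probability. A real agent would insert this call
    // via bytecode instrumentation instead of editing the source.
    static void perturb(double probability) {
        if (RANDOM.nextDouble() < probability) {
            throw new RuntimeException("injected failure");
        }
    }

    static String readConfig() {
        try {
            perturb(0.5);                 // injected fault
            return "value-from-config";   // normal path
        } catch (RuntimeException e) {
            // Resilient handler: falls back to a default instead of crashing.
            return "default-value";
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            System.out.println(readConfig());
        }
    }
}
```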

About Chaos Engineering and Antifragile Software

If you are not familiar with chaos engineering, we provide introductory materials about the technique at the end of this article. Chaos engineering is the practice of experimenting on a distributed system in order to build confidence in the system's capability to withstand unexpected conditions in production. Antifragility, in turn, is the opposite of fragility. Traditional means of combating fragility include fault prevention, fault tolerance, fault removal, and fault forecasting. However, these techniques alone are insufficient, so we propose another perspective on system errors: if we build mechanisms that let the system experience errors in a controlled environment and learn from those failures, we can build confidence in the system's resilience. The goal of chaos engineering and antifragile design is to perform these perturbations and learn from the experience.

A Chaos Engineering System for Live Analysis and Falsification of Exception-handling in the JVM

- Read the full article -

Part I: Introduction

Chaos Engineering is the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.
-- Principles of Chaos

Using Chaos Engineering may be as simple as manually running kill -9 on a box inside your staging environment to simulate the failure of a service, or as sophisticated as automatically designing and carrying out experiments in a production environment against a small but statistically significant fraction of live traffic.
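
As a hedged sketch of the "small fraction of live traffic" idea, the Java snippet below wraps a downstream call and substitutes a fallback response for a configurable fraction of calls. FractionalFaultInjector and its methods are made-up names for illustration only, not Netflix's FIT or Zuul API.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Supplier;

// Illustrative sketch: inject a simulated downstream failure into a small
// fraction of calls so the caller's fallback path gets exercised in practice.
public class FractionalFaultInjector {

    private final double injectionRate; // e.g. 0.001 = 0.1% of requests

    public FractionalFaultInjector(double injectionRate) {
        this.injectionRate = injectionRate;
    }

    public <T> T call(Supplier<T> downstream, T fallback) {
        if (ThreadLocalRandom.current().nextDouble() < injectionRate) {
            // Simulated downstream failure: return the fallback instead.
            return fallback;
        }
        return downstream.get();
    }

    public static void main(String[] args) {
        FractionalFaultInjector injector = new FractionalFaultInjector(0.2);
        for (int i = 0; i < 10; i++) {
            System.out.println(injector.call(() -> "real response", "fallback response"));
        }
    }
}
```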

The History of Chaos Engineering at Netflix: started in 2008

  • Chaos Monkey: got the ball rolling, gaining notoriety for turning off services in the production environment
  • Chaos Kong: transferred those benefits from the small scale to the very large
  • Failure Injection Testing (FIT): the foundation for tackling the space in between

- Read the full article -

Short Introduction to This Paper

This paper describes how to adapt and implement a research prototype called lineage-driven fault injection (LDFI; for a review of the original paper, see here) to automate failure testing at Netflix. Along the way, the authors describe the challenges that arose in adapting the LDFI model to the complex and dynamic realities of the Netflix architecture. They show how they implemented the adapted algorithm as a service atop the existing tracing and fault injection infrastructure, and present preliminary results.

Highlights of This Paper

  • The way they adapted LDFI to automate failure testing at Netflix is worth studying: defining request classes, learning mappings, and building replay mechanisms.

Key Information

  • More explanation of LDFI: it begins with a correct outcome and asks why the system produced it. This recursive process of asking why questions yields a lineage graph that characterizes all of the computations and data that contributed to the outcome. In doing so, it reveals the system's implicit redundancy by capturing the various alternative computations that could produce the good result (a toy sketch of this idea follows this list).
  • About solutions to the SAT problem: a solution does not necessarily indicate a bug, but rather a hypothesis that must be tested via fault injection.
  • Implementation

    • Training: the service collects production traces from the tracing infrastructure and uses them to build a classifier that determines, given a user request, to which request class it belongs. They found that the most predictive features include the service URI, the device type, and a variety of query string parameters, including parameters passed to the Falcor data platform.
    • Model enrichment: the service also uses production traces generated by experiments (fault injection exercises that test prior hypotheses produced by LDFI) to update its internal model of alternatives. Intuitively, if an experiment failed to produce a user-visible error, then the call graph generated by that execution is evidence of an alternative computation not yet represented in the model, so it must be added. Doing so effectively prunes the space of future hypotheses.
    • Experiments: finally, the service occasionally "installs" a new set of experiments on Zuul. This requires providing Zuul with an up-to-date classifier and the current set of failure hypotheses for each active request class. Zuul then (for a small fraction of user requests) consults the classifier and decorates user requests with the appropriate fault metadata (a rough sketch of this step also follows this list).
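
To make the lineage idea in the first bullet more concrete, here is a toy Java sketch, a deliberate simplification rather than the production algorithm: each alternative is modeled as a set of services whose success yields the good outcome, and a failure hypothesis is a smallest set of services that intersects every known alternative. The brute-force subset search below stands in for the SAT solving that LDFI actually relies on.

```java
import java.util.*;

// Toy illustration of the LDFI idea. Each "alternative" is a set of services
// whose success can produce the good outcome; a failure hypothesis is a set
// of services that intersects every known alternative. We brute-force the
// smallest such set instead of calling a SAT solver.
public class LdfiToySearch {

    static Set<String> smallestHypothesis(List<Set<String>> alternatives, List<String> services) {
        int n = services.size();
        Set<String> best = null;
        for (int mask = 1; mask < (1 << n); mask++) {
            Set<String> candidate = new HashSet<>();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) candidate.add(services.get(i));
            }
            // The candidate defeats the good outcome only if it breaks every alternative.
            boolean breaksAll = alternatives.stream()
                    .allMatch(alt -> alt.stream().anyMatch(candidate::contains));
            if (breaksAll && (best == null || candidate.size() < best.size())) {
                best = candidate;
            }
        }
        return best; // still only a hypothesis: it must be tested via fault injection
    }

    public static void main(String[] args) {
        // Two redundant ways to serve a request: primary cache or a fallback service.
        List<Set<String>> alternatives = List.of(
                Set.of("edge", "cache"),
                Set.of("edge", "fallbackService"));
        List<String> services = List.of("edge", "cache", "fallbackService");
        System.out.println(smallestHypothesis(alternatives, services));
        // Prints [edge]: failing the shared dependency would defeat both alternatives.
    }
}
```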
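
As a rough sketch of the experiment-installation step, the snippet below shows how a proxy-layer filter might consult a classifier and attach fault metadata to a sampled fraction of requests. RequestClassifier, decorate, and the metadata strings are hypothetical names chosen for illustration; this is not Zuul's real filter API.

```java
import java.util.Map;
import java.util.Random;

// Illustrative sketch of request decoration at the proxy layer: classify the
// request, then attach the current failure hypothesis for its class to a
// small sample of traffic. Names and types are hypothetical.
public class FaultDecorationSketch {

    interface RequestClassifier {
        String classify(Map<String, String> requestFeatures); // e.g. URI, device type
    }

    private final RequestClassifier classifier;
    private final Map<String, String> hypothesesByClass; // class -> fault metadata
    private final double sampleRate;                     // fraction of requests decorated
    private final Random random = new Random();

    FaultDecorationSketch(RequestClassifier classifier,
                          Map<String, String> hypothesesByClass,
                          double sampleRate) {
        this.classifier = classifier;
        this.hypothesesByClass = hypothesesByClass;
        this.sampleRate = sampleRate;
    }

    // Returns fault metadata to attach to the request, or null to leave it untouched.
    String decorate(Map<String, String> requestFeatures) {
        if (random.nextDouble() >= sampleRate) {
            return null; // most traffic passes through unmodified
        }
        String requestClass = classifier.classify(requestFeatures);
        return hypothesesByClass.get(requestClass);
    }

    public static void main(String[] args) {
        FaultDecorationSketch sketch = new FaultDecorationSketch(
                features -> features.getOrDefault("uri", "unknown"),
                Map.of("/api/playback", "inject-failure:cache"),
                0.5);
        String metadata = sketch.decorate(Map.of("uri", "/api/playback", "device", "tv"));
        System.out.println(metadata == null ? "no fault injected" : metadata);
    }
}
```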

Relevant Future Work

  • They used a single-label classifier for the first release of the LDFI service, but are continuing to investigate the multi-label formulation.

URL

Automating Failure Testing Research at Internet Scale