1 Testing in Production
Shaun Abram
A Draft for Silicon Valley Code Camp, October 2018

2 Joke… or Good?
Non-prod = pale imitation, like mocks, or "it works on my machine". Prod is different; the 4th trimester… "Testing in production".
You may have seen this meme before: the Dos Equis guy saying "I don't always test, but when I do, I test in production." "Testing in production" has been kind of a joke: what you're really saying is that you don't test anywhere, and instead you're just winging it, deploying to production, crossing your fingers, and hoping it all works. But then I began to look at it differently. The Dos Equis guy usually says "I don't always drink beer, but when I do, I drink Dos Equis," meaning Dos Equis is the best beer to drink. So the implication here is not that testing in production is a joke, but that production is actually the BEST place to test. And I'm increasingly believing that to be the case. Or, at least, it is an environment we shouldn't be dismissing for testing. After all, production is the only place your software has an impact on your customers. But there has been a status quo of production being sacrosanct. Instead of testing there, it is common to keep a non-prod environment, such as staging, as identical to production as possible, and test there. Such environments are usually a pale imitation of production, however. Testing in staging is kind of like testing with mocks: an imitation, but not the real thing. Saying "works in staging" is only one step better than "works on my machine." Production is a different beast! I've heard releasing software to production compared to a baby's 4th trimester: when software leaves its artificial environments and slams into the real world. But what makes the real world of production so special?

3 How is Production different?
Testing in Production
Serious question: in what ways is production different from other environments? Hardware and cluster size, data, configuration, traffic, monitoring. Some things we can only test in production. As our architectures become more complicated (particularly with microservices), and as we merge from separate dev and QA teams into one engineering organization, we need to consider all the options that allow us to test and deliver working software to our customers, including testing in production.

4 Treat production validation with the respect it deserves.
Testing in Production IS NOT a replacement for non-prod testing.
So should we not test in non-prod first? No! Testing in production is by no means a substitute for pre-production testing. Most production testing is really validation only, although there is at least one exception (A/B testing). Respect production: beware of unwanted side effects. Stateless services are good candidates. Think safe methods, e.g. GET and HEAD. Consider testing using expected failures, e.g. a PUT that results in a 400 error still tells you something. Or at least be able to tell the difference between test data and "real" prod data. A read-only check along these lines is sketched below.
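For example, a minimal sketch of such a safe, read-only production check in Python; the endpoints and the X-Test-Traffic header are hypothetical stand-ins for whatever your own services actually expose:

```python
# Minimal sketch of a safe, read-only production validation check, using only the
# standard library. The endpoints and the X-Test-Traffic header are hypothetical.
import time
import urllib.request

SAFE_CHECKS = [
    ("HEAD", "https://example.com/healthz"),     # hypothetical endpoint
    ("GET",  "https://example.com/api/status"),  # hypothetical endpoint
]

def validate_production(max_latency_s=1.0):
    for method, url in SAFE_CHECKS:
        req = urllib.request.Request(url, method=method)
        req.add_header("X-Test-Traffic", "true")  # tell test calls apart from real traffic
        start = time.monotonic()
        with urllib.request.urlopen(req, timeout=5) as resp:
            elapsed = time.monotonic() - start
            # Safe methods only: we read state, we never mutate it.
            assert resp.status == 200, f"{method} {url} returned {resp.status}"
            assert elapsed <= max_latency_s, f"{method} {url} took {elapsed:.2f}s"
        print(f"OK: {method} {url} ({elapsed:.3f}s)")

if __name__ == "__main__":
    validate_production()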

5 Testing at Release Chaos Engineering Observability
Testing in Production
Today we're going to cover some of the different ways we can test in production. We'll start with Observability, the foundation for any testing in production: knowing what the heck your app is doing, going beyond just logs and alerting. Then testing around deployment and release time. Finally, Chaos Engineering: perhaps the most advanced form of production testing, though I would argue it's actually not that advanced. I'll talk about what it is, some basic rules for doing it, and how we've been starting to use it.

6 Observability Testing in Production
Observability: the ability to ask new questions of your system without deploying new code.
Being able to answer questions you have never thought of before. You can think of it as the next step beyond just monitoring and alerting. Systems have become more distributed and, in the case of containerization, more ephemeral. It is increasingly difficult to know what our software is doing, and Observability means bringing better visibility into systems. To have better visibility, we need to acknowledge that…

7 We need Observability in our systems
Everything is sometimes broken. Something is always broken. If nothing seems broken... …your monitoring is broken. It's impossible to predict the myriad states of partial failure we'll see.
Everything is sometimes broken; something is always broken: no complex system is ever fully healthy. If nothing seems broken, your monitoring is broken. Distributed systems are unpredictable; in particular, it's impossible to predict all the ways a system might fail. Failure needs to be embraced at every phase, from design to implementation, testing, deployment, and operation. Ease of debugging is of high importance.

8 How do we observe our apps?
Observability
Logging: structured logging (plain text -> Splunk-friendly -> JSON). The event log can be a great source of logs for debugging too. Consider sampling rather than aggregation.
Metrics: time-series metrics, such as system stats like CPU and memory usage, and business stats like number of logins.
Tracing: distributed traceability using a correlation ID library; Zipkin, etc.
Alerting: useful for proactively learning about typically predictable issues.
Tools: e.g. Splunk, NR, OverOps, Wavefront, EPX, TDA (Thread Dump Analyzer), HoneyComb; stack traces and exception trackers. A sketch of structured logging with a correlation ID follows.
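For example, a minimal sketch of structured (JSON) logging with a correlation ID, using only the Python standard library; the field names and the "checkout-service" logger name are illustrative, not a fixed schema:

```python
# Minimal structured-logging sketch: every log line is a JSON object that carries
# a correlation ID, so a single request can be followed across services.
import json
import logging
import uuid

class JsonFormatter(logging.Formatter):
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # Extra fields (e.g. a correlation ID) arrive via the `extra` kwarg below.
            "correlation_id": getattr(record, "correlation_id", None),
        }
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("checkout-service")  # hypothetical service name
log.addHandler(handler)
log.setLevel(logging.INFO)

# One correlation ID per incoming request lets you trace that request end to end.
correlation_id = str(uuid.uuid4())
log.info("login succeeded", extra={"correlation_id": correlation_id})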

9 Observability
Testing in Production
OK, so that was Observability: the ability to answer questions about our application's behavior in production. Questions we may have never even thought of before.

10 Testing at Release Chaos Engineering Observability
Testing in Production
And with Observability in place, what types of production testing can we do? Let's move on to…

11 Testing at Release
Testing in Production: testing at release time.
Let’s start by defining some terms: Deployment vs release

12 Testing at Release
Deploy: Config Tests, Smoke Tests, Shadowing, Load Tests
Release: Canary release, Internal release
Post-Release: Feature Flags, A/B Testing, Chaos Engineering…
Shadowing is the technique by which production traffic to a given service is captured and replayed against the newly deployed version of that service (see the sketch below). Shadowing works best for testing idempotent requests, or for testing stateless services with any stateful backends stubbed out. To repeat the earlier point: these types of testing, and production testing in general, are really about validation only. Feature flagging is also known as Dark Launch.
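For example, a rough sketch of shadowing in Python; the two base URLs are hypothetical, and only idempotent (read-only) requests should be shadowed unless stateful backends are stubbed out:

```python
# Request shadowing sketch: replay a copy of live traffic against the newly deployed
# version and compare responses, without ever affecting the caller.
import urllib.request

CURRENT = "https://api.example.com"       # hypothetical: version serving real users
CANDIDATE = "https://api-v2.example.com"  # hypothetical: newly deployed version under test

def fetch(base, path):
    with urllib.request.urlopen(base + path, timeout=5) as resp:
        return resp.status, resp.read()

def handle_request(path):
    status, body = fetch(CURRENT, path)  # the real response, returned to the user
    try:
        shadow_status, shadow_body = fetch(CANDIDATE, path)  # shadow copy, result discarded
        if (shadow_status, shadow_body) != (status, body):
            print(f"MISMATCH on {path}: {status} vs {shadow_status}")
    except Exception as exc:
        print(f"Shadow call failed for {path}: {exc}")  # shadow failures never reach the user
    return status, body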

13 Testing in Production Testing at Release

14 Testing at Release Chaos Engineering Observability
Testing in Production Testing at Release Chaos Engineering Observability

15 Testing in Production Chaos Engineering

16 Resilience Engineering
Testing in Production
Chaos Engineering, aka Resilience Engineering: carefully planned experiments designed to reveal weaknesses in our systems.

17 Game Days
An exercise where we place systems under stress to learn and improve resilience. (Even just getting the team together to discuss resilience can be worthwhile.) If Chaos Engineering is the theory, Game Days are the practice, the execution; Game Days are where you start with Chaos Engineering. "Systems" can be technology, people, or processes. They are like fire drills: an opportunity to practice a potentially dangerous scenario in a safer environment.

18 Chaos Engineering – a step by step guide
Start small! Walk before you can run. Hypothesis (steady state), Minimize blast radius, Run, Analyze, Increase, Repeat, Automate.
Don't get carried away! We're easing into testing in prod; let's not ruin it for everyone.
1. Pick a hypothesis. To start with, what are we trying to test? Typically in Chaos Engineering experiments, the hypothesis is that if I do X (take out a server, kill a region), everything should be OK. But we need to be specific about how to measure that things are OK. If our hypothesis is "if we fail the primary DB, everything should be OK," then we need to define what OK is. And a big part of defining OK is defining "steady state": essentially the key metrics for you to monitor as part of your test. It could be things like: loan applications remain constant, or response times remain in an acceptable range. If you don't define steady state, how do you know whether your test is working or not? How do you know if you are breaking things?
2. Minimize the blast radius. With a hypothesis in mind, and a way to test it, first think about the blast radius: how much damage can be done by the experiment. If you take out a server and everything is in fact NOT OK, how bad might it be? Try to ensure that you limit the possible damage. For example, suppose your hypothesis is that when Foo service is running in a pool of 2 servers and one of those servers dies, CPU and memory utilization should increase on the remaining server, but response times remain unaffected. That is a fine thing to test. But if you have 10 services depending on that service (even in non-prod), and you're wrong that response times will be unaffected, you may have caused 10 other services to have problems. A way to limit the blast radius in that test would be to use a pool of Foo service that only one other service relies on, ideally a service that you also control and that is closely monitored as part of the test. Another way to minimize possible damage is to make sure that you have the equivalent of a big red Stop Test button: if your metrics aren't looking good, have the ability to abort the test immediately. Remember: our goal here is to build confidence in the resilience of the system, and we do that with controlled, contained experiments that grow in scope incrementally.
3. Run the experiment. Figure out the best way to test your hypothesis. If you plan to take out a server, how do you do it? ssh in and kill -9? An orderly shutdown? Have Ops do it for you? Do you simulate failure by using bogus IP addresses, or simply remove a server from a VIP pool? And again, stop if metrics or alerts dictate. (A minimal version of this loop is sketched below.)
4. Analyze the results. Were your expectations correct? Did your system handle things correctly? Did you spot issues with your alerts or metrics that should be improved before any future tests?
5. Increase the scope. The idea is to start small: one service, in non-prod, and gradually expand to prod. And the goal should be prod. Prod is where it's at!
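For example, a minimal sketch of such an experiment loop in Python; the metric and infrastructure functions are hypothetical stubs you would replace with hooks into your own tooling:

```python
# Minimal chaos-experiment loop: check steady state, inject the failure, and abort
# (the "big red Stop Test button") as soon as key metrics leave their acceptable range.
import time

def get_error_rate():       # stub: replace with a query against your metrics system
    return 0.002

def get_p99_latency_ms():   # stub: replace with a query against your metrics system
    return 120

def kill_one_server():      # stub: e.g. remove one instance of Foo service from its pool
    print("killing one server in the test pool")

def restore_server():       # stub: bring the instance back / re-add it to the pool
    print("restoring the server")

def steady_state_ok():
    # Steady state: the key metrics that tell us everything is still "OK".
    return get_error_rate() < 0.01 and get_p99_latency_ms() < 500

def run_experiment(duration_s=300):
    if not steady_state_ok():
        print("Not in steady state; aborting before injecting any failure.")
        return
    kill_one_server()                    # the failure we hypothesize is survivable
    try:
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            if not steady_state_ok():    # hypothesis falsified: stop immediately
                print("Steady state violated; aborting the experiment.")
                return
            time.sleep(10)
        print("Hypothesis held: steady state maintained for the whole experiment.")
    finally:
        restore_server()                 # always shrink the blast radius back to zero

if __name__ == "__main__":
    run_experiment(duration_s=30)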

19 Testing in Production Chaos Engineering

20 Testing at Release Chaos Engineering Observability
Testing in Production
That brings us to the end of the presentation. We have talked about testing in production: no longer a joke, but instead increasingly viewed as a best practice. It is not a replacement for the essential and high-value non-prod testing we do, but an addition to it.
Observability: testing in production, and indeed in all environments, requires being able to understand what our applications do. Conventional logs, monitoring, and alerting are all good, but Observability is about more than that. It's about the ability to answer complex questions about our apps at runtime, questions we may not have even thought of before, like: why is my app slow? Is it me or a downstream service? Where is all my memory being used? We can use metrics, tracing, and any tools at our disposal so that we can see what's going on when things go wrong, or better still, proactively spot problems in advance.
And with Observability in place, we can actually start to test in production! We ran through different types of testing, including after deployment (config, smoke, load, shadow), at release time (canary and internal release), and after release (feature flags and A/B testing).
Finally, even when everything is up and running in prod, customers are using it, and all looks good, there is still more testing we can do: Chaos Engineering. Not introducing chaos, but exposing the chaos already present! Carefully planned experiments designed to reveal weaknesses in our systems.

21 Reading material
Chaos Engineering (free eBook)
Distributed Systems Observability (free eBook)

22 Reading material
shaunabram.com
Principles of Chaos Engineering (principlesofchaos.org)
How to run a Game Day
Testing in production
Monitoring in the time of Cloud Native
Deploy != Release

23 Testing in production – The Industry Experts
Nora Jones Charity Majors Cindy Sridharan Tammy Butow

24 Questions?

25 How should we embrace failure?
If things are always broken, how are we functioning? Avoid failures when you can. Gracefully degrade if you can't. Tolerate the unavoidable. Overall, try to build systems to be debuggable (or "observable").
Avoiding failures = retrying. Graceful degradation = timeouts, circuit breaking (e.g. Hystrix), rate limiting, and load shedding. Tolerating failure can include mechanisms such as region failover, eventual consistency (it's OK if we can't do it now, we'll do it eventually), and multi-tiered caching (e.g. if you're relying on a data store and it's down, can you write to a cache as an interim alternative?). A small circuit-breaker sketch follows.
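For example, a minimal circuit-breaker sketch in Python (the pattern behind libraries such as Hystrix); fetch_profile and cached_profile in the usage note are hypothetical:

```python
# Minimal circuit-breaker sketch: after repeated failures, stop calling the struggling
# dependency for a cooldown period and serve a degraded fallback (e.g. a cached value).
import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after_s=30):
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None

    def call(self, fn, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after_s:
                return fallback()                  # circuit open: degrade gracefully
            self.opened_at = None                  # cooldown over: allow a trial call
            self.failures = 0
        try:
            result = fn()
            self.failures = 0                      # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            return fallback()

# Usage sketch: fetch_profile() would call a downstream service; cached_profile()
# returns a stale-but-usable value we can tolerate while that service recovers.
# breaker = CircuitBreaker()
# profile = breaker.call(fetch_profile, cached_profile)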

