Practices for Deciding What to Test and What to Automate
We test in order to get certain questions answered. We automate tests when we want to get those questions answered with a minimum of human thought. The idea is that we have the human figure out what questions to ask and how to answer them and let the machine do the grunt work.
Choosing which tests to automate is always the result of a cost-benefit analysis: weighing the benefits against the costs. In most projects you can't test and automate everything, so you want the tests that answer the questions you most want answered. We want to automate the tests that will give us the biggest bang for the buck.
I typically document tests by the questions they are designed to answer. A set of tests may be designed to answer whether or not the current build is worth subjecting to further tests, i.e., whether the current build is minimally acceptable. I have some sample questions below.
As you look at the questions below, in each case ask yourself two other questions: "Do we care?" and "How much do we care?" Some of these questions will not be important because of the specific project you are working on, and in almost all cases there is a priority to which questions you want answered first.
Typical Questions
- Do various classes and functions in the code work correctly in isolation? Tests designed to answer this question typically fall under the label of unit tests.
- How does a given protocol handle bad state changes? Tests designed to answer this question are typically referred to as short tour tests.
- Does the SUT (System Under Test) deploy correctly? Tests designed to answer this question typically fall under the label of smoke tests.
- Does the same code produce the same result when invoked from different platforms?
- Does the current build meet some minimal criteria for being deployed and tested further? Tests designed to answer this question typically fall under the label of smoke tests.
- How does the SUT perform under specific loads? Tests that answer these questions are typically referred to as performance tests.
- Does the SUT pass the same tests it used to pass? Tests designed to answer this question are typically referred to as regression tests.
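A minimal sketch of documenting a test by the question it answers, using pytest-style conventions. The function `parse_price` is hypothetical, used only for illustration:

```python
# Question this test answers: "Does parse_price work correctly in isolation?"
# parse_price is a hypothetical function used only for illustration.

def parse_price(text: str) -> float:
    """Convert a price string like '$1,234.50' to a float."""
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price_strips_symbol_and_commas():
    # The test's name and body document the question it is designed to answer.
    assert parse_price("$1,234.50") == 1234.50
```

A test runner such as pytest would discover and run `test_parse_price_strips_symbol_and_commas` automatically; a failure points directly at the question that no longer has a "yes" answer.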
Once you have decided what questions are important and how important they are, you need to design an automated testing regimen that answers those questions in that order of priority.
I want to emphasize that the questions to be answered and their priorities are key factors in determining the effectiveness and efficiency of the testing process. If you are creating unit tests, integration tests, and performance tests and they don't answer the questions you need to have answered, you are now maintaining unit tests, integration tests, and performance tests that do not contribute to your business. In other words, you are generating overhead and possibly technical debt.
Once you know the questions you want answered and how important they are to you, it's time to figure out what testing types you will use to get those answers and what actions your tests will perform.
One key implication of this is that when you create individual tests, try to keep each one to answering a single question. As the system evolves, answering that one question may become more complicated or involved, but whether the test is well designed and executed can always be evaluated by asking whether it allows you to answer that one question.
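The one-question-per-test idea can be sketched as follows. The `discount` function and test names are hypothetical; the point is that when a focused test fails, there is no ambiguity about which question now has a "no" answer:

```python
def discount(total: float, code: str) -> float:
    """Apply a hypothetical 10% discount for the code 'SAVE10'."""
    return total * 0.9 if code == "SAVE10" else total

# Avoid one test that answers both "is the discount applied?" and
# "are unknown codes ignored?" -- a failure would be ambiguous.
# Instead, one question per test:

def test_valid_code_applies_discount():
    assert discount(100.0, "SAVE10") == 90.0

def test_unknown_code_is_ignored():
    assert discount(100.0, "BOGUS") == 100.0
```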
General rules to keep in mind
Test Early/Test Often
The earlier in the development/testing lifecycle that you get the answer to the question, the less expensive it is to respond to.
Don't buck the tide
Using tools the way they are designed to be used is far more effective than forcing the tool to fit your process.
For example: Maven is an automated build tool designed around methodologies its authors consider best practices. If you are not taking the actions Maven expects you to take, you can end up spending a great deal of time working on or against Maven rather than working on your tests. By the same token, if you do adhere to what Maven expects, things will be fairly straightforward.
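For a concrete sense of what "adhering to what Maven expects" means, here is Maven's documented standard directory layout (the project name is hypothetical):

```
my-app/
├── pom.xml              # build configuration
└── src/
    ├── main/
    │   ├── java/        # production code, compiled by `mvn compile`
    │   └── resources/
    └── test/
        ├── java/        # test code, run by `mvn test`
        └── resources/
```

Put tests under `src/test/java` and Maven's Surefire plugin finds and runs them with no extra configuration; put them somewhere else and you spend your time configuring the build instead of writing tests.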
Don't improve the process until you know what you are improving
You can't improve the process unless you have numbers!
If you are adding tests to an existing process, it makes sense to test the areas that have the most bugs or are the least stable. If you don't have coverage analysis of the code, you can't say how much of the code is actually exercised by the testing process. If you don't know the current performance level of the product, you have no starting point for figuring out where to speed it up.
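Establishing a performance starting point can be as simple as recording a timed baseline before changing anything. A minimal sketch, where `sort_records` stands in for a hypothetical operation you want to speed up:

```python
# Capture a performance baseline so later "improvements" are compared
# against a measured number rather than a guess.
import timeit

def sort_records(records):
    """Hypothetical operation under measurement."""
    return sorted(records)

data = list(range(10_000, 0, -1))

# Time several runs and keep the best; the minimum is the least noisy
# estimate of what the code can do on this machine.
runs = timeit.repeat(lambda: sort_records(data), number=10, repeat=5)
baseline = min(runs)
print(f"baseline: {baseline:.4f}s for 10 runs of sort_records")
```

Stored alongside the test results, that number is the "starting point" the paragraph above refers to.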
By the same token, metrics are at best a guideline; if you use a metric for something it's not designed for, it can be dangerous to your sanity.