Back after a long hiatus. The hiatus was not because I wasn't doing lots of interesting things; it was because I hadn't figured out how to balance all the interesting things I was doing. Which brings me to the first tool that I have blogged about in a long time.
I do a lot each day and I have the memory of a sieve. If I don't record something the minute it occurs to me, or the minute I promise to do it, the odds are very good that I will not remember it at all. My iPad is my prosthetic memory, and everything I use has to sync with my calendars and my task lists so that what I need is accessible everywhere.
For years now I've been using ToodleDo, and I have generally been happy with it. Its biggest strength was that it had tasks, outlines, and notes, and the tasks portion could handle tasks within tasks in a limited way.
And since I am a Java programmer, the fact that they had a REST API was useful. The API documentation was a pain in the posterior because, like many REST APIs' documentation, it left a lot to be desired. But once I figured out how to get programmatic access to my task lists, it made many things relatively smooth.
The ToodleDo web application is solid, and I used Pocket Informant to sync up with ToodleDo (including the notes); that was effective and reliable.
The two downsides to ToodleDo that have been gnawing at me:
1. There is only one level of tasks within tasks. So I tended to use the top-level tasks as projects and the tasks below them as the actions, but that didn't cleanly sync up with Pocket Informant.
2. Capturing information or actions or tasks was not fast. It was acceptable, but it wasn't fast. And once a single task was entered in the iPad application, I had to go through the same sequence again to enter the next task. So for inserting many tasks I ended up using imports via CSV. Very usable from the computer, less so from the iPad.
And things probably would've stayed that way except for one other thing: I lead courses that require generating and tracking many activities over 3 1/2 to 5 months at a time. So I have checklists for preparing and executing the course that I revise based on what worked and didn't work in the last course.
Some of the checklists are required for predictable events that occur on a certain date, such as a particular classroom session. Some of the checklists are required for repeating events, such as when a new coach comes on the team or someone drops out of the course.
I've led 12 of these courses since 2008, and this time my mind balked at hand-editing, yet again, all of the elements I needed to import into ToodleDo. So I ended up reevaluating a number of the online task managers I had reviewed years before, as well as a bunch of new ones.
Enter Todoist.
Todoist has the concept of projects, and within projects you can have tasks with up to four levels of tasks embedded in them. And their iPhone and Android apps allow for rapid and repeated capture of tasks.
An example:
The left-hand entry with the red arrow that says "April 2016 SELP" is a project, and within that project is a task called "Eval SELP Tech". Within that, on the right-hand side, is a task called Spreadsheet with a series of tasks below it dealing with Google Apps Script, or GAS.
In the iPad application, if I want to add a task I need only click the blue '+' at the bottom and start entering.
The Todoist application has two features that make it very easy to enter tasks in sequence. The first is that it will enter any new task at the same level as the last task you entered. The second is that if you enter any time reference, it will use that as the due date of the task.
For example:
The blue arrow indicates where you are doing the task entry, and the red arrow indicates the portion of the UI you can use your finger on to indent the task or move it out. Once you enter the task, if you use a term such as "tomorrow" or "next week", it will highlight that and use it as the due date for the task, as below:
Lastly, both the iPad and web apps have decent performance. I have over 3000 tasks and actions in this one project and I have not seen any performance degradation.
My next few posts will probably deal with using the Todoist REST API, from Java and from within Google Apps Script.
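In the meantime, here is a minimal sketch of adding a task from Java. Caveat: the endpoint, the JSON field names, and the bearer-token scheme are my assumptions drawn from Todoist's current public REST documentation, not anything verified for this post, so treat it as a sketch rather than reference code.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class TodoistQuickAdd {

    public static void main(String[] args) throws Exception {

        // Assumed endpoint; check the current Todoist REST docs before relying on it.
        URL url = new URL("https://api.todoist.com/rest/v2/tasks");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Authorization", "Bearer " + System.getenv("TODOIST_TOKEN"));
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);

        // "due_string" asks Todoist to parse the natural-language date, the same
        // trick the apps use when you type "tomorrow" into quick entry.
        String body = "{\"content\": \"Eval SELP Tech\", \"due_string\": \"tomorrow\"}";

        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        System.out.println("HTTP " + conn.getResponseCode());

        // Echo the created task JSON (this will throw if the request failed).
        try (Scanner in = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            while (in.hasNextLine()) {
                System.out.println(in.nextLine());
            }
        }
    }
}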
Bigger Wrench
The name of the blog comes from a quote I heard many years ago: "Give me a wrench." "What size?" "It doesn't matter. I am using it as a hammer." This blog is a discussion of software development tools: how to use them, when to use them, how not to use them, when not to use them, the ones I like, the ones I don't like, etc.
Sunday, March 20, 2016
Friday, June 20, 2014
What questions do we want answered?
Practices for Deciding what to Test and what to Automate
We test in order to get certain questions answered. We automate tests when we want to get those questions answered with a minimum of human thought. The idea is that we have the human figure out what questions to ask and how to answer them and let the machine do the grunt work.
Choosing what tests to automate is always the result of a cost-benefit analysis: weighing the benefits against the costs. In most projects you can't test and automate everything, so you want tests that answer the questions you most want answered. We want to automate the tests that will give us the biggest bang for the buck.
I typically document the tests by the questions the tests are designed to answer. A set of tests may be designed to answer whether or not the current build is worth subjecting to further tests, i.e., is the current build minimally acceptable? I have some sample questions below.
As you look at these questions, in each case you want to ask yourself two other questions: "Do we care?" and "How much do we care?". Some of these questions will not be important to you because of the specific project you are working on. In almost all cases there is a priority to which questions you want answered first.
Typical Questions
- Do the various classes and functions in the code work correctly in isolation? Tests designed to answer this question typically fall under the label of unit tests.
- How does a given protocol handle bad state changes? Tests designed to answer this question are typically referred to as Short Tour Tests.
- Does the SUT (System Under Test) deploy correctly? Tests designed to answer this typically fall under the label of smoke tests.
- Does the same code produce the same result when invoked from different platforms?
- Does the current build meet some minimal criteria for being deployed and tested further? Tests designed to answer this question also fall under the label of smoke tests.
- How does the SUT perform under specific loads? Tests that answer those questions are typically referred to as performance tests.
- Does the SUT pass the same tests it used to pass? Tests designed to answer this question are typically referred to as regression tests.
Once you have decided what questions are important and how important they are, you need to design an automated testing regimen that answers those questions in that order of priority.
I want to emphasize that the questions to be answered and their priorities are key factors in determining the effectiveness and efficiency of the testing process. If you are creating unit tests, integration tests, and performance tests and they don't answer the questions you need to have answered, you are now maintaining unit tests, integration tests, and performance tests that do not contribute to your business. In other words, you are generating overhead and possibly technical debt.
Once you know the questions you want answered and how important they are to you, it's time to figure out what testing types you will use to get those answers and what actions your tests will perform.
One key implication of this is that when you're creating individual tests, try to keep each one to answering one question at a time. As the system evolves, answering that one question may become more complicated or involved, but whether or not the test is well designed and executed can always be evaluated by determining whether or not it allows you to answer that one question.
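To make that concrete, here is a minimal sketch of a one-question test in TestNG (which I use elsewhere on this blog); the health-check URL, port, and class name are hypothetical.

import static org.testng.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.testng.annotations.Test;

public class DeploymentSmokeTest {

    /**
     * Answers exactly one question: "Does the current build deploy and respond?"
     * If this fails, there is no point running the deeper suites.
     */
    @Test(groups = "smoke")
    public void buildRespondsToHealthCheck() throws Exception {

        // Hypothetical health endpoint exposed by the deployed SUT.
        URL url = new URL("http://localhost:8080/health");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        assertEquals(conn.getResponseCode(), 200);
    }
}

The group name gives you the priority knob: a smoke suite that runs first and gates everything else.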
General rules to keep in mind
Test Early/Test Often
The earlier in the development/testing lifecycle that you get the answer to the question, the less expensive it is to respond to.
Don't buck the tide
Using tools the way they are designed to be used is far more effective than cramming a tool into a shape it wasn't designed for.
For example: Maven is an automated build tool that is designed around a methodology its authors consider best practice. If you are not taking the actions that Maven expects you to take, you can end up spending a great deal of time working on or against Maven rather than working on your tests. By the same token, if you do adhere to what Maven expects, things will be fairly straightforward.
Don't improve the process until you know what you are improving
You can't improve the process unless you have numbers!
If you are adding tests to an existing process, it makes sense to test the areas that have the most bugs or are the least stable. If you don't have coverage analysis of the code, you can't say how much of the code is actually exercised by the testing process. If you don't know what the current performance level of the product is, you have no starting point for figuring out where to speed it up.
By the same token, metrics are at best a guideline; if you use a metric for something it's not designed for, it can be dangerous to your sanity.
Context is Decisive
“A best practice is always a best practice; Except when it's not.”
I have recently been involved in a contract that has forced me to really, really look at testing practices and how I explain them.
This particular department of a very large company gives lip service and action to a lot of "Best Practices" in the world of development and testing. Scrum, TDD, BDD, etc. are all words you see floating in the air there. They are saying all the right things, and even taking many good actions, and the end result is almost worse than if they took no actions at all.
It is, in my world, the best illustration of a very basic principle: context is decisive. Another way of saying that is that a best practice is always a best practice, except when it's not.
To illustrate the point:
In the context of body parts, the picture is a finger. In the context of numbers, it is a one. Same picture, different interpretation.
This is particularly notable when you start talking about using "Best Practices" out of context.
When I was starting out as a software engineer I worked for Foliage Software Systems. They had a significantly higher than average success rate with projects and a higher than average satisfaction level among their customers. One of the things Foliage did that made a difference was that they had a playbook of processes. They would have a preliminary engagement with the customer in which the lead on the project would discover what kind of coding and communication processes the customer already had in place, and would use the playbook to select the Foliage development and communication processes that fit with what the customer was doing. This came about because they discovered early on that merely producing a technically successful project was not enough to produce a satisfied client. If the development process was not understandable by or compatible with the client, they could have an unsatisfied client. If the communication process was not understandable by or compatible with the client, they could have an unsatisfied client. And they worked on FAA- and FDA-certified projects as well as in many less stringent markets.
By choosing the process to match the context they ended up with a higher number of successful projects.
As far as I can tell, the majority of the "TDD is Dead" discussion comes down to this: if you use the TDD methodology out of context, it gets messy and doesn't deliver what you need. I completely agree.
I use TDD a lot. It allows me to shake out the design of individual modules at a low level and alerts me to the impact of refactoring. I don't test getters and setters. I don't test methods that just delegate to another method. I don't write code for functionality that isn't tested. And I don't forget that the unit test provides me with a minimal contract regarding the usability of the module.
For me, the context of TDD is answering some key questions, such as "Does this module meet some minimum robustness requirement?", "Do I know how I'm going to use this module?", and "Did I break anything when I refactored the classes?". So I use TDD to drive the design and implementation of the individual modules, leaving me with a test that documents some ways to use the module, specifies its core functionality, and acts as a tripwire for detecting the impact of refactoring. In that context, TDD is quite successful.
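As an illustration of that minimal contract, here is a sketch; the Money class is a made-up stand-in for whatever module you are driving out.

import static org.testng.Assert.assertEquals;

import org.testng.annotations.Test;

public class MoneyTest {

    /** Minimal module under test, just enough to make the example run. */
    static final class Money {
        final String currency;
        final long cents;

        Money(String currency, long cents) {
            this.currency = currency;
            this.cents = cents;
        }

        Money plus(Money other) {
            if (!currency.equals(other.currency)) {
                throw new IllegalArgumentException("Currency mismatch");
            }
            return new Money(currency, cents + other.cents);
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof Money && ((Money) o).currency.equals(currency)
                    && ((Money) o).cents == cents;
        }

        @Override
        public int hashCode() {
            return currency.hashCode() * 31 + Long.hashCode(cents);
        }
    }

    /**
     * Documents one way to use the module, specifies its core behavior, and
     * trips if a refactoring changes the contract. Note: no getter/setter tests.
     */
    @Test
    public void shouldAddAmountsInTheSameCurrency() {
        Money a = new Money("USD", 1000);
        Money b = new Money("USD", 250);

        assertEquals(a.plus(b), new Money("USD", 1250));
    }
}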
When I was doing research I came across a website describing the school of thought called Context Driven Testing. You may or may not agree with all of the discussion on the page, but the heart of it, the basic principles, is one of the clearest statements about best practices that I've ever seen.
The Basic Principles of Context-Driven Testing
- The value of any practice depends on its context.
- There are good practices in context, but there are no best practices.
- People, working together, are the most important part of any project’s context.
- Projects unfold over time in ways that are often not predictable.
- The product is a solution. If the problem isn’t solved, the product doesn’t work.
- Good software testing is a challenging intellectual process.
- Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.
In the next entry I will discuss what questions to ask to determine what to test, and what to automate.
Thursday, February 13, 2014
Blogs to watch for useful TestNG info
As I come across them, I will update this post with blogs that have some useful info on working with TestNG.
http://rationaleemotions.wordpress.com
Monday, February 3, 2014
TestNG : Using Guice for dependency injection
In the previous post I discussed how to use annotations to make it possible for TestNG to rename tests.
The next few posts are going to deal with some useful ways to use dependency injection and TestNG to radically simplify integration testing.
Recently I have been working on a contract that requires a lot of integration testing. And the great challenge of integration testing is that tests can fail because the system you are testing against can change (as well as the code). My current client has a framework of VMs that can be deployed in many different configurations, with some elements being simulated and some being real.
In addition, a deployed node may have one purpose but, depending on the customer being served, may use a different protocol. Plus we have some environments where a node may be a single node in one deployment but a cluster in another. Most of this information is available in the database associated with the deployment.
It is my belief that you never need more than two pieces of information to run an integration test: the name of the test set to be run, and the name or address of some machine to act as a point source of information (a database server, or a configuration server such as ZooKeeper).
My goals in this current project are to:
- Minimize the amount of information the person writing the test needs to know in order to write the tests. If a session needs to be created, they shouldn't need to know what the underlying protocols are.
- Replace the many diverse configuration files (YAML, Properties, and XML) with an IP address and the name of a test set to run.
- Allow the test writers to write tests without regard for whether or not they are pointing at a cluster of nodes or a single node in the deployed environment.
For the first, the obvious pattern is to hide the implementations behind interfaces.
For the second, the configuration information should be made available to the implementations without passing through the test writer's hands.
This all argues for Dependency Injection.
Since we are using TestNG as our test framework, it is natural to use Google Guice as our DI framework. TestNG has an annotation that allows us to specify the DI binding factory to use when the test starts up.
For this example I'm going to use a simple interface for persistent storage called Repo. Repos are designed to store CoolObjs. Note the use of the Google Guava Optional class; it simply gives you a typesafe way of handling the case where an object is not returned.
Repo.java
package org.saltations.testng.usingguice;

import com.google.common.base.Optional;

/**
 * Repository Interface. Represents a repository that objects can be saved or
 * recovered from...
 */
public interface Repo {

    /**
     * Store an object
     */
    void store(CoolObj obj);

    /**
     * Retrieves an existing object
     *
     * @param id The id of the {@link CoolObj}
     *
     * @return An instantiated cool object.
     */
    Optional<CoolObj> retrieve(String id);
}
CoolObj.java
Note the use of Lombok annotations to simplify POJO development. I mentioned Lombok in a previous post. Without it, you would have to write the getters/setters and constructors by hand.
package org.saltations.testng.usingguice;

import lombok.AllArgsConstructor;
import lombok.Data;

@Data
@AllArgsConstructor
public class CoolObj {

    private String id;
    private String stuff;
}
When testing, we are going to use three different implementations of the Repo interface. The first is RepoMockImpl, used for unit testing; it uses an internal in-memory map to store and retrieve the objects. The second is LocalRepoImpl, used for integration testing; it stores all of the objects using a key-value store on disk. The third is ServerRepoImpl, which points to a remote repository server.
The details of each implementation are not important. The key thing is that they require different types of configuration information. The ServerRepoImpl, for example, requires the IP address/host name of the machine being pointed to.
The test that uses this looks like:
TestUsingInjectedServices.java
package org.saltations.testng.usingguice;

import static org.testng.Assert.assertEquals;
import static org.testng.Assert.assertTrue;

import org.testng.annotations.Guice;
import org.testng.annotations.Test;

import com.google.common.base.Optional;
import com.google.inject.Inject;

@Test
@Guice(moduleFactory = TestDIFactory.class)
public class TestUsingInjectedServices {

    @Inject
    private Repo repo;

    public void shouldSaveAndRetrieveACoolObj() {

        CoolObj coolObj = new CoolObj("goodObject", "Cool Tidbits");

        repo.store(coolObj);

        Optional<CoolObj> potentialObj = repo.retrieve(coolObj.getId());

        assertTrue(potentialObj.isPresent());
        assertEquals(potentialObj.get(), coolObj);
    }
}
The elements that hook everything together are:
- The Guice annotation, which tells the TestNG framework to use the TestDIFactory class as the factory that supplies Guice with the Modules that in turn provide the bindings between the Repo service and its implementation.
- The Inject annotation, which tells Guice to inject the appropriately configured service implementation for the Repo interface into the field repo.
- The groups attribute in the TestNG.xml file that is used by the TestDIFactory to determine which Module implementation is used to supply the bindings.
If this test is run with the group configured as "unit", we get the unit test bindings. If it is run with the group configured as "integration", we get the integration test bindings.
Configuration information comes in through the TestNG.xml file, as TestNG parameters and as TestNG groups.
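Since that file isn't shown here, a minimal sketch of what it might look like: the group name "integration" and the parameter "repo-server-name" match what TestDIFactory (below) reads, while the suite name, test name, and host value are made up.

<suite name="IntegrationSuite">

  <parameter name="repo-server-name" value="repo.example.com"/>

  <test name="RepoTests">
    <groups>
      <run>
        <include name="integration"/>
      </run>
    </groups>
    <classes>
      <class name="org.saltations.testng.usingguice.TestUsingInjectedServices"/>
    </classes>
  </test>

</suite>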
The Guice factory is in a form specified by TestNG.
TestDIFactory.java
package org.saltations.testng.usingguice;

import static java.text.MessageFormat.format;

import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.List;

import lombok.NoArgsConstructor;

import org.testng.IModuleFactory;
import org.testng.ITestContext;

import com.google.common.collect.Lists;
import com.google.inject.Binder;
import com.google.inject.Module;

/**
 * Guice Factory for TestNG Tests. This factory gives back a Guice module that
 * supplies the bindings that are appropriate for the kind of tests being done.
 * i.e. the factory will give back a module that supplies unit test service
 * implementations for unit tests, integration test service implementations for
 * integration tests, etc...
 *
 * @author jmochel
 */
@NoArgsConstructor
public class TestDIFactory implements IModuleFactory {

    /**
     * Key for the test parameter in TestNG.xml that contains the name of our repo server.
     */
    private static final String REPO_SERVER_NAME = "repo-server-name";

    /**
     * Module that provides unit test service implementations for unit tests
     */
    public class UnitTestServicesProvider extends GeneralModule implements Module {

        public UnitTestServicesProvider(ITestContext ctx, Class clazz) {
            super(ctx, clazz);
        }

        public void configure(Binder binder) {
            binder.bind(Repo.class).to(RepoMockImpl.class);
        }
    }

    /**
     * Module that supplies integration test service implementations for
     * integration tests
     */
    private class IntegrationServicesProvider extends GeneralModule implements Module {

        public IntegrationServicesProvider(ITestContext ctx, Class clazz) {
            super(ctx, clazz);
        }

        public void configure(Binder binder) {
            binder.bind(Repo.class).to(LocalRepoImpl.class);
        }
    }

    /**
     * Module that supplies real world service implementations for
     * testing the application against a real world system.
     */
    private class WorkingServicesProvider extends GeneralModule implements Module {

        /**
         * Repo Server address.
         */
        private InetAddress address;

        public WorkingServicesProvider(ITestContext ctx, Class clazz) {
            super(ctx, clazz);

            /*
             * Confirm that the repository server address exists and that it points somewhere real.
             */
            String repoServerName = ctx.getCurrentXmlTest().getAllParameters().get(REPO_SERVER_NAME);

            if (repoServerName == null || repoServerName.isEmpty()) {
                throw new IllegalArgumentException(format(
                        "Unable to find {0} in the test parameters. We expected to find it configured in the TestNG.xml parameters.",
                        REPO_SERVER_NAME));
            }

            try {
                address = InetAddress.getByName(repoServerName);
            } catch (UnknownHostException e) {
                throw new IllegalArgumentException(format(
                        "Unable to find host {1} specified by parameter {0} in the test parameters.",
                        REPO_SERVER_NAME, repoServerName));
            }
        }

        public void configure(Binder binder) {
            binder.bind(Repo.class).toInstance(new ServerRepoImpl(address));
        }
    }

    /*
     * @see org.testng.IModuleFactory#createModule(org.testng.ITestContext, java.lang.Class)
     */
    public Module createModule(ITestContext ctx, Class clazz) {

        /*
         * Get a list of included groups (comes from the TestNG.xml) and choose
         * which Guice module to return based on the types of tests being done.
         */
        List<String> groups = Lists.newArrayList(ctx.getIncludedGroups());

        Module module = null;

        if (groups.contains("unit")) {
            module = new UnitTestServicesProvider(ctx, clazz);
        } else if (groups.contains("integration")) {
            module = new IntegrationServicesProvider(ctx, clazz);
        } else {
            module = new WorkingServicesProvider(ctx, clazz);
        }

        return module;
    }
}
For completeness: GeneralModule.java
package org.saltations.testng.usingguice;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NonNull;

import org.testng.ITestContext;

import com.google.inject.Binder;
import com.google.inject.Module;

/**
 * Parent class for Guice Modules used in TestNG. Contains the ITestContext and
 * test Class because I so often need the tests to make decisions based on the
 * contents of the TestNG.xml (made available in the ITestContext current XML
 * parameters).
 */
@Data
@AllArgsConstructor
public abstract class GeneralModule implements Module {

    /**
     * Context for the test.
     */
    @NonNull
    private ITestContext ctx;

    /**
     * Class to be tested.
     */
    @NonNull
    private Class clazz;

    public abstract void configure(Binder binder);
}
What Happens
The tests are started up using the TestNG.xml file. When the TestNG framework instantiates the test, it first uses the TestDIFactory createModule call to determine which Guice Module it will use to configure Guice. In the createModule call we look at the ITestContext and determine which group this test class should be run as. That module is then instantiated and handed back to Guice. Guice then walks through the instantiated classes, and when it encounters an @Inject annotation it asks the module what class or object should be injected into that field. The person writing the tests does not need to know what the Repo service is pointing to.
One of the key gotchas here is that the TestDIFactory and Modules are instantiated on a per-test-class basis. If we are running five test classes in a unit test group, all five classes will have a separate instantiation process. If you wish for a service to be shared as a singleton, you will want to use a static element in the TestDIFactory or the individual module, and then bind the interface to the implementation using the "toInstance" method rather than the "to" method, as sketched below.
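A standalone sketch of that singleton approach, using Guice's plain AbstractModule rather than the GeneralModule hierarchy above, and assuming LocalRepoImpl has a no-arg constructor:

package org.saltations.testng.usingguice;

import com.google.inject.AbstractModule;

public class SharedRepoModule extends AbstractModule {

    /**
     * One shared instance for the whole run. Being static, it survives the
     * per-test-class instantiation of factories and modules.
     */
    private static final Repo SHARED_REPO = new LocalRepoImpl();

    @Override
    protected void configure() {
        // "toInstance" hands Guice this exact object; "to(LocalRepoImpl.class)"
        // would let Guice construct a fresh one for each test class's injector.
        bind(Repo.class).toInstance(SHARED_REPO);
    }
}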
What if Repo involved a Generic?
If Repo were actually Repo<ObjectType>, then the code would look like:
binder.bind(new TypeLiteral<Repo<ObjectClass>>(){}).to(new TypeLiteral<RepoImpl<ObjectClass>>(){});
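For context, a sketch of where that binding might live; ObjectClass and RepoImpl are hypothetical stand-ins from the line above.

package org.saltations.testng.usingguice;

import com.google.inject.AbstractModule;
import com.google.inject.TypeLiteral;

public class GenericRepoModule extends AbstractModule {

    @Override
    protected void configure() {
        // TypeLiteral captures the full generic type at the binding site,
        // which a raw Class object cannot do because of type erasure.
        bind(new TypeLiteral<Repo<ObjectClass>>() {})
                .to(new TypeLiteral<RepoImpl<ObjectClass>>() {});
    }
}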
Pros and Cons
The pros are pretty clear to me. By using this I am able to hide all of the configuration details from the test writers. It is a little more work up front, but it pays off handsomely in the simplicity of the testing code. From my standpoint, one of the big pluses is that it allows me to verify all the configurations up front and instantiate the classes with the configuration embedded in them, so that the configuration can be used for the test services and (as we will see in a later post) for configuring the tests themselves.
Labels:
Dependency Injection,
DI,
Guava,
Guice,
Integration Tests,
TestNG
Sunday, February 2, 2014
TestNG: Dynamically Naming Tests from data provider parameters
I have recently been involved in several contracts that require me to stretch my knowledge of TestNG. There are a lot of cool things you can do with TestNG that are available but not explicitly documented (you know, with real source code!).
One of the first examples is that of dynamically naming a test based on incoming data.
A data provider in TestNG allows you to define a method that will feed data into a test method. For example, this simple data provider gives three sets of data that a test can be run with:
@DataProvider(name="rawDP") public Object[][] sampleDataProvider() { Object[][] rawData = { {"SCENARIO_1","First Test Scenario"}, {"SCENARIO_2","Second Test Scenario"}, {"SCENARIO_3","Third Test Scenario"} }; return rawData; } @Test(dataProvider="rawDP") public void shouldHaveTestNamesBasedOnMethodName(String arg1, String arg2) { }
The above method will be passed all three sets of data in sequence, with the first argument going into the first string of the method and the second argument going into the second string. When you run the test in Eclipse, the console output will show:
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_1", "First Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_2", "Second Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_3", "Third Test Scenario")
But you may want the scenario name to show up in the HTML report or in some other way.
Through the magic of some TestNG interfaces we can make it so that those arguments are intercepted and used to name the instance of the test before the test method is run.
To do so we define a custom annotation UseAsTestName that is made available at run time.
The annotation has an attribute that indicates which parameter of the parameter set should be used as the test name.
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

/**
 * Annotation used as an indicator that the test method should use the indexed
 * parameter as the test instance name.
 *
 * @author jmochel
 */
@Retention(RetentionPolicy.RUNTIME)
public @interface UseAsTestName {

    /**
     * Index of the parameter to use as the Test Case ID.
     */
    int idx() default 0;
}
Then we have code in the test's parent class that runs before each method is run (as indicated by the @BeforeMethod annotation).
import static com.google.common.base.Preconditions.checkNotNull;
import static java.text.MessageFormat.format;

import java.lang.reflect.Method;

import org.testng.ITest;
import org.testng.annotations.BeforeMethod;

public class UseAsTestName_TestBase implements ITest {

    /**
     * Name of the current test. Used to implement {@link ITest#getTestName()}
     */
    private String testInstanceName = "";

    /**
     * Allows us to set the current test name internally to this class so that
     * the TestNG framework can use the {@link ITest} implementation for naming
     * tests.
     */
    private void setTestName(String anInstanceName) {
        testInstanceName = anInstanceName;
    }

    /**
     * See {@link ITest#getTestName()}
     */
    public String getTestName() {
        return testInstanceName;
    }

    /**
     * Method to transform the name of tests when they are called with the
     * test name as one of the parameters. Only takes effect if the method has
     * the {@link UseAsTestName} annotation on it.
     *
     * @param method     The method being called.
     * @param parameters The set of test data being passed to that method.
     */
    @BeforeMethod(alwaysRun = true)
    public void extractTestNameFromParameters(Method method, Object[] parameters) {

        /*
         * Verify parameters.
         */
        checkNotNull(method);
        checkNotNull(parameters);

        /*
         * Empty out the name from the previous test.
         */
        setTestName(method.getName());

        /*
         * If there is a UseAsTestName annotation on the method, use it to get
         * a new test name.
         */
        UseAsTestName useAsTestName = method.getAnnotation(UseAsTestName.class);

        if (useAsTestName != null) {

            /*
             * Check that the index it uses is viable.
             */
            if (useAsTestName.idx() > parameters.length - 1) {
                throw new IllegalArgumentException(format(
                        "We have been asked to use an incorrect parameter as a Test Case ID. The {0} annotation on method {1} is asking us to use the parameter at index {2} in the array and there are only {3} parameters in the array.",
                        UseAsTestName.class.getSimpleName(), method.getName(),
                        useAsTestName.idx(), parameters.length));
            }

            /*
             * Is the parameter it points to assignable as a string?
             */
            Object parmAsObj = parameters[useAsTestName.idx()];

            if (!String.class.isAssignableFrom(parmAsObj.getClass())) {
                throw new IllegalArgumentException(format(
                        "We have been asked to use a parameter of an incorrect type as a Test Case Name. The {0} annotation on method {1} is asking us to use the parameter at index {2} in the array and that parameter is not usable as a string. It is of type {3}.",
                        UseAsTestName.class.getSimpleName(), method.getName(),
                        useAsTestName.idx(), parmAsObj.getClass().getSimpleName()));
            }

            /*
             * Get the parameter at the specified index and use it.
             */
            String testCaseId = (String) parameters[useAsTestName.idx()];

            setTestName(testCaseId);
        }
    }
}
Because we have a Method type in the method's parameter list, TestNG automatically inserts the Method object pertaining to the test method being called. The same goes for the Object[] parameter, into which TestNG automatically inserts the row of data associated with this invocation of the test method.
@BeforeMethod(alwaysRun = true)
public void extractTestNameFromParameters(Method method, Object[] parameters) {
In the before method, we clear out the old test name and use the information provided by the method parameter and the parameters parameter to create a new test name and set it. When the XML and Eclipse console reports are generated by the tests, they use the ITest getTestName() method to get the name of the test.
The tests that use this look like:
import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class UseAsTestNameTest extends UseAsTestName_TestBase {

    @DataProvider(name = "rawDP")
    public Object[][] sampleDataProvider() {

        Object[][] rawData = {
                { "SCENARIO_1", "First Test Scenario" },
                { "SCENARIO_2", "Second Test Scenario" },
                { "SCENARIO_3", "Third Test Scenario" }
        };

        return rawData;
    }

    @Test(dataProvider = "rawDP")
    public void shouldHaveTestNamesBasedOnMethodName(String arg1, String arg2) {
    }

    @UseAsTestName
    @Test(dataProvider = "rawDP")
    public void shouldHaveTestNamesStartingWithANA(String arg1, String arg2) {
        assertEquals(getTestName(), arg1);
    }

    @UseAsTestName(idx = 1)
    @Test(dataProvider = "rawDP")
    public void shouldHaveTestNamesStartingWithThe(String arg1, String arg2) {
        assertEquals(getTestName(), arg2);
    }
}
The output looks something like:
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_1", "First Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_2", "Second Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_3", "Third Test Scenario")
PASSED: SCENARIO_1("SCENARIO_1", "First Test Scenario")
PASSED: SCENARIO_2("SCENARIO_2", "Second Test Scenario")
PASSED: SCENARIO_3("SCENARIO_3", "Third Test Scenario")
PASSED: First Test Scenario("SCENARIO_1", "First Test Scenario")
PASSED: Second Test Scenario("SCENARIO_2", "Second Test Scenario")
PASSED: Third Test Scenario("SCENARIO_3", "Third Test Scenario")
The one gotcha in all of this is that the reports for the results are generated from the XML that TestNG produces.
The XML uses the name we generate and puts it in what's called the "instance name" attribute. There are many different HTML versions of the reports, and some of them correctly use the instance name while some don't. The HTML reports run on Jenkins correctly use the instance names and show the tests with that as their name. The default HTML reports generated when you run from Eclipse don't use the instance name correctly. The Eclipse console does correctly use the instance name, so we can see it there.
Some HTML reports will show it and some will not.
Tuesday, August 20, 2013
Getting things out of your head
Most task management methodologies share some way of getting information out of someone's head and into a persistent form as quickly as possible, without evaluating it. Some methodologies use a notebook or tool that you carry around everywhere and just write things down in; others simply have you write things on a whole sheet of paper and throw it into an inbox. These actions are intended to relieve you of the mental burden of keeping track of stuff. When you write something down and you know you can find it again later when you need it, it stops filling the space in the back of your brain. A great deal of research has shown that the more thought that has to go into categorizing something or figuring out where it has to go at the moment you need to capture it, the more likely it is that it will be lost (due to interruptions, indecision, etc.).
Unfortunately, most PIMs require you to know exactly what you're going to do with the data before you ever capture it. And when you capture it, if you need to change it into something else, it can be a real pain that requires cut-and-paste.
The scenarios below deal with capturing information as well as evaluating and transforming it later.
Scenario: A parent is heading out the door when their son reminds them that he needs to be at the newly scheduled soccer practice on Saturday. When the parent jumps in the car, they open the PIM application on their cell phone and simply speak "Trent has a soccer practice next Saturday at 4 PM". The PIM generates a note and automatically puts it into a list to be triaged.
That evening, when the parent opens the PIM on their desktop or tablet, it notes that there are several items to be triaged. When the parent looks at the note, they use a finger gesture or mouse gesture to transform it into an event. The event is generated with the subject "Trent has a soccer practice", a date of next Saturday, and a time of 4 PM.
Scenario: An entrepreneur is on the road and uses a digital voice recorder to record a bunch of brilliant ideas that they absolutely don't want to forget. When they come to rest, they open up their laptop, open the PIM, and import the various sound files from the voice recorder. A note gets created for each sound file, with the sound file attached. The notes automatically go into the triage queue, and the entrepreneur can use voice recognition to extract the text or simply use the attached voice file as is.
Scenario: A vice president of a small company wants her engineering team to stay on top of the latest development technologies, and subscribes to the newsfeeds of several training and development companies. She has set things up so that whenever something new is available, it automatically gets put into her morning to-do list as a task that says "Evaluate: New and Cool Course", where "New and Cool Course" was the subject of the newsfeed entry.
The VP evaluates a bunch of new courses, decides that three of them look interesting, and then uses a mouse gesture or a click to transform all three of those into "Shared Votes" that are visible to the engineering team. The engineering team members each give a thumbs up or thumbs down to a particular course, and at the end the "New and Cool Course" has three thumbs up.
The VP then uses a mouse gesture or a click to turn the "Shared Vote" into a task: "Schedule New and Cool Course for the engineering team".
Scenario: A security consultant has been asked to step in and have a conversation with a general consultant's client about their security needs. He enters this into the PIM as a note: "Call so-and-so about security for whats-his-name". That night he tells the PIM that this note is now both an event (which is not yet scheduled) AND a promise. These are not two separate entities; the single item is both an event and a promise. The event does not yet have a hard and fast date, so it needs to be scheduled. And when it is complete, and the consultant notes that it is complete, an email goes out to the general consultant to let them know.
And the security consultant can even share the event with the general consultant, so that the general consultant can see when the event has actually been scheduled.