
Sunday, February 2, 2014

TestNG: Dynamically Naming Tests from data provider parameters

I have recently been involved in several contracts that required me to stretch my knowledge of TestNG. There are a lot of cool things you can do with TestNG that are available but not explicitly documented (you know, with real source code!).

One of the first examples is that of dynamically naming a test based on incoming data.

A data provider in TestNG lets you define a method that feeds data into a test method. For example, this simple data provider supplies three sets of data that a test can be run with:

@DataProvider(name="rawDP")
public Object[][] sampleDataProvider()
{
    Object[][] rawData = {
        {"SCENARIO_1","First Test Scenario"},
        {"SCENARIO_2","Second Test Scenario"},
        {"SCENARIO_3","Third Test Scenario"}
    };

    return rawData;
}


@Test(dataProvider="rawDP")
public void shouldHaveTestNamesBasedOnMethodName(String arg1, String arg2)
{
}

The above method will be passed all three sets of data in sequence, with the first element of each row going into the first String parameter and the second element going into the second. When you run the test in Eclipse, the console output will show:

PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_1", "First Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_2", "Second Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_3", "Third Test Scenario")

But you may want the scenario name to show up in the HTML report or in some other way.

Through the magic of some TestNG interfaces, we can intercept those arguments and use them to name the test instance before the test method is run.

To do so we define a custom annotation, UseAsTestName, that is retained at run time. The annotation has an attribute that indicates which parameter of the parameter set should be used as the test name.

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

/**
 * Annotation used as an indicator that the test method should use the indexed
 * parameter as the test instance name
 *
 * @author jmochel
 */

@Retention(RetentionPolicy.RUNTIME)
public @interface UseAsTestName {

 /**
  * Index of the parameter to use as the test instance name.
  */

 int idx() default 0;

}

Then we have code in the test's parent class that runs before each method is run (as indicated by the @BeforeMethod annotation).

import static com.google.common.base.Preconditions.checkNotNull;
import static java.text.MessageFormat.format;

import java.lang.reflect.Method;

import org.testng.ITest;
import org.testng.annotations.BeforeMethod;

public class UseAsTestName_TestBase implements ITest {
 
 /**
  * Name of the current test. Used to implement {@link ITest#getTestName()}
  */

 private String testInstanceName = "";

 /**
  * Allows us to set the current test name internally to this class so that
  * the TestNG framework can use the {@link ITest} implementation for naming
  * tests.
  *
  * @param anInstanceName
  */

 private void setTestName(String anInstanceName) {
  testInstanceName = anInstanceName;
 }

 /**
  * See {@link ITest#getTestName()}
  */

 public String getTestName() {
  return testInstanceName;
 }

 /**
  * Method to transform the name of tests when they are called with the
  * test name as one of the parameters. Only takes effect if the method has
  * the {@link UseAsTestName} annotation on it.
  *
  * @param method
  *            The method being called.
  *
  * @param parameters
  *            The set of test data being passed to that method.
  */

 @BeforeMethod(alwaysRun = true)
 public void extractTestNameFromParameters(Method method, Object[] parameters) {

  /*
   * Verify Parameters
   */

  checkNotNull(method);
  checkNotNull(parameters);

  /*
   * Empty out the name from the previous test
   */

  setTestName(method.getName());

  /*
   * If there is a UseAsTestName annotation on the method, use it to get
   * a new test name
   */

  UseAsTestName useAsTestName = method
    .getAnnotation(UseAsTestName.class);

  if (useAsTestName != null) {
   
   /*
    * Check that the index it uses is viable.
    */

   if (useAsTestName.idx() > parameters.length - 1) {
    throw new IllegalArgumentException(
      format("We have been asked to use an incorrect parameter as a Test Case ID. The {0} annotation on method {1} is asking us to use the parameter at index {2} in the array and there are only {3} parameters in the array.",
        UseAsTestName.class.getSimpleName(),
        method.getName(), useAsTestName.idx(),
        parameters.length));
   }

   /*
    * Is the parameter it points to assignable as a string.
    */

   Object parmAsObj = parameters[useAsTestName.idx()];

   if (!String.class.isAssignableFrom(parmAsObj.getClass())) {
    throw new IllegalArgumentException(
      format("We have been asked to use a parameter of an incorrect type as a Test Case Name. The {0} annotation on method {1} is asking us to use the parameter at index {2} in the array, but that parameter is not usable as a String. It is of type {3}.",
        UseAsTestName.class.getSimpleName(),
        method.getName(), useAsTestName.idx(),
        parmAsObj.getClass().getSimpleName()));
   }

   /*
    * Get the parameter at the specified index and use it.
    */

   String testCaseId = (String) parameters[useAsTestName.idx()];

   setTestName(testCaseId);
  }
 }

}

Because we have a Method parameter in the method's signature, TestNG automatically injects the Method object for the test method being called. The same goes for the Object[] parameter: TestNG automatically injects the row of data associated with this invocation of the test method.

@BeforeMethod(alwaysRun = true)
public void extractTestNameFromParameters(Method method, Object[] parameters) {

In the @BeforeMethod we clear out the old test name, then use the information carried by the method parameter and the parameters array to build a new test name and set it. When the XML and Eclipse console reports are generated, they use the ITest getTestName() method to get the name of each test.

The tests that use this look like:

import static org.testng.Assert.assertEquals;

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class UseAsTestNameTest extends UseAsTestName_TestBase {
 
 @DataProvider(name="rawDP")
 public Object[][] sampleDataProvider()
 {
  Object[][] rawData = {
    {"SCENARIO_1","First Test Scenario"}, 
    {"SCENARIO_2","Second Test Scenario"},
    {"SCENARIO_3","Third Test Scenario"}
  };
  
  return rawData;
 }
 
 
 @Test(dataProvider="rawDP")
 public void shouldHaveTestNamesBasedOnMethodName(String arg1, String arg2)
 {
 }
 
 @UseAsTestName()
 @Test(dataProvider="rawDP")
 public void shouldHaveTestNamesStartingWithANA(String arg1, String arg2)
 {
  assertEquals(getTestName(), arg1);
 }
 
 @UseAsTestName(idx=1)
 @Test(dataProvider="rawDP")
 public void shouldHaveTestNamesStartingWithThe(String arg1, String arg2)
 {
  assertEquals(getTestName(), arg2);
 } 
}


The output looks something like:

PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_1", "First Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_2", "Second Test Scenario")
PASSED: shouldHaveTestNamesBasedOnMethodName("SCENARIO_3", "Third Test Scenario")
PASSED: SCENARIO_1("SCENARIO_1", "First Test Scenario")
PASSED: SCENARIO_2("SCENARIO_2", "Second Test Scenario")
PASSED: SCENARIO_3("SCENARIO_3", "Third Test Scenario")
PASSED: First Test Scenario("SCENARIO_1", "First Test Scenario")
PASSED: Second Test Scenario("SCENARIO_2", "Second Test Scenario")
PASSED: Third Test Scenario("SCENARIO_3", "Third Test Scenario")

The one gotcha in all of this is that the result reports are generated from the XML that TestNG produces.

The XML stores the name we generate in what is called the instance name attribute. There are many different HTML flavors of the reports; some of them use the instance name correctly and some do not. The HTML reports run on Jenkins use the instance names correctly and show the tests under those names. The default HTML reports generated when you run from Eclipse do not use the instance name correctly. The Eclipse console does use the instance name, which is why we can see it there.

Thursday, July 22, 2010

Java now has Objects

As far as I'm concerned, Java has always been a "nearly object-oriented" language. When translating a model into real code, you specify the properties of the object and the methods holding the business logic, and then translate that into fields, methods with the business logic, and boilerplate methods that expose the fields as properties.

Eclipse makes much of this easier with various code-generation plug-ins and features, yet despite that I have wasted hours writing, correcting, and maintaining infrastructure methods such as equals(), hashCode(), toString(), and getters/setters.

All that has shifted with the addition of Lombok.

http://projectlombok.org/

Project Lombok uses Java 5 annotations in combination with bytecode generation to generate, at compile time, all of the infrastructure methods that turn fields into properties.

For example:

You can use the @Data annotation on a class to automatically generate toString, hashCode, equals, and getters for all fields, plus setters for all non-final fields. It will also generate a free constructor to initialize your final fields.

Voila! Instant domain object.
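As a rough illustration, here is a hand-written approximation of what @Data buys you for a small class. The Point class below is a made-up example, and Lombok's actual generated code differs in details (for instance, its equals also uses a canEqual check):

```java
// Hand-written approximation of what Lombok's @Data generates for:
//   @Data public class Point { private final int x; private String label; }
// (Point is a made-up example class, not from Lombok's docs.)
class Point {
    private final int x;   // final field: set via the generated constructor, getter only
    private String label;  // non-final field: getter and setter

    // @Data generates a constructor covering the final fields
    public Point(int x) {
        this.x = x;
    }

    public int getX() {
        return x;
    }

    public String getLabel() {
        return label;
    }

    public void setLabel(String label) {
        this.label = label;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Point)) return false;
        Point other = (Point) o;
        return x == other.x
                && (label == null ? other.label == null : label.equals(other.label));
    }

    @Override
    public int hashCode() {
        int result = x;
        result = 31 * result + (label == null ? 0 : label.hashCode());
        return result;
    }

    @Override
    public String toString() {
        return "Point(x=" + x + ", label=" + label + ")";
    }
}
```

With Lombok on the classpath, all of the above collapses to the one-line annotated class, and the generated methods stay in sync as fields are added or renamed.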

If you want you can break these things down in a more à la carte manner by using annotations such as:

@Getter / @Setter
@ToString
@EqualsAndHashCode

It also has some annotations I have not yet played with or evaluated:

@Cleanup - Automatic resource management: Call your close() methods safely with no hassle.
@Synchronized - synchronized done right: Don't expose your locks.
@SneakyThrows - To boldly throw checked exceptions where no one has thrown them before!


To use it in an Ant-based build you simply include it in your classpath. To use it in Eclipse you need to run the installer and point it at the Eclipse installation in question. The installer modifies the eclipse.ini file so that Eclipse's compiler automatically processes the annotations.

To sum up:

PROS:

  • Radically simplifies my life when dealing with business domain objects
  • Byte code generation is nicely hidden and transparent.
  • Simple to use for command line builds.

CONS:

  • Eclipse install requires additional work and documentation if you are documenting your build environment for someone else.

UNKNOWNS:

  • How well it plays with tools like AspectJ.

Tuesday, February 9, 2010

Build Tools - An incomplete paradigm - part 1

Dependency Management, Provisioning, and Repositories

I love automated build systems. I like letting the computer do the same thing over and over so I don't have to. In some respects I think we have started to come out of the dark ages of programming, in that more people now think in terms of continuous builds and unit testing than don't.

I didn't say that everybody does it, but it is a vocabulary that everybody has and can speak, even if they choose not to use it. 5 to 7 years ago the average opinion was that automated builds were overkill that only the wealthiest companies could waste time on and unit testing, while laudable, was considered something that most people didn't have time for.

Nowadays, when I interview, it is rare for me to encounter a company that does not have an automated build system and some flavor of testing the code.

And given the quality of the open source tools available for most of these tasks, it is rare that an enterprise has to spend money on them.

In one of my previous posts, Ivy vs Maven, I mentioned that I do most of my dependency management using Ivy. It allows me to simply specify a dependency, such as version 7.12 of db4o, and it will retrieve all the other dependencies that db4o needs. A few pointers to Ivy-related material are below:

Automation for the people: Manage dependencies with Ivy

http://ant.apache.org/ivy/

It also has an eclipse plug-in so that the dependencies that you specify in Ivy are used by Eclipse in your projects.

IvyDE plugin. http://ant.apache.org/ivy/ivyde/download.cgi
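As a concrete sketch, declaring that db4o dependency in an ivy.xml looks something like the following. The organisation and module names here are illustrative placeholders; check the repository for the real coordinates:

```xml
<ivy-module version="2.0">
    <info organisation="com.example" module="myapp"/>
    <dependencies>
        <!-- org/name/rev are illustrative; verify the actual coordinates -->
        <dependency org="com.db4o" name="db4o" rev="7.12"/>
    </dependencies>
</ivy-module>
```

Running Ivy's resolve task against this file pulls down the artifact and its transitive dependencies.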

In my world, the advantage that Ivy has over Maven is that it is not tied to Maven's project structure the way Maven's dependency manager is. I can use it with Ant easily and powerfully. Maven, on the other hand, has a number of built-in design assumptions (such as: thou shalt generate only one artifact (jar, zip, etc.) per project). If you are able to design your project from scratch, have it fit the Maven project structure and design assumptions, AND have no need to code extensions to Maven, then I would recommend using Maven. If not, I would recommend Ivy and Ant together!

These are all things that I and others have said before.

There are two areas of automated build systems, though, that often get overlooked. One I consider a solved problem; the other I consider a royal pain in the butt. Those two problems are artifact/metadata repositories and build-system provisioning.

Let's talk about the solved one. Artifact/metadata repositories are the storage areas that Maven and Ivy go to for metadata about dependencies, and from which they retrieve the artifacts themselves. The Apache Ivy project does not maintain any repositories itself, but Ivy is able to talk to the Maven repositories. The Apache Maven project (or some related group of people) does maintain a repository. Of course, like many volunteer-run projects, the coverage of metadata and artifacts can be spotty at times. Overall, though, it is an incredible gift that these volunteers give us.

Now we get to the steamy underside: many open source projects, for one reason or another, are unable or unwilling to publish their artifacts and metadata in the Maven repositories. For some, like Google, it appears that many projects are not published simply because their build system doesn't mesh well with the Maven toolset, so additional work would be required to post the artifacts and metadata during each release. For others, it is ideological: those projects avoid Maven as a build system and avoid doing anything to support the Maven "ecosystem".

So this means that many artifacts and metadata about those artifacts are not available out of the box when you are using the Maven or Ant+Ivy build system.

In addition, even if all the artifacts are in the Maven-run repositories, there is no guarantee that they will be available. There are many times in a week when the Maven repositories may experience slowdowns.

I started by describing this as a solved problem. Here is why: there is a set of companies out there that have put together repository software, such as Nexus and Artifactory, that acts as a repository as well as a proxy for other repositories.

At home on my build server I am running a copy of Nexus. I use Nexus primarily because, when I first tried out Maven repository managers, Nexus was much more mature than Artifactory. I haven't revisited the comparison in a while, simply because I haven't run into anything that Nexus can't handle well.

Installing and running Nexus is straightforward (at least on my Ubuntu server). It comes already pointed at the key Maven repositories in typical use. All that needs to be done otherwise is to point your Maven or Ivy installation at the Nexus repository rather than at the individual Maven repositories.

Pointing your Ivy installation at Nexus is as simple as adding the following resolver to your ivysettings.xml:


<ibiblio name="nexus" m2compatible="true" root="http://kukri:8081/nexus/content/groups/public"/>



This tells Ivy to use the ibiblio resolver (iBiblio was the website that provided the first Maven repository) and to assume that the repository is Maven 2 compatible.
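For context, that resolver element sits inside a complete ivysettings.xml along these lines (the host and port come from the example URL above, and will differ on your network):

```xml
<ivysettings>
    <!-- send every resolve through the Nexus proxy by default -->
    <settings defaultResolver="nexus"/>
    <resolvers>
        <ibiblio name="nexus" m2compatible="true"
                 root="http://kukri:8081/nexus/content/groups/public"/>
    </resolvers>
</ivysettings>
```

With defaultResolver set, every dependency resolution goes through the Nexus group URL unless a module is explicitly pinned to another resolver.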

After the first time the build fetches an artifact, Nexus has downloaded it to its local repository, and it is available from then on regardless of whether the original Maven repositories are reachable.

I think it is obvious that if you are going to use dependency management as part of the build process in the enterprise, you need something like Nexus, so that you are not at the mercy of Internet connectivity and website availability for your builds.

Does it cure everything? No. It is still a minor annoyance to deal with artifacts that are not managed in any Maven repository and have to be manually uploaded to Nexus. It is not the upload process that is annoying (Nexus makes that easy); the real headache is keeping track of which artifacts you need to do this for.

Of course, once you have an enterprise artifact repository like Nexus installed you can just back that up.

Overall, in the business/commercial enterprise world, this solution works well as is. In the next posting I will discuss where this set of solutions is inadequate for a real world open source problem.

Friday, February 27, 2009

Build Systems : Ant versus Maven

Ever since I discovered Make (25+ years ago) I have been searching for a good build system. I have used everything from Configure and Make (talk about icing on a mud pie) to JAM and now Ant and Maven.

I keep going back and trying Maven again when they do a new release. It is such a good idea that I keep going back in the hopes that the implementation and documentation will finally live up to that promise. And I think a good number of people stay with Maven because it is such a good idea that they persevere and endure the slings and arrows of outrageous documentation and implementation. Alas, each time I come away frustrated.

The Maven repository concept is pure genius. And in fact, the implementation works well enough that I use it in conjunction with Apache's Ivy to do dependency management.

What is Ivy? A set of dependency-management tasks used by Ant to pull down and provide access to the appropriate jars or other dependencies your project needs. I won't go into a tutorial on Ivy, since there are more than a few out there, but the project itself is available at http://ant.apache.org/ivy/. It does suffer from some of the same documentation issues that Maven does, but between the online forums and other people's blog posts you can usually figure something out pretty quickly.

In the next few posts I will discuss how I use Ivy in conjunction with Ant to produce a fairly clean build system with minimal bootstrap requirements.

Thursday, February 26, 2009

Discovering unused code in Java

When I set out to track down unused code in Java I came across a large number of static analysis tools that all seemed to do the job fairly well.

I used several of them including a fairly good eclipse plug-in called UCDetector. It was not fast, but it was very thorough.

By using those tools we were able to remove the obviously unused code, which resulted in a nontrivial shrinkage of about 30%. Unfortunately, because much of the code gets called via Java's reflection API, there is a large amount of code that is not so obviously unused.

Since we have a UI test suite, I thought we would run the suite against the front end and then log or track the methods actually called in the back end. Then we could eliminate the methods we found were unused.

I first tried using the JVM's JDI interface to remotely log each entry into a method (I didn't care about the exit). Unfortunately, that slowed the back-end server to a crawl; it would have taken weeks to get the data we needed.

I tried both AspectJ and JBoss AOP to produce a logging overlay, and ran into significant problems deploying them in the older JBoss 4.0.5.GA environment. The relatively nonstandard nature of our deployables did not help.

Finally, we struck gold: the YourKit profiler, which had minimal performance overhead, let us get the list of method calls that had been made. What made it especially easy was a profiler feature that allowed me to generate a dump of the call tree once an hour.

Here is the address of the profiler people: http://www.yourkit.com/

I just want to note that we were able to do all this using the evaluation version, and that it easily passed the five-minute test (in other words, we had it up and doing real work within five minutes). We have already ordered a copy.

Of course, taking 48 hours of those dumps and manually exporting them to CSV was a royal pain. To the YourKit guys: that is a hint.