Tuesday, November 5, 2013

Log4j and MDC in Grails

Log4j provides a very useful feature: MDC, the mapped diagnostic context. It can be used to store data in the context of the current thread. It may sound a bit scary, but the idea is simple.

This post is based on http://burtbeckwith.com/blog/?p=521 from Burt Beckwith's excellent blog, which is definitely worth checking out if you are interested in Grails.

Short background story...


Suppose we want to add logging to our brand new shopping system, and we want each log entry to contain the customer's shopping basket number. Our system can be used by many users at once, each performing many transactions and actions, like adding items and so on. How can we achieve that? Of course, we could add the basket number in every place where we do some logging, but that task would be boring and error-prone.

Instead, we can use MDC to store the basket number as a variable in a map.

In fact, MDC can be treated as a map of custom values for the current thread that can be used by the logger.
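To illustrate, here is a minimal sketch of the MDC API itself (assuming Log4j's org.apache.log4j.MDC class and an already configured logger named log):

import org.apache.log4j.MDC

MDC.put("basketNumber", "42") // stored only for the current thread
log.info("item added to basket") // a %X{basketNumber} pattern would print 42 here
MDC.remove("basketNumber") // clean up when the work is done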


How to do that with Grails?


Using MDC with Grails is quite simple. All we need to do is create our own custom filter which works for the given URLs and puts our data into MDC.

Filters in Grails are classes in the grails-app/conf directory whose names end with the Filters.groovy postfix. We can create such a class manually or use the Grails command:
grails create-filters info.rnowak.App.Basket

As a result, a class named BasketFilters will be created in grails-app/conf/info/rnowak/App.

Initially, the filter class looks a little bit empty:
class BasketFilters {
    def filters = {
        all(controller:'*', action:'*') {
            before = {

            }
            after = { Map model ->

            }
            afterView = { Exception e ->

            }
        }
    }
}
All we need to do is fill in the empty closures, modify the filter properties and put some data into MDC.

all is the name of our particular filter; a single class like BasketFilters (plural!) can contain many different filters. You can name it whatever you want; for this post, let's assume it is named basketFilter.

Another thing to change is the filter parameters. According to the official documentation (link) we can customize our filter in many ways: you can specify the controller to be filtered, its actions, the filtered URLs and so on. In our example we can stay with the default option, where the filter is applied to every action of every controller. If you are interested in filtering only certain URLs, use the uri parameter with an expression describing the URLs to be filtered.

The three closures already defined in the template each have their purpose and are invoked under these conditions:

  • before - as the name says, it is executed before the filtered action takes place
  • after - similarly, it is called after the action
  • afterView - called after the rendering of the action's view
OK, so now we know what these mysterious methods are and when they are called. But what can be done within them? In the official Grails docs (link again), under section 7.6.3, there is a list of properties that are available for use in a filter.

With that knowledge, we can proceed to implementing filter.

Putting something into MDC in a filter


What we want to do is quite easy: we want to retrieve the basket number from the request parameters and put it into MDC in our filter:
import org.apache.log4j.MDC

class BasketFilters {
    def filters = {
        basketFilter(controller:'*', action:'*') {
            before = {
                MDC.put("basketNumber", params.basketNumber ?: "")
            }
            after = { Map model ->
                MDC.remove("basketNumber")
            }
        }
    }
}

We retrieve the basket number from the Grails params map and then put it into MDC under the specified key ("basketNumber" in this case), which will later be used in the logger conversion pattern. It is important to remove the custom value after the action has been processed, to avoid leaks.

So we are putting something into MDC. But how do we make use of it in the logs?


We can refer to custom data in MDC in the conversion pattern using the syntax %X{key}, where key is the key we used in the filter to put the data, like:
def conversionPattern = "%d{yyyy-MM-dd HH:mm:ss} %-5p %t [%c{1}] %X{basketNumber} - %m%n"
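For completeness, here is a sketch of how such a pattern could be plugged into the log4j DSL in grails-app/conf/Config.groovy of a Grails 2.x application (the appender name and log level are just example values):

log4j = {
    appenders {
        console name: 'stdout',
                layout: pattern(conversionPattern: '%d{yyyy-MM-dd HH:mm:ss} %-5p %t [%c{1}] %X{basketNumber} - %m%n')
    }
    root {
        info 'stdout'
    }
}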


And that's it :) We've put custom data into the Log4j MDC and successfully used it in the logs to display interesting values.

Tuesday, September 17, 2013

Grails with Spock unit test + IntelliJ IDEA = No thread-bound request found

While working on a Grails project with Spock tests in IntelliJ IDEA, I encountered this error:

java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request.
 at org.springframework.web.context.request.RequestContextHolder.currentRequestAttributes(RequestContextHolder.java:131)
 at org.codehaus.groovy.grails.plugins.web.api.CommonWebApi.currentRequestAttributes(CommonWebApi.java:205)
 at org.codehaus.groovy.grails.plugins.web.api.CommonWebApi.getParams(CommonWebApi.java:65)
... // and a few more lines of the stacktrace ;)

It occurred when I tried to debug one of the tests from IntelliJ IDEA. Interestingly, this error does not happen when I run all the tests using grails test-app, for instance.

So what was the issue? With a little reading and a tip from Tomek Kalkosiński (http://refaktor.blogspot.com/), it turned out that our test was missing the @TestFor annotation, and adding it solved all the problems.

This annotation, according to the Grails docs (link), tells Spock what class is being tested and implicitly creates a field of the given type in the test class. It is somewhat strange, as the problematic test had an explicitly, "manually" created field of the proper controller type. Maybe there is a problem with mocking servlet requests?
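For reference, a minimal sketch of a fixed test could look like this (BasketController and its index action are hypothetical names):

import grails.test.mixin.TestFor
import spock.lang.Specification

@TestFor(BasketController) // implicitly provides the 'controller' field
class BasketControllerSpec extends Specification {

    def "should not fail on request attributes"() {
        when:
            controller.index()
        then:
            notThrown(IllegalStateException)
    }
}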

Saturday, September 7, 2013

Spock basics

Spock (homepage) is, as its authors say, a 'testing and specification framework'. It combines a very elegant and natural syntax with powerful capabilities. And, what is most important, it is easy to use.

One note at the very beginning: I assume that you are already familiar with the principles of Test Driven Development and that you know how to use a testing framework such as JUnit.

So how can I start?


Writing Spock specifications is very easy. We need a basic configuration of the Spock and Groovy dependencies (if you are using a mavenized project with Eclipse, take a look at my previous post: Spock, Java and Maven). Once we have everything set up and running smoothly, we can write our first specs (a spec, or specification, is the equivalent of a test class in other frameworks like JUnit or TestNG).

What is great about Spock is the fact that we can use it to test Groovy projects, pure Java projects, or even mixed ones.


Let's go!


Every spec class must inherit from the spock.lang.Specification class; only then will the test runner recognize it as a test class and run its tests. We will write a few specs for a simple User class, plus a few tests not connected with that particular class.

We start with defining our class:
import spock.lang.*

class UserSpec extends Specification {

}
Now we can proceed to defining the test fixtures and test methods.

All the activities we want to perform before each test method go into the def setup() {...} method, and everything we want to run after each test goes into the def cleanup() {...} method (they are the equivalents of the JUnit methods annotated with @Before and @After).

It can look like this:
class UserSpec extends Specification {
    User user
    Document document

    def setup() {
        user = new User()
        document = DocumentTestFactory.createDocumentWithTitle("doc1")
    }

    def cleanup() {

    }
}
Of course we can use field initialization for instantiating test objects:
class UserSpec extends Specification {
    User user = new User()
    Document document = DocumentTestFactory.createDocumentWithTitle("doc1")

    def setup() {

    }

    def cleanup() {

    }
}

Which is more readable or preferred? It is just a matter of taste, because according to the Spock docs the behaviour is the same in both cases.

It is worth mentioning that the JUnit @BeforeClass/@AfterClass equivalents are also present in Spock, as def setupSpec() {...} and def cleanupSpec() {...}. They will be run before the first and after the last test method.
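A minimal sketch of how they are declared:

class UserSpec extends Specification {
    def setupSpec() {
        // runs once, before the first feature method
    }

    def cleanupSpec() {
        // runs once, after the last feature method
    }
}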


First tests


In Spock, every method in a specification class, except the setup/cleanup methods, is treated by the runner as a test method (unless you annotate it with @Ignore).

A very interesting feature of Spock and Groovy is the ability to name methods with full sentences, just like regular strings:
class UserSpec extends Specification {
    // ...

    def "should assign coment to user"() {
        // ...
    }
}
With such a naming convention we can write a real specification and include details about the specified behaviour in the method name, which is very convenient when reading test reports and analyzing errors.

A test method (also called a feature method) is logically divided into a few blocks, each with its own purpose. Blocks are defined like labels in Java (but they are transformed with Groovy AST transformations), and some of them must appear in the code in a specific order.

The most basic and common schema for a Spock test is:
class UserSpec extends Specification {
    // ...

    def "should assign coment to user"() {
        given:
            // do initialization of test objects
        when:
            // perform actions to be tested
        then:
            // collect and analyze results
    }
}

But there are more blocks like:
  • setup
  • expect
  • where
  • cleanup
In the next sections I am going to describe each block briefly, with small examples.

given block

This block is used to set up test objects and their state. It has to be the first block in a test and cannot be repeated. Below is a little example of how it can be used:
class UserSpec extends Specification {
    // ...
    
    def "should add project to user and mark user as project's owner"() {
        given:
            User user = new User()
            Project project = ProjectTestFactory.createProjectWithName("simple project")
        // ...
    }
}

In this code, the given block contains the initialization of test objects and nothing more. We create a simple user without any specified attributes and a project with a given name. If some of these objects can be reused in more feature methods, it may be worth moving their initialization into the setup method.

when and then blocks

The when block contains the action we want to test (the Spock documentation calls it a 'stimulus'). This block always occurs in a pair with the then block, where we verify that the response satisfies certain conditions. Assume we have this simple test case:
class UserSpec extends Specification {
    // ...
    
    def "should assign user to comment when adding comment to user"() {
        given:
            User user = new User()
            Comment comment = new Comment()
        when:
            user.addComment(comment)
        then:
            comment.getUserWhoCreatedComment().equals(user)
    }

    // ...
}

In the when block there is a call to the tested method and nothing more. Once we are sure our action has been performed, we can check for the desired conditions in the then block.

The then block is very well structured: every line in it is treated by Spock as a boolean statement. That means Spock expects us to write instructions containing comparisons and expressions returning true or false, so we can create a then block with statements like these:
user.getName() == "John"
user.getAge() == 40
!user.isEnabled()
Each of these lines will be treated as a single assertion and evaluated by Spock.

Sometimes we expect our method to throw an exception under given circumstances. We can write a test for it using the thrown method:
class CommentSpec extends Specification {
    def "should throw exception when adding null document to comment"() {
        given:
            Comment comment = new Comment()
        when:
            comment.setCommentedDocument(null)
        then:
            thrown(RuntimeException)
    }
}

In this test we want to make sure that passing an incorrect parameter is correctly handled by the tested method and that the method throws an exception in response. In case you want to be certain that a method does not throw a particular exception, simply use the notThrown method.
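For example, a sketch of the opposite check, reusing the factory from the earlier examples, could look like this:

class CommentSpec extends Specification {
    def "should accept non-null document without exception"() {
        given:
            Comment comment = new Comment()
            Document document = DocumentTestFactory.createDocumentWithTitle("doc1")
        when:
            comment.setCommentedDocument(document)
        then:
            notThrown(RuntimeException)
    }
}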


expect block

The expect block is primarily used when separating the when and then blocks would feel unnatural. It is especially useful for simple tests (and according to TDD rules, all tests should be simple and short) with only one condition to check, like in this example (it is trivial but should show the idea):
def "should create user with given name"() {
    given:
        User user = UserTestFactory.createUser("john doe")
    expect:
        user.getName() == "john doe"
}



More blocks!


Those were very simple tests with the standard Spock test layout and the canonical division into given/when/then parts. But Spock offers more possibilities for writing tests and provides more blocks.


setup/cleanup blocks

These two blocks have the very same functionality as the def setup and def cleanup methods in the specification: they allow us to perform some actions before and after a test. But unlike those methods (which are shared between all tests), the blocks work only in the methods they are defined in.
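A little sketch of how these blocks can be used inside a single feature method (the temporary file is just an example of a resource needed by one test only):

def "should work with a resource needed only by this test"() {
    setup:
        File tempFile = File.createTempFile("spock", ".tmp")
    expect:
        tempFile.exists()
    cleanup:
        tempFile.delete() // runs even if the test fails
}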


where - easy way to create readable parameterized tests

Very often when we create unit tests, there is a need to "feed" them with sample data to test various cases and border values. With Spock this task is very easy and straightforward. To provide test data to a feature method, we need to use the where block. Let's take a look at a little piece of code:

def "should successfully validate emails with valid syntax"() {
    expect:
        emailValidator.validate(email) == true
    where:
        email << [ "test@test.com", "foo@bar.com" ]
}

In this example, Spock creates a variable called email which is used when calling the method being tested. Internally, the feature method is called once, but the framework iterates over the given values and evaluates the expect/when block as many times as there are values (however, if we use the @Unroll annotation, Spock can create a separate run for each of the given values; more about that in one of the next examples).

Now, let's assume that we want our feature method to test both successful and failed validations. To achieve that goal, we can create a few parameterized variables for both the input parameter and the expected result. Here is a little example:

def "should perform validation of email addresses"() {
    expect:
        emailValidator.validate(email) == result
    where:
        email << [ "WTF", "@domain", "foo@bar.com", "a@test" ]
        result << [ false, false, true, false ]
}
Well, it looks nice, but Spock can do much better. It offers a tabular format for defining test parameters, which is much more readable and natural. Let's take a look:
def "should perform validation of email addresses"() {
    expect:
        emailValidator.validate(email) == result
    where:
        email           | result
        "WTF"           | false
        "@domain"       | false
        "foo@bar.com"   | true
        "a@test"        | false
}
In this code, each column of our "table" is treated as a separate variable, and the rows are the values for subsequent test iterations.

Another useful Spock feature for parameterized tests is its ability to "unroll" each iteration. The feature method from the previous example could be defined as follows (the body stays the same, so I do not repeat it):
@Unroll("should validate email #email")
def "should perform validation of email addresses"() {
    // ...
}
With that annotation, Spock generates a separate method for each set of values, each with its own name, and runs them separately. We can use variables from the where block in the @Unroll argument by preceding them with the '#' sign, which signals Spock to use their values in the generated method names.
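For the data from the tabular example above, the generated test names would look roughly like this:

should validate email WTF
should validate email @domain
should validate email foo@bar.com
should validate email a@test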


What next?


Well, that was just a quick and short journey through Spock and its capabilities. However, with this basic tutorial you are ready to write many unit tests. In one of my future posts I am going to describe more features of Spock, focusing especially on its mocking abilities.

Saturday, August 10, 2013

Integration tests with Maven and JUnit

There is no doubt that the integration test phase is crucial in modern application development. We need to test the behaviour of our subsystems and how they interact with other modules.

Using JUnit and Maven, it's quite easy to create integration tests and run them in a separate phase from the unit tests. This is very important, because integration tests tend to take much more time than unit ones, as they work with databases, network connections, other subsystems etc. Therefore, we want to run them less often.

With JUnit in version >= 4.8 there are two approaches to creating and running integration tests:
  • using naming conventions and specifying separate executions for the maven-surefire-plugin
  • creating a marker interface, marking integration tests with the @Category annotation and running the tests from the maven-failsafe-plugin (although it is possible to use surefire in both cases)

Separate executions


The first method needs a naming convention, like naming all unit tests with the "..Test.java" postfix (or "..Spec.groovy" ;) and integration tests with "..IntegrationTest.java". Then we need to change the maven-surefire-plugin configuration:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.15</version>
    <configuration>
        <skip>true</skip>    
    </configuration>
</plugin>
What I did here is force Maven to skip the default test execution. Instead, I will configure two separate executions (just below the <configuration> section):
<executions>
    <execution>
        <id>unit-tests</id>
        <phase>test</phase>
        <goals>
            <goal>test</goal>
        </goals>
        <configuration>
            <skip>false</skip>
            <includes>
                <include>**/*Test.class</include>
                <include>**/*Spec.class</include>
            </includes>
            <excludes>
                <exclude>**/*IntegrationTest.class</exclude>
            </excludes>
        </configuration>
    </execution>
    <execution>
        <id>integration-tests</id>
        <phase>integration-test</phase>
        <goals>
            <goal>test</goal>
        </goals>
        <configuration>
            <skip>false</skip>
            <includes>
                <include>**/*IntegrationTest.class</include>
            </includes>
        </configuration>
    </execution>
</executions>
In the unit test execution I include all tests that match the naming convention for unit tests (both JUnit and Spock ones) and exclude files matching the integration test pattern; in the integration test execution I did the opposite ;)


Annotations

The other method requires defining a marker interface like this:

package info.rnowak.webtex.common.test;

public interface IntegrationTest {

}
Then we can mark our integration test class with:
@Category(IntegrationTest.class)
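A sketch of a marked test class could look like this (the class and method names are hypothetical; I use Groovy here, but it looks the same in Java):

import org.junit.Test
import org.junit.experimental.categories.Category

import info.rnowak.webtex.common.test.IntegrationTest

@Category(IntegrationTest.class)
class RepositoryIntegrationTest {

    @Test
    void shouldTalkToRealDatabase() {
        // test code working with a real subsystem
    }
}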
The next thing is changing the surefire plugin configuration to omit the integration tests:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.15</version>
    <configuration>
        <includes>
            <include>**/*Test.class</include>
            <include>**/*Spec.class</include>
        </includes>  
        <excludedGroups>info.rnowak.webtex.common.test.IntegrationTest</excludedGroups> 
    </configuration>
</plugin>
What has changed here is the new <excludedGroups> tag with the name of the interface which marks our integration tests.
Next, we need to add and configure the maven-failsafe-plugin in order to run the tests from our integration test group:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.15</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
            </goals>
            <configuration>
                <groups>info.rnowak.webtex.common.test.IntegrationTest</groups>
                <includes>
                    <include>**/*.class</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>
With this configuration, failsafe will run only the tests marked with the @Category(IntegrationTest.class) annotation, no matter what their names are.
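Both approaches bind the integration tests to Maven's integration-test phase, so they can be executed together with the whole build with, for instance:
mvn clean verify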


What is better?


Well, in my opinion it's just a matter of taste and style. Annotating each integration test class may be a little cumbersome, but we are not limited to naming classes according to a specified convention. On the other hand, unit and integration tests are usually named according to some convention anyway, so the annotations are not a big deal either.

Unable to instantiate default tuplizer

I wrote a few hbm mappings for domain classes in my recent project, and I got an exception like this:
org.hibernate.HibernateException: Unable to instantiate default tuplizer [org.hibernate.tuple.entity.PojoEntityTuplizer]
Of course, my first thought was to google it, and I found some interesting answers. The most common causes of this exception are:
  • missing getters or setters; what's more, even a typo or wrong letter case counts (like getParentproject instead of getParentProject when the field in the class and in the mapping file is defined as parentProject) - see the sketch below
  • missing default constructor
  • missing dependency on the javassist library
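To illustrate the first two causes, here is a hypothetical sketch of what Hibernate expects from a mapped class:

class Project {
    private Project parentProject

    Project() {} // the default constructor required by Hibernate

    Project getParentProject() { return parentProject } // not getParentproject!
    void setParentProject(Project parentProject) { this.parentProject = parentProject }
}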

My files seemed to be correctly defined, so it had to be the missing dependency.
To fix it, I added these lines to my pom.xml:
<dependency>
    <groupId>org.javassist</groupId>
    <artifactId>javassist</artifactId>
    <version>3.18.0-GA</version>
</dependency>

Well, it shouldn't be a surprise, because the full stacktrace of this error contains the entry:
java.lang.ClassNotFoundException: javassist.util.proxy.MethodFilter
which explicitly indicates the root of the problem ;)

(And BTW: in my recent project I'm stuck with a quite old version of Hibernate: 3.6.3.)

Thursday, August 8, 2013

Spock, Java and Maven

A few months ago I came across Groovy, a powerful language for the JVM platform which combines the power of Java with abilities typical of scripting languages (dynamic typing, metaprogramming).

Together with Groovy I discovered the Spock framework (https://code.google.com/p/spock/), a specification framework for Groovy (of course, you can test Java classes too!). But Spock is not only a test/specification framework; it also contains powerful mocking tools.

Even though Spock is dedicated to Groovy, there is no problem with using it to test Java classes. In this post I'm going to describe how to configure a Maven project to build and run Spock specifications together with traditional JUnit tests.


First, we need to prepare the pom.xml and add the necessary dependencies and plugins.

Two obligatory libraries are:
<dependency>
    <groupId>org.spockframework</groupId>
    <artifactId>spock-core</artifactId>
    <version>0.7-groovy-2.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.codehaus.groovy</groupId>
    <artifactId>groovy-all</artifactId>
    <version>${groovy.version}</version>
    <scope>test</scope>
</dependency>
where groovy.version is a property defined in pom.xml for more convenient use and easy version changes, just like this:
<properties>
    <gmaven-plugin.version>1.4</gmaven-plugin.version>
    <groovy.version>2.1.5</groovy.version>
</properties>

I've added a property for the gmaven-plugin version for the same reason ;)

Besides these two dependencies, we can use a few additional ones providing extra functionality:
  • cglib - for class mocking
  • objenesis - enables mocking classes without default constructor
To add them to the project, put these lines in the <dependencies> section of pom.xml:
<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib-nodep</artifactId>
    <version>3.0</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.objenesis</groupId>
    <artifactId>objenesis</artifactId>
    <version>1.3</version>
    <scope>test</scope>
</dependency>

And that's all for the dependencies section. Now we will focus on the plugins necessary to compile Groovy classes. We need to add the gmaven-plugin with a gmaven-runtime-2.0 dependency in the plugins section:
<plugin>
    <groupId>org.codehaus.gmaven</groupId>
    <artifactId>gmaven-plugin</artifactId>
    <version>${gmaven-plugin.version}</version>
    <configuration>
        <providerSelection>2.0</providerSelection>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>compile</goal>
                <goal>testCompile</goal>
            </goals>
        </execution>
    </executions>
    <dependencies>
        <dependency>
            <groupId>org.codehaus.gmaven.runtime</groupId>
            <artifactId>gmaven-runtime-2.0</artifactId>
            <version>${gmaven-plugin.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>org.codehaus.groovy</groupId>
                    <artifactId>groovy-all</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.codehaus.groovy</groupId>
            <artifactId>groovy-all</artifactId>
            <version>${groovy.version}</version>
        </dependency>
    </dependencies>
</plugin>

With this configuration we can use Spock and write our first specifications. But there is one issue: the default settings of the maven-surefire-plugin demand that test classes end with the "..Test" postfix, which is fine if we want to use such a naming scheme for our Spock tests. But if we want to name them like CommentSpec.groovy, or anything with a "..Spec" ending (which in my opinion is much more readable), we need to make a little change in the surefire plugin configuration:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.15</version>
    <configuration>
        <includes>
            <include>**/*Test.java</include>
            <include>**/*Spec.java</include>
        </includes>
    </configuration>
</plugin>

As you can see, there is a little trick ;) We add an include directive for standard Java JUnit tests ending with the "..Test" postfix, but there is also an entry for Spock tests ending with "..Spec". And here is the trick: we must write "**/*Spec.java", not "**/*Spec.groovy", otherwise Maven will not run the Spock tests (which is strange, and I spent some time figuring out why Maven couldn't run my specs).

A little update: instead of the "*.java" postfix for both types of tests, we can write "*.class", which is in my opinion more readable and clean:
<include>**/*Test.class</include>
<include>**/*Spec.class</include>
(thanks to Tomek Pęksa for pointing this out!)
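To quickly verify the whole setup, a minimal smoke-test spec (with a hypothetical name) can be placed in src/test/groovy:

import spock.lang.Specification

class SetupSmokeSpec extends Specification {
    def "build runs groovy specs"() {
        expect:
            1 + 1 == 2
    }
}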

With such a configuration, we can write traditional JUnit tests and put them in the src/test/java directory, or Groovy Spock specifications and place them in src/test/groovy. And both will work together just fine :) In one of my next posts I'll write something about using Spock and its mocking abilities in practice, so stay tuned.

System.out.println("Hello world!")

Welcome to my blog!

Yes, it's going to be a tech blog about programming and so on.
Yes, it's going to focus on Java.
But does that mean it will be boring? I don't think so ;)


I'm going to write mostly about the Java and Groovy languages. Of course, some notes about popular frameworks like Spring or Grails will appear as well.

From time to time I'll post something about C++, as it is still one of my favorites among programming languages, even if it's not as fancy and popular as its modern rivals.

I'm just a student and junior programmer, still new to the world of enterprise application programming, so most posts are going to be tips & tricks about technologies, stuff I've learned or discovered, and texts about problems I've encountered during my work.