Wednesday, October 14, 2015

Getting started with Haskell, stack and spacemacs

It has been a very long time since my last blog post. During this period I have become a big enthusiast of functional programming, especially in the Haskell language. In this and the following posts I am going to show that Haskell can be very pleasant to use and that, with the proper tools, we can develop applications without unnecessary burden.

Recently, many useful tools and editors have emerged, and they are really easy and convenient to use. In this post I intend to present the toolchain that I use in my everyday Haskell programming.

This post is not an introduction to the Haskell language. It describes how to set up Haskell with the stack build tool and spacemacs as an editor. In a series of upcoming posts I am also planning to cover Haskell basics and their usage in a little project of mine.

The only necessary prerequisite is having the most recent version of Emacs installed on your system.

New project build/management tool - stack

Managing dependencies and the build process is always a tiresome task, and there are many tools to ease this work. In Haskell, the most popular dependency management tool is cabal, which is based on the Hackage repository (https://hackage.haskell.org).

One of the most desired features of a build tool is reproducible builds. We would like to build a project in a new environment, or on a new developer's machine, and get the same outcome in every situation. This requires the same compiler version, the same libraries, and so on.

Lately, a new tool came out: stack (https://github.com/commercialhaskell/stack). It is aimed at reproducible builds and simple project management, and it takes care of properly configuring your project's environment.

Stack achieves reproducible builds by using curated package snapshots managed by special versioned resolvers; underneath, it uses cabal as the package manager. Packages are grouped into resolvers, of which there are two types: LTS (long-term support) and nightly. The latter contains the freshest package versions, but with a drawback: potential instability. LTS resolvers, on the other hand, contain fixed versions of packages which are tested and should not cause any problems. If you do not need the latest package versions, an LTS resolver should entirely satisfy your project's needs.

What is more, stack can also download and locally set up the Haskell compiler in the version required by your project.

stack in action

Using stack to create a new project is really easy. After installing it on our machine (installation is described in the documentation on the project's GitHub page linked above), all we need to do to create a new project is execute the steps below in a terminal:
stack new hello-haskell
cd hello-haskell
stack setup
stack build
stack exec hello-haskell-exe
These commands create a new project named hello-haskell. stack setup initialises the environment, installing the compiler (if required) and the libraries necessary for the project. stack build compiles the project, and stack exec … runs the executable built earlier.

If you would like to play around with your project's code, type stack ghci in your terminal. This launches the Haskell interactive console, ghci, in the version specified in the project configuration.

Another stack command worth mentioning is stack test, which executes the test suites declared in the test/ directory.

Dependencies and project settings are placed in the hello-haskell.cabal file. It is a standard cabal configuration file where we can add the desired libraries, set the project version, the licence, a link to the repository, and so on. I suggest reading some of the cabal documentation if you have any doubts, but in my opinion this file is very easy and straightforward to edit.
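For illustration, the executable fragment of such a file might look roughly like this (a sketch; the exact layout generated by stack new may differ, and vector is just an example dependency I added):
executable hello-haskell-exe
  hs-source-dirs:      app
  main-is:             Main.hs
  build-depends:       base
                     , hello-haskell
                     , vector
  default-language:    Haskell2010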

Settings specific to stack are placed in the stack.yaml file. The most important option is resolver, which determines the version of the GHC compiler and the libraries your project will be using.
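For example, a minimal stack.yaml might look like this (lts-3.10 is just an example resolver value; pick the one that suits your project):
resolver: lts-3.10
packages:
- '.'
extra-deps: []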

There is one thing you might encounter while setting up project dependencies: what if you need a library that is not present in any of the stack resolvers? In that case we must go to the stack.yaml file and edit or add this section:
extra-deps:
- Vec-1.0.5
With this information stack will download and build the desired package from the Hackage repository. In my case I needed the Vec library, so I added it to the list using its full name, including the version number.

All details and gotchas are described in stack's wiki on GitHub. Be sure to check it frequently, as stack is still a very young tool and can change quite often. Documentation is a strong point of stack: it describes many aspects of its usage very well.


Powerful editor in new edition - spacemacs

I have spent a lot of time searching for an editor that is easy to use with Haskell and integrates well with its tools, such as the REPL. I worked with Sublime Text for some time, as it integrates quite well with Haskell through the SublimeHaskell package. Recently, however, I discovered the spacemacs project.

spacemacs (https://github.com/syl20bnr/spacemacs) is an easy-to-use Emacs kit focused on ergonomics. What is great about it is that it embraces Emacs's Evil mode, which mimics Vim-style editing and document navigation. This makes spacemacs really straightforward for users who know Vim. It is also possible to mix the Vim and Emacs styles at the same time.

In my opinion this is a really great feature, as we can use the editor in whichever way we prefer or find more convenient. Whether we are Vim lovers or Emacs fans, or want to mix both, spacemacs allows us to work in whatever style we like. I personally use mostly the Vim-like mode, with only a few of the original Emacs commands and with spacemacs shortcuts for many actions.

spacemacs is based on layers, which add functionality to the editor. Layers can enrich our development environment with syntax checking, git integration, code completion, and integration with build tools for many languages.

One of these layers is the haskell layer. It supports the language quite well, with syntax checking, code suggestions, a built-in REPL, and code templates for common patterns.

I refer you to the official documentation for detailed installation instructions on various platforms. Once spacemacs is on our disk, we can proceed.

The entire spacemacs configuration is placed in the .spacemacs file in your home directory. This file is written in a Lisp-like language and contains many options to change or add. Here is my current .spacemacs file, on which this section of the post is based:
https://gist.github.com/rafalnowak/202aba0ee7986515345b

In dotspacemacs-configuration-layers we need to add the haskell layer (I also recommend enabling the auto-completion and syntax-checking layers as well). In order to get the layer to work properly, we need to install some additional packages:
stack install stylish-haskell hlint hasktags
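For reference, the layers fragment of my .spacemacs looks roughly like this (a sketch; see the gist linked above for the full file):
(defun dotspacemacs/layers ()
  (setq-default
   dotspacemacs-configuration-layers
   '(auto-completion
     syntax-checking
     haskell)))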
The next step is adding these two settings to .spacemacs, just after the text ;; User initialization goes here:
(add-hook 'haskell-mode-hook 'turn-on-haskell-indentation)
(add-to-list 'exec-path "~/.local/bin/")
This makes spacemacs aware of the Haskell indentation style and adds the binaries installed by stack to the path. This is important, as we want our editor to be able to run the Haskell tools.

A full description, as well as platform-specific problems, can be found in the Haskell layer documentation: https://github.com/syl20bnr/spacemacs/tree/master/layers/%2Blang/haskell There is also a list of useful shortcuts used by this layer.

One essential note: if you wish to use spacemacs with ghc-mod integration, you will need ghc-mod in at least version 5.4.0.0; earlier versions do not work properly with the Haskell layer and stack. To install this version of ghc-mod, you must add cabal-helper-0.6.1.0 to the extra-deps section in your stack.yaml and run
stack install ghc-mod
which should now proceed without problems.
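The corresponding fragment of stack.yaml would then look like this:
extra-deps:
- cabal-helper-0.6.1.0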

After this configuration we are ready to use the full power of Haskell and stack in our projects, with solid support from the editor. If you have followed the steps above, you will see that spacemacs colours Haskell syntax, checks its correctness, and gives you code completion tips. There is also an interactive Haskell console available under the SPC m s s key combination, which makes quick testing of new functions possible.

Unfortunately, spacemacs has some disadvantages too. For me, the biggest drawback is its responsiveness: during code completion or syntax checking it can sometimes hang the editor for a second or less.


Summary

As we have seen, Haskell with stack and spacemacs is really powerful yet still simple to use. With stack we get reproducible builds with specific compiler and library versions, as well as easy project management. spacemacs allows us to write code quickly, with support for Haskell syntax, build tools, and code completion.

In my next post I am going to describe my experiences with my first bigger Haskell project - a functional ray tracer I have been working on recently - https://github.com/rafalnowak/RaytracaH

Thursday, June 5, 2014

Spring Boot and AngularJS quick start

In this post I am going to show a very simple and quick example of a web application using Spring Boot with AngularJS. The app implements simple functionality for sending and storing imaginary messages. I've also used gradle for build management. All the code is public and available on my GitHub: https://github.com/rafalnowak/spring-boot-fun

Introduction to Spring Boot

Spring Boot is a quite new project created under the SpringSource umbrella. Only a few months ago it reached version 1.0 and general availability status.

The most important and prominent goals of this project are:
  • providing the ability to create simple web apps very quickly
  • minimizing the amount of XML boilerplate usually necessary to configure every Spring application
  • making most of the app configuration automatic
  • simplifying running and deployment by using embedded Tomcat or Jetty servers, which can run our applications without any special effort or deploy process
  • providing lots of so-called Spring Boot starters: packages containing default configuration for various areas of Spring, such as database access via JPA, aspect-oriented programming, or security
As we can see, it looks promising. In this post I'll show the few basic steps necessary to create and boot a simple Spring Boot web application.

First steps

Although Spring Boot can be used with its special command-line interface tools, I've decided to use it with the very popular gradle build system.

Spring Boot comes with plugins that integrate with maven or gradle and allow us to easily run the application in an embedded server. The instructions necessary to include the gradle plugin are shown in the snippet below:
buildscript {
    repositories {
        mavenCentral()
    }

    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:1.0.1.RELEASE")
    }
}
With this basic config we can proceed to the next steps. In my sample project I've divided the application into two modules: one contains the persistence layer with domain objects and JPA repositories, and the other contains the presentation layer with controllers. Of course, this is completely optional, and in such a simple project it does not add any benefit, but it shows how to create a multi-module project in gradle. The next code fragment contains the common configuration for all modules in our gradle build:
allprojects {
    apply plugin: "java"

    version = '1.0-SNAPSHOT'
    group = "info.rnowak.springBootFun"

    repositories {
        mavenLocal()
        mavenCentral()
    }

    dependencies {
        compile "org.springframework.boot:spring-boot-starter-test:1.0.1.RELEASE"
        compile "com.google.guava:guava:16.0.1"
        compile "com.h2database:h2:1.3.175"

        testCompile "junit:junit:4.11"
        testCompile "org.mockito:mockito-all:1.9.5"
        testCompile "org.assertj:assertj-core:1.5.0"
    }
}
Now that we have the common configuration in place, we can declare the basic modules of the application:
project(":persistence") {
    dependencies {
        compile "org.springframework.boot:spring-boot-starter-data-jpa:1.0.1.RELEASE"

        testCompile project(":webapp")
    }
}

project(":webapp") {
    apply plugin: "spring-boot"

    dependencies {
        compile project(":persistence")
        compile "org.springframework.boot:spring-boot-starter-web:1.0.1.RELEASE"
    }
}
The most important parts are the inclusion of the special Spring Boot starter packages and the application of the spring-boot plugin in one of the subprojects.

Every starter package contains dependencies for all the libraries needed by a given feature. For example, the JPA starter has Hibernate dependencies, and the AOP starter contains the spring-aop and AspectJ libraries. What is more, along with these libraries Spring Boot also provides default configuration.

It is a simple quick-start configuration, but it is enough for starter applications.

Let's start the fun with Spring!

Our next step is creating the starting point of the application. With Spring Boot this can be done by writing a regular main method in some class. You then only need to annotate the class with the special Spring Boot auto-configuration annotations and the application is ready to run! An example start class is shown below:
package info.rnowak.springFun;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.context.annotation.ComponentScan;

@ComponentScan
@EnableAutoConfiguration
public class SpringFun {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(SpringFun.class);
        app.setShowBanner(false);
        app.run(args);
    }
}
Well, this step looks simple, but it has a few interesting implications for the whole application.

Firstly, this class enables component scanning for Spring-managed beans with the root package info.rnowak.springFun, because it is placed in that package.

Another thing is that this main method allows us to run the application using the command gradle run. By default it uses an embedded Tomcat running on port 8080. Of course, this behaviour can be changed, and it is very well described in the project documentation. It is also possible to create a runnable jar from our application.
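For example, to change the port we can put a single property into src/main/resources/application.properties (server.port is one of Spring Boot's standard properties; 9000 is just an example value):
server.port=9000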

With the main class defined, we can create all the other classes in our application: controllers, repositories, domain classes, services, and so on. I won't show exact examples of such classes, because they do not differ in any way from their counterparts in classic Spring. If you are interested in my example, please take a look at the Spring Boot Fun repo.

Add some AngularJS

Another "side effect" of the Spring Boot main configuration class is that we get a few default view resolvers. A view resolver, in short, is the Spring feature which maps view names to specific view files.

Spring Boot's default configuration sets up a lookup path for an index.html file, which will be served by a default controller. The framework looks for this file in the public/, webapp/, or resources/ directory on the classpath. So you can just put an index.html file in one of these locations and Spring Boot will create a controller serving this view. And this is how we can use AngularJS in our project. Of course it's not the only way, but it is the simplest method of using AngularJS with a Spring Boot application.

In our example application the index.html file was placed in the webapp/ directory, and it looks like this:
<!DOCTYPE html>

<html ng-app="springFun">

<head>
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">

    <script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.0/jquery.min.js"></script>
    <script src="//netdna.bootstrapcdn.com/bootstrap/3.1.1/js/bootstrap.min.js"></script>

    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0-beta.4/angular.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0-beta.4/angular-route.min.js"></script>
    <script src="js/application.js"></script>
    <script src="js/controllers.js"></script>
</head>

<body>

    <nav class="navbar navbar-default" role="navigation">
        <div class="container-fluid">
            <div class="navbar-header">
                <a class="navbar-brand" href="#/index">Spring Boot Fun</a>
            </div>
            <div class="collapse navbar-collapse">
                <ul class="nav navbar-nav">
                    <li><a href="#/list">Messages list</a></li>
                    <li><a href="#/about">About</a></li>
                </ul>
            </div>
        </div>
    </nav>

    <div ng-view></div>

    <footer class="text-center">
        Spring Boot Fun
    </footer>

</body>

</html>
This file includes all the angular libraries used in the project, the controllers definition, and the main application module with routing defined.
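For illustration, the main module in js/application.js might look roughly like this (a sketch: the view template paths and the controller name are placeholders here, and the actual file is in the repository):
var springFun = angular.module('springFun', ['ngRoute']);

springFun.config(function($routeProvider) {
    // map the #/... links from the navbar to views (paths are placeholders)
    $routeProvider
        .when('/index', { templateUrl: 'views/index.html' })
        .when('/list', { templateUrl: 'views/list.html', controller: 'MessagesController' })
        .when('/about', { templateUrl: 'views/about.html' })
        .otherwise({ redirectTo: '/index' });
});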

The rest of the files are available in the repository mentioned earlier, so I will not provide all the listings here, as that would just be a waste of virtual space :)

Summary

As we can see, Spring Boot greatly decreases the time needed to write and run a simple Java web application. It reduces the amount of XML configuration and provides a lot of default values and conventions. But if we want precise control over some settings, Spring Boot does not forbid it, and the programmer can set everything manually.

Deployment is also simplified, because the Spring Boot gradle or maven plugin allows us to run the application in place with these tools. We can also create a runnable jar that contains an embedded Tomcat or Jetty. And if that is not what we want, we can always use the war plugin, create a regular, traditional war, and deploy it in the classical way.

Spring Boot also has great documentation, and I strongly encourage everybody interested in this tool to read it: Spring Boot Docs

Tuesday, November 5, 2013

Log4j and MDC in Grails

Log4j provides a very useful feature: MDC (mapped diagnostic context). It can be used to store data in the context of the current thread. It may sound a bit scary, but the idea is simple.

My post is based on http://burtbeckwith.com/blog/?p=521 from Burt Beckwith's excellent blog, which is definitely worth checking out if you are interested in Grails.

Short background story...


Suppose we want to add logging to our brand new shopping system, and we want each log entry to contain the customer's shopping basket number. Our system can be used by many users at once, performing many transactions and actions like adding items. How can we achieve that? Of course, we could add the basket number in every place where we log something, but that task would be boring and error-prone.

Instead, we can use MDC to store a variable with the basket number in a map.

In fact, MDC can be treated as a map of custom values for the current thread that can be used by the logger.


How to do that with Grails?


Using MDC with Grails is quite simple. All we need to do is create our own custom filter which works for given URLs and puts our data into MDC.

Filters in Grails are classes in the grails-app/conf/ directory whose names end with the *Filters.groovy suffix. We can create such a class manually or use the Grails command:
grails create-filters info.rnowak.App.Basket

As a result, a class named BasketFilters will be created in grails-app/conf/info/rnowak/App.

Initially, the filter class looks a little empty:
class BasketFilters {
    def filters = {
        all(controller:'*', action:'*') {
            before = {

            }
            after = { Map model ->

            }
            afterView = { Exception e ->

            }
        }
    }
}
All we need to do is fill in the empty closures, modify the filter properties, and put some data into MDC.

all is the general name of our filter; the BasketFilters class (plural!) can contain many different filters. You can name it whatever you want; for this post let's assume it is named basketFilter.

Another thing is changing the filter parameters. According to the official documentation (link) we can customize our filter in many ways: you can specify the controller to be filtered, its actions, the filtered URLs, and so on. In our example you can stay with the default option, where the filter is applied to every action of every controller. If you are interested in filtering only some URLs, use the uri parameter with an expression describing the URLs to be filtered.

The three closures already defined in the template each have their purpose, and they are invoked under these conditions:

  • before - as the name says, it is executed before the filtered action takes place
  • after - similarly, it is called after the action
  • afterView - called after the rendering of the action's view
OK, so now we know what these mysterious methods are and when they are called. But what can be done within them? In the official Grails docs (link again), under section 7.6.3, there is a list of properties available for use in a filter.

With that knowledge, we can proceed to implementing the filter.

Putting something into MDC in filter


What we want to do is quite easy: retrieve the basket number from the request parameters and put it into MDC in our filter:
import org.apache.log4j.MDC

class BasketFilters {
    def filters = {
        basketFilter(controller:'*', action:'*') {
            before = {
                // put the basket number into MDC for the current request's thread
                MDC.put("basketNumber", params.basketNumber ?: "")
            }
            after = { Map model ->
                // clean up to avoid leaking the value to other requests
                MDC.remove("basketNumber")
            }
        }
    }
}

We retrieve the basket number from the Grails params map and put it into MDC under a specified key ("basketNumber" in this case), which will later be used in the logger conversion pattern. It is important to remove the custom value after the action is processed, to avoid leaks.

So we are putting something into MDC. But how do we make use of it in logs?


We can refer to custom data in MDC in the conversion pattern using the syntax %X{key}, where key is the key we used in the filter to put the data, like this:
def conversionPattern = "%d{yyyy-MM-dd HH:mm:ss} %-5p %t [%c{1}] %X{basketNumber} - %m%n"
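With this pattern, a log entry for a request carrying basketNumber=42 could look roughly like this (illustrative output):
2013-11-05 14:02:11 INFO  http-bio-8080-exec-1 [BasketService] 42 - adding item to basket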


And that's it :) We've put custom data into the log4j MDC and successfully used it in our logs to display interesting values.

Tuesday, September 17, 2013

Grails with Spock unit test + IntelliJ IDEA = No thread-bound request found

While working on a Grails project using Spock tests in IntelliJ IDEA, I encountered this error:

java.lang.IllegalStateException: No thread-bound request found: Are you referring to request attributes outside of an actual web request, or processing a request outside of the originally receiving thread? If you are actually operating within a web request and still receive this message, your code is probably running outside of DispatcherServlet/DispatcherPortlet: In this case, use RequestContextListener or RequestContextFilter to expose the current request.
 at org.springframework.web.context.request.RequestContextHolder.currentRequestAttributes(RequestContextHolder.java:131)
 at org.codehaus.groovy.grails.plugins.web.api.CommonWebApi.currentRequestAttributes(CommonWebApi.java:205)
 at org.codehaus.groovy.grails.plugins.web.api.CommonWebApi.getParams(CommonWebApi.java:65)
... // and few more lines of stacktrace ;)

It occurred when I tried to debug one of the tests from within IDEA. Interestingly, this error does not happen when running all the tests with grails test-app, for instance.

So what was the issue? After a little reading and a tip from Tomek Kalkosiński (http://refaktor.blogspot.com/), it turned out that our test was missing the @TestFor annotation, and adding it solved all the problems.

This annotation, according to the Grails docs (link), tells Spock which class is being tested and implicitly creates a field of the given type in the test class. It is somewhat strange, as the problematic test had an explicitly, "manually" created field of the proper controller type. Maybe there is a problem with mocking servlet requests?
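For reference, the fix boiled down to something like this (BasketController and the feature method are placeholder names):
import grails.test.mixin.TestFor
import spock.lang.Specification

@TestFor(BasketController)
class BasketControllerSpec extends Specification {

    void "should render index without errors"() {
        when:
            controller.index()

        then:
            response.status == 200
    }
}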

Saturday, September 7, 2013

Spock basics

Spock (homepage) is, as its authors say, a 'testing and specification framework'. It combines a very elegant and natural syntax with powerful capabilities. And, most importantly, it is easy to use.

One note at the very beginning: I assume that you are already familiar with the principles of Test Driven Development and know how to use a testing framework such as JUnit.

So how can I start?


Writing Spock specifications is very easy. We need a basic configuration of the Spock and Groovy dependencies (if you are using a mavenized project with Eclipse, take a look at my previous post: Spock, Java and Maven). Once we have everything set up and running smoothly, we can write our first specs (a spec, or specification, is the equivalent of a test class in other frameworks like JUnit or TestNG).

What is great about Spock is that we can use it to test Groovy projects, pure Java projects, or even mixed ones.


Let's go!


Every spec class must extend the spock.lang.Specification class; only then will the test runner recognize it as a test class and run the tests. We will write a few specs for a simple User class, plus a few tests not connected with that particular class.

We start with defining our class:
import spock.lang.*

class UserSpec extends Specification {

}
Now we can proceed to defining test fixtures and test methods.

All the activities we want to perform before each test method go into the def setup() {...} method, and everything we want run after each test goes into the def cleanup() {...} method (these are the equivalents of the JUnit methods annotated with @Before and @After).

It can look like this:
class UserSpec extends Specification {
    User user
    Document document

    def setup() {
        user = new User()
        document = DocumentTestFactory.createDocumentWithTitle("doc1")
    }

    def cleanup() {

    }
}
Of course, we can also use field initialization to instantiate test objects:
class UserSpec extends Specification {
    User user = new User()
    Document document = DocumentTestFactory.createDocumentWithTitle("doc1")

    def setup() {

    }

    def cleanup() {

    }
}

Which is more readable or preferred? It is just a matter of taste, because according to the Spock docs the behaviour is the same in both cases.

It is worth mentioning that JUnit's @BeforeClass/@AfterClass are also present in Spock, as def setupSpec() {...} and def cleanupSpec() {...}. They will be run before the first test method and after the last one.


First tests


In Spock, every method in a specification class, except setup/cleanup, is treated by the runner as a test method (unless you annotate it with @Ignore).

A very interesting feature of Spock and Groovy is the ability to name methods with full sentences, just like regular strings:
class UserSpec extends Specification {
    // ...

    def "should assign coment to user"() {
        // ...
    }
}
With such a naming convention we can write a real specification and include details about the specified behaviour in the method name, which is very convenient when reading test reports and analyzing errors.

A test method (also called a feature method) is logically divided into blocks, each with its own purpose. Blocks are defined like labels in Java (but they are transformed with Groovy's AST transformation features), and some of them must appear in the code in a specific order.

The most basic and common layout of a Spock test is:
class UserSpec extends Specification {
    // ...

    def "should assign coment to user"() {
        given:
            // do initialization of test objects
        when:
            // perform actions to be tested
        then:
            // collect and analyze results
    }
}

But there are more blocks like:
  • setup
  • expect
  • where
  • cleanup
In the next sections I am going to briefly describe each block, with little examples.

given block

This block is used to set up test objects and their state. It has to be the first block in a test and cannot be repeated. Below is a little example of how it can be used:
class UserSpec extends Specification {
    // ...
    
    def "should add project to user and mark user as project's owner"() {
        given:
            User user = new User()
            Project project = ProjectTestFactory.createProjectWithName("simple project")
        // ...
    }
}

In this code the given block contains the initialization of test objects and nothing more. We create a simple user without any specific attributes and a project with a given name. If some of these objects can be reused in more feature methods, it may be worth moving their initialization into the setup method.

when and then blocks

The when block contains the action we want to test (the Spock documentation calls it a 'stimulus'). This block always occurs in a pair with the then block, where we verify that the response satisfies certain conditions. Assume we have this simple test case:
class UserSpec extends Specification {
    // ...
    
    def "should assign user to comment when adding comment to user"() {
        given:
            User user = new User()
            Comment comment = new Comment()
        when:
            user.addComment(comment)
        then:
            comment.getUserWhoCreatedComment().equals(user)
    }

    // ...
}

In the when block there is a call to the tested method and nothing more. Once our action has been performed, we can check for the desired conditions in the then block.

The then block is very well structured: every line in it is treated by Spock as a boolean statement. That means Spock expects instructions containing comparisons and expressions returning true or false, so we can create a then block with statements like these:
user.getName() == "John"
user.getAge() == 40
!user.isEnabled()
Each of these lines will be treated as a single assertion and evaluated by Spock.

Sometimes we expect our method to throw an exception under given circumstances. We can write a test for it using the thrown method:
class CommentSpec extends Specification {
    def "should throw exception when adding null document to comment"() {
        given:
            Comment comment = new Comment()
        when:
            comment.setCommentedDocument(null)
        then:
            thrown(RuntimeException)
    }
}

In this test we want to make sure that passing incorrect parameters is correctly handled by the tested method and that the method throws an exception in response. If you want to be certain that a method does not throw a particular exception, simply use the notThrown method.
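A minimal sketch of the opposite case, reusing the factory from the earlier examples:
def "should accept correct document without exception"() {
    given:
        Comment comment = new Comment()
    when:
        comment.setCommentedDocument(DocumentTestFactory.createDocumentWithTitle("doc1"))
    then:
        notThrown(RuntimeException)
}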


expect block

The expect block is primarily used when separating the when and then blocks would feel unnatural. It is especially useful for a simple test (and according to TDD rules all tests should be simple and short) with only one condition to check, as in this example (it is trivial, but it should show the idea):
def "should create user with given name"() {
    given:
        User user = UserTestFactory.createUser("john doe")
    expect:
        user.getName() == "john doe"
}



More blocks!


Those were very simple tests with the standard Spock layout and the canonical division into given/when/then parts. But Spock offers more possibilities and provides more blocks.


setup/cleanup blocks

These two blocks have the very same functionality as the def setup and def cleanup methods in a specification: they allow us to perform some actions before and after a test. But unlike those methods (which are shared between all tests), the blocks work only in the methods they are defined in.


where - easy way to create readable parameterized tests

Very often when we create unit tests there is a need to "feed" them with sample data to test various cases and boundary values. With Spock this task is very easy and straightforward. To provide test data to a feature method, we use the where block. Let's take a look at a little piece of code:

def "should successfully validate emails with valid syntax"() {
    expect:
        emailValidator.validate(email) == true
    where:
        email << [ "test@test.com", "foo@bar.com" ]
}

In this example, Spock creates a variable called email which is used when calling the method being tested. The feature method is reported as a single test, but the framework iterates over the given values and evaluates the expect block once for each of them (if we use the @Unroll annotation, Spock reports a separate run for each value; more about that in one of the next examples).

Now, let's assume that we want our feature method to test both successful and failed validations. To achieve that, we can create parameterized variables for both the input parameter and the expected result. Here is a little example:

def "should perform validation of email addresses"() {
    expect:
        emailValidator.validate(email) == result
    where:
        email << [ "WTF", "@domain", "foo@bar.com" "a@test" 
        result << [ false, false, true, false ]
}
Well, it looks nice, but Spock can do much better. It offers a tabular format for defining test parameters, which is much more readable and natural. Let's take a look:
def "should perform validation of email addresses"() {
    expect:
        emailValidator.validate(email) == result
    where:
        email           | result
        "WTF"           | false
        "@domain"       | false
        "foo@bar.com"   | true
        "a@test"        | false
}
In this code, each column of our "table" is treated as a separate variable, and the rows are the values for subsequent test iterations.

Another useful feature of Spock when parameterizing tests is its ability to "unroll" each parameterized test. The feature method from the previous example could be defined as follows (the body stays the same, so I do not repeat it):
@Unroll("should validate email #email")
def "should perform validation of email addresses"() {
    // ...
}
With that annotation, Spock generates several methods, each with its own name, and runs them separately. We can use variables from the where block in the @Unroll argument by preceding them with the '#' sign, which tells Spock to use them in the generated method names.


What next?


Well, that was just a quick and short journey through Spock and its capabilities. However, with this basic tutorial you are ready to write many unit tests. In one of my future posts I am going to describe more of Spock's features, focusing especially on its mocking abilities.

Saturday, August 10, 2013

Integration tests with Maven and JUnit

There is no doubt that the integration test phase is crucial in modern application development. We need to test the behaviour of our subsystems and how they interact with other modules.

Using JUnit and Maven it's quite easy to create integration tests and run them in a separate phase from the unit tests. This is very important, because integration tests tend to take much more time than unit tests, as they work with databases, network connections, other subsystems, etc. Therefore, we want to run them less often.

With JUnit version >= 4.8 there are two approaches to creating and running integration tests:
  • using naming conventions and specifying separate executions for the maven-surefire plugin
  • creating a marker interface, marking integration tests with the @Category annotation, and running the tests from the maven-failsafe plugin (although it is possible to use surefire in both cases)

Separate executions


The first method needs a naming convention, like naming all unit tests with the "..Test.java" suffix (or "..Spec.groovy" ;) and integration tests with "..IntegrationTest.java". Then we need to change the maven-surefire configuration:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.15</version>
    <configuration>
        <skip>true</skip>    
    </configuration>
</plugin>
What I did here is force maven to skip the default test phase. Instead, I will configure two separate executions (just below the <configuration> section):
<executions>
    <execution>
        <id>unit-tests</id>
        <phase>test</phase>
        <goals>
            <goal>test</goal>
        </goals>
        <configuration>
            <skip>false</skip>
            <includes>
                <include>**/*Test.class</include>
                <include>**/*Spec.class</include>
            </includes>
            <excludes>
                <exclude>**/*IntegrationTest.class</exclude>
            </excludes>
        </configuration>
    </execution>
    <execution>
        <id>integration-tests</id>
        <phase>integration-test</phase>
        <goals>
            <goal>test</goal>
        </goals>
        <configuration>
            <skip>false</skip>
            <includes>
                <include>**/*IntegrationTest.class</include>
            </includes>
        </configuration>
    </execution>
</executions>
In the unit test execution I include all tests matching the unit test naming convention (both JUnit and Spock ones) and exclude files matching the integration test pattern; in the integration test execution I do the opposite ;)


Annotations

The other method requires defining a marker interface like this:

package info.rnowak.webtex.common.test;

public interface IntegrationTest {

}
Then we can mark our integration test class with:
@Category(IntegrationTest.class)
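Put together, a marked integration test class might look like this (the class and method names are placeholders):
import org.junit.Test;
import org.junit.experimental.categories.Category;

import info.rnowak.webtex.common.test.IntegrationTest;

@Category(IntegrationTest.class)
public class BasketRepositoryIntegrationTest {

    @Test
    public void shouldStoreAndLoadEntity() {
        // test code touching a real database or another subsystem goes here
    }
}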
The next thing is changing the surefire plugin configuration to omit the integration tests:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.15</version>
    <configuration>
        <includes>
            <include>**/*Test.class</include>
            <include>**/*Spec.class</include>
        </includes>  
        <excludedGroups>info.rnowak.webtex.common.test.IntegrationTest</excludedGroups> 
    </configuration>
</plugin>
What has changed here is the new <excludedGroups> tag with the name of the interface which marks our integration tests.
Next, we need to add and configure the maven-failsafe plugin in order to run the tests from our integration test group:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.15</version>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
            </goals>
            <configuration>
                <groups>info.rnowak.webtex.common.test.IntegrationTest</groups>
                <includes>
                    <include>**/*.class</include>
                </includes>
            </configuration>
        </execution>
    </executions>
</plugin>
With this configuration, failsafe will run only the tests marked with the @Category(IntegrationTest.class) annotation, no matter what their names are.


What is better?


Well, in my opinion it's just a matter of taste and style. Annotating each integration test class may be a little cumbersome, but we are not limited to naming classes according to a specific convention. On the other hand, unit tests and integration tests are usually named with some convention anyway, so the annotations are not a big deal.

Unable to instantiate default tuplizer

I wrote a few hbm mappings for domain classes in my recent project, and I got an exception like this:
org.hibernate.HibernateException: Unable to instantiate default tuplizer [org.hibernate.tuple.entity.PojoEntityTuplizer]
Of course, my first thought was to google it, and I found some interesting answers. The most common causes of this exception are:
  • missing getters or setters; what's more, even a typo or wrong letter case counts (like getParentproject instead of getParentProject when the field in the class and mapping file is defined as parentProject)
  • a missing default constructor
  • a missing dependency on the javassist library

My files seemed to be correctly defined, so it had to be the missing dependency.
To fix it I've added these lines to my pom.xml:
<dependency>
    <groupId>org.javassist</groupId>
    <artifactId>javassist</artifactId>
    <version>3.18.0-GA</version>
</dependency>

Well, it shouldn't be a surprise, because in the full stacktrace of this error there is an entry:
java.lang.ClassNotFoundException: javassist.util.proxy.MethodFilter
which explicitly indicates the root of the problem ;)

(And BTW: in my recent project I'm stuck with a quite old version of Hibernate: 3.6.3.)