Wednesday, August 31, 2011


I'm fond of clean coding. I can't imagine programming in a way that leaves mess and garbage behind. If you do, you fail to take enough care of the code you're responsible for.

You always wash the dishes after you use them, and you most likely iron and fold your clothes before you put them into the closet. You don't signal left when you turn right, and you don't run the red light and stop at the green one. You start your sentences with a capital letter and end them with the proper punctuation mark.

Some of these examples are only habits, others are enforced by law, and yet others are merely a matter of following rules. What they have in common is that they're all conventions, and as such they're all breakable. What happens if you intentionally break these habits, laws and rules? Hopefully nothing serious, and everything goes on normally. On the other hand... You can put your crumpled clothes into the closet, but when you wear them you'll look untidy. You can run the red light, but you endanger your own and others' safety. You can start a sentence with a lower-case letter, but others may think you're uneducated. You don't want any of these to happen, of course, so you don't do these things. You simply follow the rules and habits because you know that's the way to do it.

The same principles hold when it comes to clean coding. You don't want others to clean up your mess. You format your code so it's nice to read. You don't call your method add when it does subtraction. You don't call Iterator.next() when Iterator.hasNext() returns false. And you don't start a class name with a lower-case letter, or a method name with a capital one. Why? It's obvious: only by applying the industry standards can you ensure that you don't make your colleagues' lives harder when they have to read, understand and maintain the code you wrote.

For most of us it's common sense. It goes without saying, or without thinking. We learned these conventions, principles and rules, and we use them all the time without extra effort or wasted time. Unfortunately, there are always a few exceptions whose concern for the code ends as soon as it builds. The code they produce may be syntactically and semantically correct, but the hair on the back of your neck stands up when you look at it. When I see such code I often wonder whether its author is as negligent in other aspects of his life as he was when he wrote it, or whether it was only indolence that prevented him from doing things the right way...

Sunday, August 7, 2011

Wicket state

I heard a question from a colleague on Friday: in Wicket, what is a stateless form? That's easy to answer: it's a form that doesn't have any state. No kidding, really? But how do I know whether my form has a state or not? That's also easy to answer: if you create an instance of the class StatelessForm, your form does not maintain any state. Is it really as simple as that? Well, not exactly... First we need to understand what state means in Wicket.

Being a server-side web framework, Wicket must maintain the state of the web application for every user. When a user interacts with a Wicket web application and accesses a web page, an instance of a Page subclass is stored in memory. Pages the user visited previously are serialized to disk and can be loaded again should the user return to them. But what if a page can have multiple states during a session? Does Wicket serialize every different state of the page to disk? No, that's not the case. There's another state-handling mechanism, called page versioning, which maintains the current state of the page by logging the changes made to it. As long as the user only moves forward, only the current state of each page matters; but should they go back, the previous states and versions become equally important. Both page serialization and versioning happen automatically, and every default Wicket component logs the changes to its own state. Custom components, on the other hand - like a subclass of the Page class with some declared fields - must implement their own logging by extending the Change class and registering a new instance of this class with the Component.addStateChange(Change) method every time the page's state changes.
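As a sketch of what such custom logging can look like (class and field names are mine, assuming the Wicket 1.4-style Change API mentioned above):

```java
// Illustrative only: a page with its own field that participates in
// page versioning. Assumes Wicket 1.4, where Change
// (org.apache.wicket.version.undo.Change) and
// Component.addStateChange(Change) are available.
public class CounterPage extends WebPage {

    private int counter;

    public void increment() {
        // Register the change *before* mutating the field,
        // so undo() can restore the previous value.
        addStateChange(new CounterChange(counter));
        counter++;
    }

    private class CounterChange extends Change {

        private final int previousValue;

        private CounterChange(int previousValue) {
            this.previousValue = previousValue;
        }

        @Override
        public void undo() {
            // Called when the user navigates back to an older page version.
            counter = previousValue;
        }
    }
}
```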

Everything mentioned above, however, is only true for stateful Wicket pages. If the page is stateless, there's no need to version it, to keep it in the session, or to serialize it to disk, since stateless pages can be instantiated every time they're needed. So what are the prerequisites of a stateless page? A page is considered stateless if it is bookmarkable and contains only stateless components. This requires a little bit of explanation.

A bookmarkable page means that a URL can be assigned to the page, and this URL does not contain any session-related information, so when the user clicks the link, a new instance of the Page is created. To make a page bookmarkable it must have a default no-arg constructor, a constructor that accepts a PageParameters argument, or both. Pages without such constructors (or whose such constructors are not public) can only be instantiated from within another page, for there's no way Wicket can figure out which constructor to use and with what arguments. When the user first visits a non-bookmarkable page, Wicket serializes it to the session, because without those constructors the only way to recreate the page when the user returns to it is to deserialize it. For this reason, every page without the aforementioned two public constructors is considered stateful, even if it doesn't maintain any real state.
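To make the constructor rule concrete, a minimal sketch of a bookmarkable page could look like this (class and parameter names are mine; the PageParameters accessor is the Wicket 1.4 style):

```java
// A bookmarkable page: it offers both constructors Wicket can call
// on its own, so no session state is needed to re-create it.
public class SearchPage extends WebPage {

    public SearchPage() {
        this(new PageParameters());
    }

    public SearchPage(PageParameters parameters) {
        String query = parameters.getString("q"); // everything comes from the URL
        // build the component tree from the request parameters only...
    }
}
```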

Stateless components are components whose stateless hint says they're stateless and whose every added behavior also hints that it is stateless. Wicket operates in stateless mode by default (as long as the pages have at least one of the two constructors), because most Wicket components are stateless. Keep in mind, though, that as soon as you add a Link or a Form to your page, or add an AJAX behavior to any of the components on the page, Wicket silently switches to stateful mode. If you want to keep your application stateless, you can always use the StatelessForm and StatelessLink components that come with Wicket, which are the same as Form and Link except that their stateless hint says they're stateless.
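A sketch of how that might look on a page (component ids and class names are mine; assumes Wicket 1.4):

```java
// StatelessForm behaves like Form, but its stateless hint is true,
// so adding it does not by itself force the page into stateful mode.
public class FeedbackPage extends WebPage {

    public FeedbackPage() {
        StatelessForm<Void> form = new StatelessForm<Void>("feedback") {
            @Override
            protected void onSubmit() {
                // must rely only on the submitted request data,
                // not on fields of the enclosing page
            }
        };
        add(form);
    }
}
```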

In the first paragraph I asked whether having a stateless form is really as simple as creating a new instance of the StatelessForm class. As I just mentioned, a stateless form only hints that it's stateless; it's still up to you to make sure it doesn't rely on anything but the data coming in with the HTTP request. If the stateless form uses any field from the containing page, it isn't really stateless - unless the page itself is stateless. Using stateless forms in stateful pages is of course possible, but in my opinion it doesn't make much sense, since the page gets serialized to the session anyway, and stateless forms are meant to help keep your application stateless.

Thursday, August 4, 2011

How to unit test Spring-based RESTful APIs

Even though the idea of RESTful services is not new, REST APIs have become popular only recently. There's hardly a serious service that doesn't provide RESTful access to its resources. If there is one, then it's not serious...

There are many ways to implement a REST API: you can use different languages and choose from multiple frameworks. My choice is Java - still one of the most popular languages - with the Spring application framework. The annotation-based Spring controllers provide an elegant way to map HTTP requests to services and then return a formatted response.

I had a problem, however, during the implementation. I love unit tests. I don't release software without testing it to its last bits. And those tests must be automatic, repeatable, fast, and natural to read. The only responsibilities of the controllers should be to map the HTTP requests to method calls and to translate the results into HTTP responses, but I couldn't find a way to test this most essential part of the REST API. At least not by googling for "how to unit test Spring controller request mapping". My research was not extensive, admittedly, but none of the solutions and suggestions I found was to my liking. Some were not automatic, because a servlet container had to be started manually and curl or wget was then used to try the API out. Others required a lot of boilerplate code, for they started up an embedded Jetty, configured every HTTP request and then evaluated the HTTP responses. Yet another was simply not easy enough to digest, because it used annotation-based test configuration and relied on the ugly side effects of certain method calls.

After almost an hour of frustration I decided it would be easier to think a bit harder than to find a solution I'd like, so I came up with my own, which I'm going to share now. First we need a service with a REST API. For the sake of simplicity it'll be a feedback server where someone can leave positive or negative feedback - something similar to Facebook's Like and the missing Dislike buttons. An optional message can also be posted along with the feedback. Once feedback is given, it can be viewed. I'll only show the relevant parts, but if you're interested, you can download the full source code from here. You won't find, however, a fancy client application with actual Like and Dislike buttons; only the RESTful service is implemented, which returns plain text or JSON to the caller. Here we go...

 @Controller
 public class FeedbackController {

     private final FeedbackService feedbackService;

     public FeedbackController(FeedbackService feedbackService) {
         Validate.notNull(feedbackService, "feedbackService is required");

         this.feedbackService = feedbackService;
     }

     @RequestMapping(value = "/thumbsup", method = {RequestMethod.POST})
     public HttpEntity<String> saveThumbsUpFeedback(@RequestParam(value = "message", required = false) String message) {
         feedbackService.saveFeedback(POSITIVE, message); // service method name assumed

         return new HttpEntity<String>("Thank you for your feedback", createHeader(TEXT_PLAIN));
     }

     @RequestMapping(value = "/thumbsdown", method = {RequestMethod.POST})
     public HttpEntity<String> saveThumbsDownFeedback(@RequestParam(value = "message", required = false) String message) {
         feedbackService.saveFeedback(NEGATIVE, message); // service method name assumed

         return new HttpEntity<String>("Thank you for your feedback", createHeader(TEXT_PLAIN));
     }

     @RequestMapping(value = "/list", method = {RequestMethod.GET})
     public HttpEntity<String> listFeedbacks() {
         Gson gson = new GsonBuilder().setDateFormat("H:m:s dd:MM:yyyy").create();

         return new HttpEntity<String>(gson.toJson(feedbackService.listFeedbacks()), createHeader(APPLICATION_JSON));
     }

     private HttpHeaders createHeader(MediaType mediaType) {
         HttpHeaders httpHeaders = new HttpHeaders();
         httpHeaders.setContentType(mediaType);

         return httpHeaders;
     }

     @ExceptionHandler(HttpRequestMethodNotSupportedException.class)
     public void handleRequestExceptions(HttpServletRequest request, HttpServletResponse response) throws IOException {
         response.sendError(SC_METHOD_NOT_ALLOWED, "Request method '" + request.getMethod()
                 + "' is not supported on " + request.getRequestURI());
     }

     @ExceptionHandler(Exception.class)
     public void handleExceptions(Exception e, HttpServletResponse response) throws IOException {
         response.sendError(SC_INTERNAL_SERVER_ERROR, "An internal error occurred.");
     }
 }
As you can see, the controller above is indeed quite simple:
  • when a POST request comes in to /thumbsup or /thumbsdown it instructs the feedback service to save the feedback
  • when a GET request comes in to /list it asks the feedback service to return every feedback and then converts them to JSON
  • if something goes wrong it handles the exceptions so that the client receives an HTTP status code along with a short message
I created this class with TDD in mind, so tests came first, then the implementation. The tests, however, were rather useless: I didn't mock the FeedbackService class, so the tests only retested what was already tested. This was the moment when I started to look for solutions for testing the controller mappings. An hour later came the moment when I gave up and implemented what was necessary to test those nasty Spring annotations.

So what exactly do we need in our unit test? If you know Spring's Web MVC framework, then you know that to make our controller work, a DispatcherServlet has to be declared in our web application's web.xml. This DispatcherServlet then automatically takes care of delegating the requests to their destinations. Simply put, we'll need a web application context and a DispatcherServlet instance in this context. Having configured everything, we can send mocked HTTP requests to the servlet, which in turn will reply with mocked HTTP responses. Let's create our own test web application context with a DispatcherServlet:

 public class MockXmlWebApplicationContext extends XmlWebApplicationContext {

     public MockXmlWebApplicationContext(String webApplicationRootDirectory, String servletName, String... configLocations) throws ServletException {
         init(webApplicationRootDirectory, servletName, configLocations);
     }

     private void init(String webApplicationRootDirectory, String servletName, String... configLocations) throws ServletException {
         MockServletContext servletContext = new MockServletContext(webApplicationRootDirectory, new FileSystemResourceLoader());
         MockServletConfig servletConfig = new MockServletConfig(servletContext, servletName);
         servletContext.setAttribute(WebApplicationContext.ROOT_WEB_APPLICATION_CONTEXT_ATTRIBUTE, this);

         setServletContext(servletContext);
         setServletConfig(servletConfig);
         setConfigLocations(configLocations);

         DispatcherServlet dispatcherServlet = new DispatcherServlet();

         addBeanFactoryPostProcessor(new MockBeanFactoryPostProcessor(servletName, dispatcherServlet));
         addApplicationListener(new SourceFilteringListener(this, new ContextRefreshedEventListener(dispatcherServlet)));

         registerShutdownHook();
         refresh();

         dispatcherServlet.init(servletConfig);
     }

     private final class ContextRefreshedEventListener implements ApplicationListener<ContextRefreshedEvent> {

         private final DispatcherServlet dispatcherServlet;

         private ContextRefreshedEventListener(DispatcherServlet dispatcherServlet) {
             this.dispatcherServlet = dispatcherServlet;
         }

         public void onApplicationEvent(ContextRefreshedEvent event) {
             dispatcherServlet.onApplicationEvent(event);
         }
     }

     private final class MockBeanFactoryPostProcessor implements BeanFactoryPostProcessor {

         private final String servletName;
         private final DispatcherServlet dispatcherServlet;

         private MockBeanFactoryPostProcessor(String servletName, DispatcherServlet dispatcherServlet) {
             this.servletName = servletName;
             this.dispatcherServlet = dispatcherServlet;
         }

         public void postProcessBeanFactory(ConfigurableListableBeanFactory beanFactory) throws BeansException {
             beanFactory.registerSingleton(servletName, dispatcherServlet);
         }
     }
 }
What's happening in this class? First of all, it extends the XmlWebApplicationContext class, which is needed if we want to simulate a servlet environment. Then, in the init(String, String, String...) method, we set up the servlet context and the servlet configuration. These will be used by the DispatcherServlet to find its resources. If not specified otherwise, an XmlWebApplicationContext tries to load the standard applicationContext.xml from the WEB-INF directory. The applicationContext.xml, however, is not always in this directory, or one may choose a different name for it, so it's configurable with the String... configLocations constructor argument. The locations listed in this argument are then used to load all the necessary configuration.

A DispatcherServlet is instantiated, but it is not yet registered in the web application context. We can't simply add new beans to an existing application context - normally the beans are loaded from the applicationContext.xml - but we can register a post-processor (a BeanFactoryPostProcessor) which takes care of it when the application context gets refreshed. The DispatcherServlet can receive events when the application context is refreshed, so it's a good idea to set up an event listener (an ApplicationListener). Since the dispatcher servlet is only interested in the ContextRefreshedEvent, we can filter out the other application events by wrapping our event listener into a SourceFilteringListener, which only delegates events that the listener can handle. An optional step is to register a shutdown hook on our web application context so that the owned resources are properly released when the JVM is shut down. Having configured it all, only two more steps are required: the application context must be refreshed so that the whole configuration gets processed, and the dispatcher servlet must be initialized.

From now on we can use the new MockXmlWebApplicationContext in our unit tests with the proper arguments: we need to pass in the root directory of our web application, the name of the dispatcher servlet, and all the applicationContext.xml files our application needs to work. Here is an example unit test demonstrating the usage of the web application context implemented above. It checks that when a POST request is sent to /thumbsup, a positive feedback is saved, and also that when the request method is not supported on the accessed resource, the correct HTTP status code is set in the response along with an error message.

 public class FeedbackControllerTest {

     private static ApplicationContext applicationContext;

     private DispatcherServlet subject;

     @BeforeClass
     public static void setUpUnitTest() throws Exception {
         applicationContext = new MockXmlWebApplicationContext("src/main/webapp", "feedback-controller", "classpath:spring/applicationContext.xml");
     }

     @Before
     public void setUp() throws ServletException {
         subject = applicationContext.getBean(DispatcherServlet.class);
     }

     @Test
     public void shouldSavePositiveFeedbackWithoutMessage() throws Exception {
         MockHttpServletRequest request = new MockHttpServletRequest("POST", "/thumbsup");
         MockHttpServletResponse response = new MockHttpServletResponse();

         subject.service(request, response);

         assertThat(response.getStatus(), is(SC_OK));
         assertThat(response.getContentAsString(), is("Thank you for your feedback"));
         assertSavedFeedback(POSITIVE, null); // helper from the full source, not shown here
     }

     @Test
     public void shouldSetMethodNotAllowedStatusCodeIfPositiveFeedbackIsAccessedWithGet() throws Exception {
         MockHttpServletRequest request = new MockHttpServletRequest("GET", "/thumbsup");
         MockHttpServletResponse response = new MockHttpServletResponse();

         subject.service(request, response);

         assertThat(response.getStatus(), is(SC_METHOD_NOT_ALLOWED));
         assertThat(response.getErrorMessage(), is("Request method 'GET' is not supported on /thumbsup"));
     }
 }
The tests above meet all of my requirements. Not because I wrote them, but because they satisfy the aforementioned criteria:
  • Being JUnit tests, they can be executed automatically every time the application is built. For the same reason they're also repeatable.
  • Other than loading the Spring configuration files through the MockXmlWebApplicationContext once at the beginning, the tests execute very fast, for they're not communicating with a real server.
  • The amount of boilerplate code is reduced to the short implementation of a web application context and the straightforward configuration of this context. The web application context is reusable for every other controller, and no side effects are utilized. No real server is started, no real requests are set up and sent to the server in every test, and no cumbersome parsing of HTTP responses is implemented, yet every bit of the controller is tested by simulating the communication between the client and the server.
  • The test cases are short, descriptive, and read very naturally to my eyes. I admit this statement is subjective, but I still like that in only a few lines I can set up an HTTP request, create an HTTP response object in which the result is expected, send the request to the dispatcher servlet, and then assert the response.

Tuesday, July 26, 2011

Agile estimations

I love agile software development because of the high-quality software it produces in the shortest possible time, which also makes the stakeholders happy. By releasing frequently, the development team gathers feedback almost instantly, so they can promptly verify whether they're still on the right track or adjustments are necessary because the requirements have changed. However, the team can only release frequently if they work in short iterations and finish all of the work they've planned for each iteration. A key factor in avoiding failure and delivering as much business value as possible is a more or less accurate estimation of the tasks the team commits to getting done.

But how much work can a team finish during an iteration? It depends on many things. Based on the previous iterations, the team should have some sort of statistic (velocity) about their productivity, which can be used as a reference to plan the new iteration. The availability of the team members is also important. And, of course, the team must estimate the effort it takes to get the planned job done.

Estimating is not easy, though. If you start working on a project from scratch, you don't have much reference to rely on when you have to come up with figures. It's even worse if you start working with a new team, because you don't know how the team performs. But even if you've worked on the same project for months with the same team members, reaching an agreement can be difficult.

We had our planning meeting yesterday and we faced a different problem: I had the feeling that not all the team members shared the same understanding of the metric and scale we use to estimate the planned user stories. Being comfortable with the metric is crucial, otherwise the planning meeting can turn into a never-ending argument. However, it isn't simple to shift your mind from time-based estimation to something more abstract.

It's not easy, because in the end it all comes back to time: you have to finish something during the iteration. Your manager is not interested in the amount of abstract things you've estimated your tasks in, either; she will simply ask whether you'll finish your work, let's say, by the end of the week. Time is certainly not a negligible factor, but estimating user stories in the number of days it takes to implement them is simply wrong. Why? Because you can only manage your own time, while estimation is about the team's effort. Implementing an algorithm that calculates the prime factors of a number can take a day for someone without a mathematical background, but can be a ten-minute finger exercise for someone practicing the Prime Factors Kata. What is the estimate in this case? One day or ten minutes? It depends on who picks up the task from the whiteboard.
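As an aside, the algorithm in question really is that small once you know it; the kata's classic solution is a sketch of only a few lines:

```java
import java.util.ArrayList;
import java.util.List;

// The Prime Factors kata: decompose a number into its prime factors.
public class PrimeFactors {

    public static List<Integer> of(int n) {
        List<Integer> factors = new ArrayList<Integer>();
        // Try each candidate divisor in turn; dividing out every
        // occurrence guarantees the candidates that divide n are prime.
        for (int candidate = 2; n > 1; candidate++) {
            while (n % candidate == 0) {
                factors.add(candidate);
                n /= candidate;
            }
        }
        return factors;
    }
}
```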

We need to find something, then, that doesn't depend on the skills of the team members as much as time-based estimation does. We could use complexity points to estimate our user stories. Complexity points - as their name suggests - reflect how complex a feature is, that is, how difficult it is to implement. Sounds good, doesn't it? The prime factors algorithm isn't going to be more complicated just because the team's junior programmer picks it up. Unfortunately, this won't work either, because complexity points completely ignore the effort that has to be invested to finish something. Typing in the text of a thousand-page book is a fairly easy task; there isn't anything complicated about it. It will, however, most likely take as much effort as - or even more than - implementing an algorithm that solves the Rubik's Cube.

At RIPE NCC we're doing Scrum, and - like many others - we estimate our coming work in story points (when everybody has the same understanding of them). Story points represent the effort necessary to finish a user story. When you estimate in story points, you take into account everything related to that user story: implementation, documentation, setting up the test environment, communication with other departments, and so forth. Story points are not independent of time or complexity, though. Implementing a complicated feature, like solving the Rubik's Cube, must be estimated at a higher story point value than implementing a simple Hello, World! application. Time-consuming yet simple tasks, like typing in a thousand pages, must also be estimated at a higher story point value. Nevertheless, the estimated story points must represent the team effort needed to get that particular user story done.

The number of delivered story points will change over time due to inaccurate estimations in the beginning, unforeseen events the team has to deal with, the different skill levels of the team members, and whatnot. As the team grows more mature and experienced, their estimations become more and more accurate, which ideally leads to a relatively constant velocity, which in turn aids them in even more accurate estimations...


I've always wanted to start a blog that is somehow related to software development. Well, certainly not my whole life, but for a while now. What prevented me from doing so is that I tried to convince myself that I didn't have the time for it, and - since there's nothing new under the sun - I believed I couldn't write about anything useful that hadn't already been written by someone else.

Having an excuse such as "I don't have the time for it" is nothing but laziness, for everybody could find the time for anything if they really wanted to. And worrying about committing plagiarism by repeating what has already been said by someone else is simply foolish, because there's already a huge number of articles and blog entries out there about the very same topics; my blog is just yet another one.

By starting this blog I've finally accomplished one of my goals: authoring a technical blog. I can't promise, however, that what I write here is something you've never heard before, nor can I be sure that it's always correct. But I don't even care! I'll just type in what's on my mind that I think is worth mentioning... even if I'm only explaining the obvious.