Posts Tagged ‘tdd’

Test REST Services

Friday, September 12th, 2008

In my reply to a post on Tim Bray’s blog about using RSpec for testing REST services, I briefly described a project I’m working on, based on the work I’ve been doing at Gliffy, which is a testing framework for REST services called, unsurprisingly, RestUNIT.

For Gliffy’s REST-based integration API, I needed a way to test it, and hand-coding test cases using HTTPClient was just not going to cut it. Further, requests to Gliffy’s API require signing (similar to how Flickr does it), and our API was going to support multiple ways of specifying the representation type as well as tunneling over POST.

So, it occurred to me that there was a lazier way of doing this testing. All I really needed to specify was the relative URL, parameters, headers, method, and expected response. Someone else could do the signing and re-run the tests with the various options (such as specifying the MIME type via the Accept: header, and then again via a file extension in the URL).

I ended up creating a bunch of text files with this information. I then used a Ruby script to generate two things: an XML file that could be deserialized into a Java object useful for testing, and a PHP script to test our PHP client API.

The Ruby script would also do things like calculate the signature (the test text files contained the API key and secret key a Gliffy user would need to use the API) and generate some derivative tests (e.g. one using a DELETE, and another tunneling that over POST). The testing engine could generate additional derivative tests (e.g. GET requests should respond to conditional GETs if the server sent back an ETag or Last-Modified header). All of this then runs as a TestNG test.
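The signing step can be sketched like this, assuming a Flickr-style scheme (sort the parameters by name, concatenate the secret with the key/value pairs, and MD5-hex the result); Gliffy's actual scheme may differ:

```java
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

public class RequestSigner
{
    // Flickr-style request signing sketch: sort parameters by name,
    // concatenate secret + key1 + value1 + key2 + value2 ..., then
    // MD5 the result and hex-encode it.  Illustrative only; the real
    // Gliffy scheme may differ in details.
    public static String sign(String secret, Map<String,String> params)
        throws Exception
    {
        StringBuilder toSign = new StringBuilder(secret);
        for (Map.Entry<String,String> e : new TreeMap<String,String>(params).entrySet())
            toSign.append(e.getKey()).append(e.getValue());

        byte[] digest = MessageDigest.getInstance("MD5")
            .digest(toSign.toString().getBytes("UTF-8"));

        StringBuilder hex = new StringBuilder();
        for (byte b : digest)
            hex.append(String.format("%02x", b));
        return hex.toString();
    }
}
```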

The whole thing works well, but is pretty hackish. So, RestUNIT was created as a fresh codebase to create a more stable and useful testing engine. My hope is to specify tests as YAML or some other human-readable markup, instead of XML (which is essentially binary for any real-sized data) and to allow for more sophisticated means of comparing results, deriving tests, and running outside a container (all the Gliffy tests require a specific data set and run in-container).
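A YAML test specification along those lines might look something like this (the field names here are hypothetical, not RestUNIT's actual format):

```yaml
# Hypothetical test specification; field names are illustrative only
name: get-diagram-as-xml
method: GET
url: /diagrams/123
headers:
  Accept: text/xml
expectedResponse:
  status: 200
  contentType: text/xml
```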

The test specification format should then be usable to generate tests in any other language (like I did with PHP). I’m working on this slowly in my spare time and trying to keep the code clean and the architecture extensible, but not overly complex.

Didn’t do Test-Driven Design? Record your test cases later

Monday, September 8th, 2008

Following on from my post on Gliffy’s blog…

On more than a few occasions, I've been faced with making significant refactorings to an existing application: changes where we need to overhaul an architectural component without breaking anything or changing the application's features. For an application without any test cases, this is not only scary, but ill-advised.

I believe this is the primary reason that development shops hang on to outdated technology. I got a job at a web development shop after four years of doing nothing but Swing and J2EE. My last experience with Java web development was Servlets, JSPs, and taglibs. This company was still using these as the primary components of their architecture. No Struts, no Spring, no SEAM. Why? One reason was that they had no test infrastructure and therefore no ability to refactor anything.

Doing it anyway

Nevertheless, sometimes the benefits outweigh the costs and you really need to make a change. At Gliffy, I was hired to create an API to integrate editing Gliffy diagrams into the workflow of other applications. After a review of their code and architecture, the principals and I decided that the database layer needed an overhaul. It was using JDBC/SQL and had become difficult to change (especially to the new guy: me). I suggested moving to the Java Persistence API (backed by Hibernate), and they agreed. The only problem was making sure I didn't break anything. They didn't have automated tests, and I was totally new to the application environment.

They did have test scripts for testers to follow that would hit various parts of the application. Coming from my previous environment, that in and of itself was amazing. Since the application communicates with the server entirely via HTTP POST, and receives mostly XML back, I figured I could manually execute the tests and record them in a way that they could be played back later as regression tests.

Recording Tests

This is surprisingly easy thanks to the filtering features of the Servlet specification:


<filter>
  <filter-name>recorder</filter-name>
  <filter-class>com.gliffy.test.online.RecordServletFilter</filter-class>
</filter>

<filter-mapping>
  <filter-name>recorder</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>

The filter code is a bit more complex, because I had to create proxy classes for HttpServletRequest and HttpServletResponse. Here's how everything fits together.

The request proxy had to read everything from the request's input stream, save it, and hand the caller a new stream that outputs the same data. It had to do the same thing with the Reader. I'm sure it's an error to use both in the same request, and Gliffy's code didn't do that, so this worked well.

private class RecordingServletRequest extends javax.servlet.http.HttpServletRequestWrapper
{
    BufferedReader reader = null;
    ServletInputStream inputStream = null;

    String readerContent = null;
    byte inputStreamContent[] = null;

    public RecordingServletRequest(HttpServletRequest r) { super(r); }

    public BufferedReader getReader()
        throws IOException
    {
        if (reader == null)
        {
            StringWriter writer = new StringWriter();
            BufferedReader superReader = super.getReader();
            int ch = superReader.read();
            while (ch != -1)
            {
                writer.write(ch);
                ch = superReader.read();
            }
            readerContent = writer.toString();
            // cache the reader so subsequent calls replay the same content
            reader = new BufferedReader(new StringReader(readerContent));
        }
        return reader;
    }

    public ServletInputStream getInputStream()
        throws IOException
    {
        if (inputStream == null)
        {
            ByteArrayOutputStream os = new ByteArrayOutputStream();
            ServletInputStream superInputStream = super.getInputStream();
            int b = superInputStream.read();
            while (b != -1)
            {
                os.write(b);
                b = superInputStream.read();
            }
            inputStreamContent = os.toByteArray();
            inputStream = new ByteArrayServletInputStream(inputStreamContent);
        }
        return inputStream;
    }
}
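The ByteArrayServletInputStream used above isn't shown in the original post; a minimal sketch might look like the following (with a stand-in for javax.servlet.ServletInputStream so the snippet is self-contained; in the real filter you would extend the servlet-api class instead):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Stand-in for javax.servlet.ServletInputStream so this sketch compiles
// on its own; the real class lives in the servlet API.
abstract class ServletInputStream extends InputStream { }

// Replays a previously captured byte array as a ServletInputStream,
// so the wrapped request can hand the same body to the servlet again.
class ByteArrayServletInputStream extends ServletInputStream
{
    private final ByteArrayInputStream in;

    public ByteArrayServletInputStream(byte[] content)
    {
        in = new ByteArrayInputStream(content);
    }

    public int read() throws IOException
    {
        return in.read();
    }
}
```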

The response recorder was a bit trickier, because I needed to save things like status codes and content types. This implementation probably wouldn’t work for all clients (for example, it ignores any response headers), but since Gliffy is an OpenLaszlo app, and OpenLaszlo has almost no view into HTTP, this worked well for our purposes. Again, I had to wrap the OutputStream/Writer so I could record what was being sent back.

private class RecordingServletResponse extends HttpServletResponseWrapper
{
    public RecordingServletResponse(HttpServletResponse r)
    {
        super(r);
    }

    int statusCode;
    StringWriter stringWriter = null;
    ByteArrayOutputStream byteOutputStream = null;
    String contentType = null;

    private PrintWriter writer = null;
    private ServletOutputStream outputStream = null;

    public ServletOutputStream getOutputStream()
        throws IOException
    {
        if (outputStream == null)
        {
            byteOutputStream = new ByteArrayOutputStream();
            outputStream = new RecordingServletOutputStream(super.getOutputStream(),new PrintStream(byteOutputStream));
        }
        return outputStream;
    }

    public PrintWriter getWriter()
        throws IOException
    {
        if (writer == null)
        {
            stringWriter = new StringWriter();
            writer = new RecordingPrintWriter(super.getWriter(),new PrintWriter(stringWriter));
        }
        return writer;
    }

    public void sendError(int sc)
        throws IOException
    {
        statusCode = sc;
        super.sendError(sc);
    }

    public void sendError(int sc, String msg)
        throws IOException
    {
        statusCode = sc;
        super.sendError(sc,msg);
    }

    public void setStatus(int sc)
    {
        statusCode = sc;
        super.setStatus(sc);
    }

    public void setContentType(String type)
    {
        contentType = type;
        super.setContentType(type);
    }
}
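The RecordingServletOutputStream (and its character-stream twin, RecordingPrintWriter) aren't shown in the original post; the idea is a simple tee. Here's a sketch of the byte-stream version, again with a stand-in for the servlet-api class so the snippet is self-contained:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.io.PrintStream;

// Stand-in for javax.servlet.ServletOutputStream so this sketch compiles
// on its own; the real class lives in the servlet API.
abstract class ServletOutputStream extends OutputStream { }

// Tees every byte to the real response stream and to a recording copy.
// Sketch only; the original post does not show this class.
class RecordingServletOutputStream extends ServletOutputStream
{
    private final OutputStream real;
    private final PrintStream copy;

    public RecordingServletOutputStream(OutputStream real, PrintStream copy)
    {
        this.real = real;
        this.copy = copy;
    }

    public void write(int b) throws IOException
    {
        real.write(b);
        copy.write(b);
    }

    public void flush() throws IOException
    {
        real.flush();
        copy.flush();
    }

    public void close() throws IOException
    {
        real.close();
        copy.close();
    }
}
```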

The filter then creates these proxies and injects them into the actual servlet call:

public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
    throws IOException, ServletException
{
    RecordingServletRequest recordingRequest =
      new RecordingServletRequest((HttpServletRequest)request);
    RecordingServletResponse recordingResponse =
      new RecordingServletResponse((HttpServletResponse)response);

    chain.doFilter(recordingRequest,recordingResponse);
}

After the call to doFilter, we can then examine the proxy request/response and record the test. I'll spare you 20 lines of setXXX methods. I created a Java bean class and used XStream to serialize it. I then created another class that runs as a TestNG test to deserialize these files and make the same requests. I record the response and see if it matches.
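The recorded-test bean itself isn't shown; its shape was roughly as follows (the field names here are my invention for illustration, not the actual Gliffy class). XStream serializes such a bean with new XStream().toXML(test) and reads it back with fromXML():

```java
// Hypothetical recorded-test bean; field names are illustrative only.
public class RecordedTest
{
    private String path;
    private String method;
    private byte[] requestBody;
    private int expectedStatus;
    private String expectedBody;

    public String getPath() { return path; }
    public void setPath(String p) { path = p; }

    public String getMethod() { return method; }
    public void setMethod(String m) { method = m; }

    public byte[] getRequestBody() { return requestBody; }
    public void setRequestBody(byte[] b) { requestBody = b; }

    public int getExpectedStatus() { return expectedStatus; }
    public void setExpectedStatus(int s) { expectedStatus = s; }

    public String getExpectedBody() { return expectedBody; }
    public void setExpectedBody(String b) { expectedBody = b; }
}
```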

Running the Tests

There were a few problems with this approach:

  • The tests required certain test data to exist
  • Each test potentially modifies the database, meaning the tests have to be run in the order they were created.
  • The test results had temporal data in them that, while irrelevant to whether the tests "pass", complicated exact-match comparisons of results

TestNG (and JUnit) are not really designed for this; they are meant for proper unit testing, where each test can be run independently of the others and the results compared. While there are facilities for setting up test data and cleaning up, the idea of resetting the database before each of the 300 tests I would record was not appealing. Faking/mocking the database was not an option; I was creating these tests specifically to make sure my changes to the database layer were not causing regressions. I needed to test against a real database.

I ultimately decided to group my tests into logical areas, and ensure that: a) tests were run in a predictable order, and b) the first test of a group was run against a known dataset. I created a small, but useful, test dataset and created a TestNG test that would do both (a) and (b). It wasn't pretty, but it worked. This clearly isn't the way a unit test framework should be used, and I would call these sorts of tests functional, rather than unit. But since our CI system requires JUnit test results as output, and the JUnit format isn't documented, I might as well let TestNG handle that for me.

The last problem was making accurate comparisons of results. I did not want to have to parse the XML returned by the server. I settled on some regular expressions that stripped out temporal and transient data not relevant to the test. Both the expected and received content were run through this regexp filter and those results were compared. Parsing the XML might result in better failure messages (right now I have to do a visual diff, which is a pain), but I wasn’t convinced that the existing XML diff tools were that useful.
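That filtering can be sketched as follows; the patterns here are illustrative, not the actual ones Gliffy used:

```java
import java.util.regex.Pattern;

// Strips data that varies between runs (timestamps, generated ids)
// before expected and actual responses are compared.  The patterns
// below are illustrative only.
public class ResponseNormalizer
{
    // e.g. <create-date>2008-09-08T12:34:56</create-date>
    private static final Pattern DATES =
        Pattern.compile("<(create|update)-date>[^<]*</\\1-date>");

    // e.g. id="12345"
    private static final Pattern IDS = Pattern.compile("id=\"\\d+\"");

    public static String normalize(String xml)
    {
        String result = DATES.matcher(xml).replaceAll("<$1-date/>");
        result = IDS.matcher(result).replaceAll("id=\"?\"");
        return result;
    }
}
```

Both the expected and received content go through normalize() before the exact-match comparison.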

Results

Overall, it worked out great. I was able to completely overhaul the database layer, and the Gliffy client was none the wiser. We were even able to use these tests to remove our dependence on Struts, simplifying the application's deployment (we weren't using many features of Struts anyway). The final validation of these tests actually came recently, when we realized a join table needed to be exposed to our server-side code. This was a major change to two key data containers, and the recorded tests were crucial to finding the bugs it introduced.

So, if you don’t have the luxury of automated tests, you can always create them. I did a similar thing with EJB3 using the Interceptors concept.