Posts Tagged ‘jpa’

Didn’t do Test-Driven Design? Record your test cases later

Monday, September 8th, 2008

Following on from my post on Gliffy’s blog…

On more than a few occasions, I’ve been faced with making significant refactorings to an existing application. These are things where we need to overhaul an architectural component without breaking anything or changing the application’s features. For an application without any test cases, this is not only scary, but ill-advised.

I believe this is the primary reason that development shops hang on to outdated technology. I got a job at a web development shop after four years of doing nothing but Swing and J2EE. My last experience with Java web development was Servlets, JSPs and taglibs. This company was still using these as the primary components of their architecture. No Struts, no Spring, no Seam. Why? One reason was that they had no test infrastructure, and therefore no ability to refactor anything.

Doing it anyway

Nevertheless, sometimes the benefits outweigh the costs and you really need to make a change. At Gliffy, I was hired to create an API to integrate editing Gliffy diagrams into the workflow of other applications. After a review of their code and architecture, the principals and I decided that the database layer needed an overhaul. It was using JDBC/SQL and had become difficult to change (especially to the new guy: me). I suggested moving to the Java Persistence API (backed by Hibernate), and they agreed. The only problem was making sure I didn’t break anything. They didn’t have automated tests, and I was totally new to the application environment.

They did have test scripts for testers to follow that would hit various parts of the application. Coming from my previous environment, that in and of itself was amazing. Since the application communicates with the server entirely via HTTP POST, and receives mostly XML back, I figured I could manually execute the tests and record them in a way that let them be played back later as regression tests.

Recording Tests

This is surprisingly easy thanks to the filtering features of the Servlet specification:


<filter>
  <filter-name>recorder</filter-name>
  <filter-class>com.gliffy.test.online.RecordServletFilter</filter-class>
</filter>

<filter-mapping>
  <filter-name>recorder</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
The filter code is a bit more complex, because I had to create proxy classes for HttpServletRequest and HttpServletResponse. Here’s how everything fits together:

The request proxy had to read everything from the request’s input stream, save it, and hand the caller a new stream that replays the same data. It had to do the same thing with the Reader. I’m sure it’s an error to use both in the same request, and Gliffy’s code didn’t do that, so this worked well.

private class RecordingServletRequest extends javax.servlet.http.HttpServletRequestWrapper
{
    BufferedReader reader = null;
    ServletInputStream inputStream = null;

    String readerContent = null;
    byte inputStreamContent[] = null;

    public RecordingServletRequest(HttpServletRequest r) { super(r); }

    public BufferedReader getReader()
        throws IOException
    {
        if (reader == null)
        {
            StringWriter writer = new StringWriter();
            BufferedReader superReader = super.getReader();
            int ch = superReader.read();
            while (ch != -1)
            {
                writer.write(ch);
                ch = superReader.read();
            }
            readerContent = writer.toString();
            // cache the reader; re-reading super.getReader() would find an exhausted stream
            reader = new BufferedReader(new StringReader(readerContent));
        }
        return reader;
    }

    public ServletInputStream getInputStream()
        throws IOException
    {
        if (inputStream == null)
        {
            ByteArrayOutputStream os = new ByteArrayOutputStream();
            ServletInputStream superInputStream = super.getInputStream();
            int b = superInputStream.read();
            while (b != -1)
            {
                os.write(b);
                b = superInputStream.read();
            }
            inputStreamContent = os.toByteArray();
            inputStream = new ByteArrayServletInputStream(inputStreamContent);
        }
        return inputStream;
    }
}

The response recorder was a bit trickier, because I needed to save things like status codes and content types. This implementation probably wouldn’t work for all clients (for example, it ignores any response headers), but since Gliffy is an OpenLaszlo app, and OpenLaszlo has almost no view into HTTP, this worked well for our purposes. Again, I had to wrap the OutputStream/Writer so I could record what was being sent back.

private class RecordingServletResponse extends HttpServletResponseWrapper
{
    public RecordingServletResponse(HttpServletResponse r)
    {
        super(r);
    }

    int statusCode;
    StringWriter stringWriter = null;
    ByteArrayOutputStream byteOutputStream = null;
    String contentType = null;

    private PrintWriter writer = null;
    private ServletOutputStream outputStream = null;

    public ServletOutputStream getOutputStream()
        throws IOException
    {
        if (outputStream == null)
        {
            byteOutputStream = new ByteArrayOutputStream();
            outputStream = new RecordingServletOutputStream(super.getOutputStream(),new PrintStream(byteOutputStream));
        }
        return outputStream;
    }

    public PrintWriter getWriter()
        throws IOException
    {
        if (writer == null)
        {
            stringWriter = new StringWriter();
            writer = new RecordingPrintWriter(super.getWriter(),new PrintWriter(stringWriter));
        }
        return writer;
    }

    public void sendError(int sc)
        throws IOException
    {
        statusCode = sc;
        super.sendError(sc);
    }

    public void sendError(int sc, String msg)
        throws IOException
    {
        statusCode = sc;
        super.sendError(sc,msg);
    }

    public void setStatus(int sc)
    {
        statusCode = sc;
        super.setStatus(sc);
    }

    public void setContentType(String type)
    {
        contentType = type;
        super.setContentType(type);
    }
}
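The RecordingServletOutputStream and RecordingPrintWriter classes referenced above aren’t shown in the post; they are simple “tee” wrappers that forward every write to the real response while copying it into the recording buffer. Here’s a minimal sketch of the writer half (the class name matches the code above, but the body is my assumption, not Gliffy’s actual implementation):

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.io.Writer;

// Hypothetical "tee" writer: everything written goes to the real
// response writer AND to a second writer that records it.
class RecordingPrintWriter extends PrintWriter
{
    private final Writer recorder;

    public RecordingPrintWriter(Writer target, Writer recorder)
    {
        super(target);
        this.recorder = recorder;
    }

    public void write(int c)
    {
        super.write(c);
        try { recorder.write(c); } catch (IOException e) { setError(); }
    }

    public void write(char[] buf, int off, int len)
    {
        super.write(buf, off, len);
        try { recorder.write(buf, off, len); } catch (IOException e) { setError(); }
    }

    public void write(String s, int off, int len)
    {
        super.write(s, off, len);
        try { recorder.write(s, off, len); } catch (IOException e) { setError(); }
    }

    public void flush()
    {
        super.flush();
        try { recorder.flush(); } catch (IOException e) { setError(); }
    }
}
```

The ServletOutputStream version would do the same thing at the byte level.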

The filter then needs to instantiate these wrappers and inject them into the actual servlet call:

public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
    throws IOException, ServletException
{
    RecordingServletRequest recordingRequest =
      new RecordingServletRequest((HttpServletRequest)request);
    RecordingServletResponse recordingResponse =
      new RecordingServletResponse((HttpServletResponse)response);

    chain.doFilter(recordingRequest,recordingResponse);

    // ... examine the recorded request/response and save the test ...
}

After the call to doFilter, we can then examine the proxy request/response and record the test. I’ll spare you the 20 lines of setXXX methods. I created a Java bean class and used XStream to serialize it. I then created another class, run as a TestNG test, that deserializes these files, makes the same requests, records the responses, and checks whether they match.
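The recorded-test bean itself isn’t shown in the post. To illustrate the round-trip idea with nothing but the JDK, here’s a hypothetical bean (the field names are my invention) serialized with java.beans.XMLEncoder; the actual code used XStream, which works similarly:

```java
import java.beans.XMLDecoder;
import java.beans.XMLEncoder;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;

// Hypothetical recorded-test bean; the real bean's fields aren't shown in the post.
public class RecordedTest
{
    private String path;             // URL the test POSTs to
    private String requestBody;      // captured request content
    private String expectedResponse; // captured response content
    private int expectedStatus;      // captured HTTP status code

    public String getPath() { return path; }
    public void setPath(String p) { path = p; }
    public String getRequestBody() { return requestBody; }
    public void setRequestBody(String b) { requestBody = b; }
    public String getExpectedResponse() { return expectedResponse; }
    public void setExpectedResponse(String r) { expectedResponse = r; }
    public int getExpectedStatus() { return expectedStatus; }
    public void setExpectedStatus(int s) { expectedStatus = s; }

    /** Writes the bean to XML, much as XStream would. */
    public static byte[] toXml(RecordedTest t)
    {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        XMLEncoder encoder = new XMLEncoder(out);
        encoder.writeObject(t);
        encoder.close();
        return out.toByteArray();
    }

    /** Reads a bean back, as the TestNG playback class would. */
    public static RecordedTest fromXml(byte[] xml)
    {
        XMLDecoder decoder = new XMLDecoder(new ByteArrayInputStream(xml));
        RecordedTest t = (RecordedTest)decoder.readObject();
        decoder.close();
        return t;
    }
}
```

The playback class then just loads each file, POSTs the saved request body to the saved path, and compares the response.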

Running the Tests

There were a few problems with this approach:

  • The tests required certain test data to exist
  • Each test potentially modified the database, meaning the tests had to be run in the order they were created
  • The test results contained temporal data that, while irrelevant to whether a test “passed”, complicated exact-match comparisons of results

TestNG (and JUnit) are not really designed for this; they are intended for proper unit testing, where each test can be run independently of the others. While there are facilities for setting up test data and cleaning up afterward, the idea of resetting the database before each of the 300 tests I would record was not appealing. Faking/mocking the database was not an option; I was creating these tests specifically to make sure my changes to the database layer were not causing regressions. I needed to test against a real database.

I ultimately decided to group my tests into logical areas, and ensure that: a) tests were run in a predictable order, and b) the first test of a group was run against a known dataset. I created a small, but useful, test dataset and created a TestNG test that would do both (a) and (b). It wasn’t pretty, but it worked. This clearly isn’t the way a unit test framework should be used, and I would call these sorts of tests functional, rather than unit, tests. But since our CI system requires JUnit test results as output, and the JUnit format isn’t documented, I might as well let TestNG produce that for me.

The last problem was making accurate comparisons of results. I did not want to have to parse the XML returned by the server. I settled on some regular expressions that stripped out temporal and transient data not relevant to the test. Both the expected and received content were run through this regexp filter and those results were compared. Parsing the XML might result in better failure messages (right now I have to do a visual diff, which is a pain), but I wasn’t convinced that the existing XML diff tools were that useful.
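The post doesn’t show the actual regular expressions, but as a sketch of the approach, a scrubber might replace timestamps and generated ids with fixed placeholders before comparing (the patterns here are illustrative guesses, not Gliffy’s real expressions):

```java
import java.util.regex.Pattern;

// Hypothetical scrubber in the spirit of the post: strip data that varies
// between runs so expected and actual XML can be compared verbatim.
public class ResponseScrubber
{
    // Illustrative patterns only; the real expressions aren't shown in the post.
    private static final Pattern TIMESTAMP =
        Pattern.compile("\\d{4}-\\d{2}-\\d{2}[T ]\\d{2}:\\d{2}:\\d{2}");
    private static final Pattern GENERATED_ID =
        Pattern.compile("id=\"\\d+\"");

    /** Replaces run-specific data with fixed placeholders. */
    public static String scrub(String xml)
    {
        String result = TIMESTAMP.matcher(xml).replaceAll("TIMESTAMP");
        result = GENERATED_ID.matcher(result).replaceAll("id=\"ID\"");
        return result;
    }
}
```

Running both the expected and the actual XML through the same scrubber makes an exact string comparison meaningful.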

Results

Overall, it worked out great. I was able to completely overhaul the database layer, and the Gliffy client was none the wiser. We were even able to use these tests to remove our dependence on Struts, simplifying the application’s deployment (we weren’t using many features of Struts anyway). The final validation of these tests actually came recently, when we realized a join table needed to be exposed to our server-side code. This was a major change to two key data containers, and the recorded tests were crucial to finding the bugs it introduced.

So, if you don’t have the luxury of automated tests, you can always create them. I did a similar thing with EJB3 using the Interceptors concept.

Using ThreadLocal and Servlet Filters to cleanly access a JPA EntityManager

Wednesday, May 14th, 2008

My current project is slowly moving from JDBC-based database interaction to JPA-based. Following good sense, I’m trying to change things as little as possible. One of those things is that we are deploying under Tomcat and not under a full-blown J2EE container. This means that EJB3 is out. After my post regarding this configuration, I quickly realized that my code started to get littered with:

EntityManager em = null;
try
{
  em = EntityManagerUtil.getEntityManager();
  // do stuff with entity manager
}
finally
{
  try {
    if (em != null) em.close();
  } catch (Throwable t) {
    logger.error("While closing an EntityManager",t);
  }
}

Pretty ugly, and seriously annoying to have to add 13 lines of boilerplate to any method that needs to interact with the database. The Hibernate docs suggest using a ThreadLocal variable to provide access to the EntityManager throughout the life of a request (which wouldn’t really work for a Swing app, but since this is servlet-based, it should work fine). The ThreadLocal javadocs contain possibly the most annoying example ever, and I couldn’t follow how to use it.
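For the record, the idiom itself is simple; here’s a stripped-down example (unrelated to JPA, and my own illustration rather than the javadoc’s) showing that each thread calling get() sees its own, independently initialized value:

```java
// Minimal ThreadLocal demonstration: each thread gets its own value.
public class ThreadLocalDemo
{
    static final ThreadLocal<StringBuilder> BUFFER =
        new ThreadLocal<StringBuilder>()
        {
            protected StringBuilder initialValue()
            {
                return new StringBuilder();
            }
        };

    public static void main(String[] args) throws InterruptedException
    {
        BUFFER.get().append("main");
        Thread other = new Thread(new Runnable()
        {
            public void run()
            {
                // this thread's get() returns a fresh StringBuilder, not main's
                System.out.println("other thread sees: " + BUFFER.get().append("other"));
            }
        });
        other.start();
        other.join();
        System.out.println("main thread sees: " + BUFFER.get());
    }
}
```

The filter below does exactly this, except the value is an EntityManager set at the start of each request.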

Anyway, I finally got around to it, and solved the close problem as well, by using a Servlet Filter. This type of thing would normally be solved by Spring or Guice, but I didn’t want to drag either of those into the application just to refactor this one thing; I would’ve easily spent the rest of the day dealing with XML configuration and deployment.

The solution was quite simple:

/** Provides access to the entity manager.  */
public class EntityManagerUtil
{
    public static final ThreadLocal<EntityManager>
        ENTITY_MANAGERS = new ThreadLocal<EntityManager>();

    /** Returns the current thread's EntityManager. */
    public static EntityManager getEntityManager()
    {
        return ENTITY_MANAGERS.get();
    }
}
public class EntityManagerFilter implements Filter
{
    private Logger itsLogger = Logger.getLogger(getClass().getName());
    private static EntityManagerFactory theEntityManagerFactory = null;

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
        throws IOException, ServletException
    {
        EntityManager em = null;
        try
        {
            em = theEntityManagerFactory.createEntityManager();
            EntityManagerUtil.ENTITY_MANAGERS.set(em);
            chain.doFilter(request,response);
        }
        finally
        {
            // always clear the ThreadLocal, even if the servlet threw
            EntityManagerUtil.ENTITY_MANAGERS.remove();
            try
            {
                if (em != null)
                    em.close();
            }
            catch (Throwable t) {
                itsLogger.error("While closing an EntityManager",t);
            }
        }
    }
    public void init(FilterConfig config)
    {
        destroy();
        theEntityManagerFactory =
          Persistence.createEntityManagerFactory("gliffy");
    }
    public void destroy()
    {
        if (theEntityManagerFactory != null)
            theEntityManagerFactory.close();
    }
}

So, when the web app gets deployed, the entity manager factory is created (and closed when the web app is removed). Each thread that calls EntityManagerUtil to get an EntityManager gets a fresh one that persists for the duration of the request. When the request is completed, the entity manager is closed automatically.

Using Java Persistence with Tomcat and no EJBs

Thursday, May 8th, 2008

The project I’m working on is deployed under Tomcat and isn’t using EJBs. The codebase is using JDBC for database access and I’m looking into using some O/R mapping. Hibernate is great, but Java Persistence is more desirable, as it’s more of a standard. Getting it to work with EJB3 is dead simple. Getting it to work without EJB was a bit more problematic.

The entire application is being deployed as a WAR file. As such, the JPA configuration artifacts weren’t getting picked up. Setting aside how absolutely horrendous Java Enterprise configuration is, here’s what ended up working for me:

  • Create a persistence.xml file as per the standard documentation, leaving out the jta-data-source stanza (I could not figure out how to get Hibernate/JPA to find my configured data source)
  • Create your hibernate.cfg.xml, being sure to include JDBC connection info. This will result in Hibernate managing connections for you, which is fine
  • Create a persistence jar containing:
    • Hibernate config at the root
    • persistence.xml in META-INF
    • All classes with JPA annotations at the root (obviously in their Java package/directory structure)
  • This jar goes into WEB-INF/lib of the war file (being careful to omit the JPA-annotated classes from WEB-INF/classes)

The first two steps took a while to get to and aren’t super clear from the documentation.
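For reference, a minimal persistence.xml along these lines might look like the following; the unit name matches the code below, but the rest is an illustrative sketch, not my exact file:

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
  <!-- no jta-data-source stanza; Hibernate manages connections
       via hibernate.cfg.xml at the root of the persistence jar -->
  <persistence-unit name="gliffy" transaction-type="RESOURCE_LOCAL">
    <provider>org.hibernate.ejb.HibernatePersistence</provider>
  </persistence-unit>
</persistence>
```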

To use JPA, this (non-production quality) code works:

EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("name used in persistence.xml");
EntityManager em = emf.createEntityManager(); 

Query query = em.createQuery("from Account where name = :name");
query.setParameter("name",itsAccountName);
List results = query.getResultList();

// do stuff with your results

em.close();
emf.close();

The EntityManagerFactory is supposed to survive the life of the application and not be created/destroyed on every request.

I also believe there might be some transaction issues with this, but I can’t figure out from the documentation what they are and if they are a big deal for a single-database application.

Update: Turns out, it’s not quite this simple. Since this configuration runs outside an EJB container, and given Bug #2382, you can query all day long, but you cannot persist. To solve this, you must work in a transaction, like so:

EntityManagerFactory emf =
    Persistence.createEntityManagerFactory("name used in persistence.xml");
EntityManager em = emf.createEntityManager();
EntityTransaction tx = em.getTransaction();

tx.begin();
Query query = em.createQuery("from Account where name = :name");
query.setParameter("name",itsAccountName);
List results = query.getResultList();

// modify your results somehow via persist()
// or merge()

tx.commit();
em.close();
emf.close();

Again, this is not production code as no error handling has been done at all, but you get the point.