One of the tough decisions many people regularly face with DAOs and business facades is how best to test them. At work our current solution is to let DAOs interact with a database we've prepopulated with DBUnit, and in many cases we let business facades interact with real DAOs. However, you could correctly argue that such a test is no longer a unit test. Terminology aside, though, the question remains: which approach is better for testing your DAO and business layer?
On the positive side, as long as your DAO and business facade tests can interact with the database, writing tests is much easier, and I believe that encourages developers to write more tests, which is a good thing.
The downside though is that as your unit tests add up, running the full test suite can take a very long time. In terms of speed, having the majority of your JUnit tests be true “unit” tests that only test the unit of work you are concerned with is optimal. However, in cases where you have numerous or complex dependencies as is common in a business facade, your unit tests can get quite unruly and lengthy just to mock the dependencies.
Therefore the guidelines I work by are:
1. DAOs always get to interact with a real database prepopulated with DBUnit. I just don't have it in me to mock a JDBC layer or Hibernate session, and quite frankly I don't think it makes for as useful a test when those layers are mocked.
2. The business layer should use a mocked DAO layer when it's not too difficult and is reasonable to do. However, when dealing with a complex object graph or dependencies that would require undue amounts of mock setup, I prefer to just use the real thing to keep the JUnit code simple and readable.
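As a sketch of the mocked-DAO guideline, a hand-written stub keeps the business-layer test fast and database-free. All class names here are hypothetical, not from our codebase:

```java
// Hypothetical DAO interface the business facade depends on.
interface UserDao {
    String findUsername(int id);
}

// Business facade under test; it only knows the DAO interface,
// so any implementation (real or stubbed) can be plugged in.
class UserFacade {
    private final UserDao dao;

    UserFacade(UserDao dao) {
        this.dao = dao;
    }

    String greet(int id) {
        String name = dao.findUsername(id);
        return name == null ? "Hello, guest" : "Hello, " + name;
    }
}

// Hand-written stub: no database connection, no DBUnit fixture needed.
class StubUserDao implements UserDao {
    public String findUsername(int id) {
        return id == 1 ? "alice" : null;
    }
}

public class UserFacadeTest {
    public static void main(String[] args) {
        UserFacade facade = new UserFacade(new StubUserDao());
        if (!"Hello, alice".equals(facade.greet(1))) throw new AssertionError();
        if (!"Hello, guest".equals(facade.greet(2))) throw new AssertionError();
        System.out.println("ok");
    }
}
```

A DAO test, by contrast, would run its assertions against real rows that DBUnit loaded into the database beforehand.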
How do you approach testing your DAO and business layer?
Dion (whose blog I enjoy) wrote an interesting blog entry about forking a dependency in Java versus Ruby.
For example, say you need to patch Hibernate or ActiveRecord with a fix that hasn't been released yet. You want to roll that fixed/forked version out to your team, and then, once a new release ships with the patch, switch the team back to the main release. Of course, with a team you want to make this as painless as possible, ideally without them having to do anything.
In Java (especially using Maven) I find it relatively simple to fork a jar or plugin with my own patches and have the team start using it transparently, without them having to know or do anything beyond a CVS update. Then, when a new release of, say, Hibernate comes out with the fix in question, I just update the dependency in project.xml and the whole team automatically switches back to the standard distribution. Many team members may never even know the switch happened.
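In Maven terms the switch is just a version change in project.xml. The version strings below are invented for illustration; the patched jar is assumed to have been published to the team's internal repository:

```xml
<!-- while waiting for the official release, point at our patched build -->
<dependency>
  <groupId>hibernate</groupId>
  <artifactId>hibernate</artifactId>
  <version>2.1.7-patched</version>
</dependency>

<!-- once the official release ships with the fix, switch back -->
<dependency>
  <groupId>hibernate</groupId>
  <artifactId>hibernate</artifactId>
  <version>2.1.8</version>
</dependency>
```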
Now, with Ruby I'm not even sure how I would go about transparently getting the team to use a forked version of ActiveRecord without having to go around and ask each of them to do something. Or perhaps there's an easy way to do this with Ruby that I'm not familiar with?
One of our systems administrators at work set up our zero downtime deployment solution with Tomcat, which so far has worked pretty well for us. We needed a zero downtime deployment solution because we strive to practice agile development and therefore launch a new release of our site every 2 weeks, plus we need the ability to launch a bug fix without taking the site down. Lastly, we were able to do zero downtime deploys with our old Perl/mod_perl code base, so it wouldn't have helped our case to port from Perl to Java if we now had to incur downtime to launch a new release.
This is one area where Perl, Ruby, Python, et al. shine, because you can cut over to a new release in a matter of seconds by doing “mv website website.old; mv website.new website; cluster apachectl graceful”. You can also patch a file in the release branch and just move that file live without having to redeploy the whole release. Lastly, you can add debug log messages to a live class file just by editing it on the production site when the fit really hits the shan and you can't reproduce the bug on your dev or staging environments. Java has a lot of other merits though, such as excellent IDEs with extremely powerful refactoring abilities, OR mapping, a plethora of MVC frameworks, etc., which is why we chose it to replace Perl.
As it stands with our current setup we can launch a new release of the site with zero downtime unless incompatible database schema changes need to go live. In those cases we’ll schedule a maintenance window late at night to launch a release. Nobody likes staying up late at night though so I also view zero downtime deploys as advantageous for employee retention.
Here’s our current setup for zero downtime deploys:
1. We use replicated sessions in Tomcat so that the load balancer can bounce users around from one app server to another as it sees fit.
2. Our configuration is such that each physical server runs Apache, mod_jk (not mod_jk2), and a Tomcat instance.
3. Each mod_jk is configured to favor the Tomcat on the local machine but can fail over to a Tomcat running on another machine if the local instance gets shut down.
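A workers.properties along these lines expresses that setup. Treat it as a rough sketch rather than our exact config: property names vary between mod_jk releases, and the worker and host names are invented:

```properties
# load-balancer worker that Apache routes requests to
worker.list=loadbalancer
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=tomcat1,tomcat2

# one AJP13 worker per Tomcat instance;
# local_worker=1 tells the balancer to prefer the local instance
worker.tomcat1.type=ajp13
worker.tomcat1.host=localhost
worker.tomcat1.port=8009
worker.tomcat1.local_worker=1

worker.tomcat2.type=ajp13
worker.tomcat2.host=app2.domain.com
worker.tomcat2.port=8009
```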
To deploy a new WAR we have a script that we pass the path of the WAR file to. In the future we want to add functionality to this script to automatically SCP the WAR over from our staging/QA server, checksum it, and then deploy it. Here's what the script currently does:
1. It SSHs to each application server sequentially (using public/private key pairs to authenticate without a password).
2. Shuts down Tomcat; Apache's mod_jk then automatically fails over to a still-running Tomcat on another machine.
3. Drops the new WAR into place, starts Tomcat back up, and waits until Tomcat is listening on port 8009 again. At that point mod_jk on the local server starts using the local Tomcat again.
4. Moves on to the next application server and repeats until it's gone through all the app servers and the upgrade is complete.
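The steps above can be sketched as a shell loop. The host names, init-script path, and webapp path are invented for illustration, and a DRY_RUN flag is added so the sketch is safe to run as-is (drop the guard at the bottom for real use):

```shell
#!/bin/sh
# Rolling deploy sketch: upgrade one app server at a time while
# mod_jk fails traffic over to the still-running Tomcats.
WAR=${1:-site.war}
SERVERS=${SERVERS:-"app1 app2 app3"}   # hypothetical host names
RUN=${DRY_RUN:+echo}                   # DRY_RUN=1 prints commands instead of running them

deploy_war() {
  for host in $SERVERS; do
    # 1. stop Tomcat; mod_jk on this box fails over to another instance
    $RUN ssh "$host" /etc/init.d/tomcat stop
    # 2. drop the new WAR into place and start Tomcat back up
    $RUN scp "$WAR" "$host:/usr/local/tomcat/webapps/ROOT.war"
    $RUN ssh "$host" /etc/init.d/tomcat start
    # 3. wait until the AJP connector is listening again (port 8009)
    until $RUN nc -z "$host" 8009; do sleep 5; done
  done
}

# guarded so the sketch only executes in dry-run mode
if [ -n "$DRY_RUN" ]; then
  deploy_war
fi
```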
Building on my most recent post, another objective we have at work is that each build process (which maps to one CVS module) generates one and only one artifact (e.g. a single WAR, plain old Jar, or a Javaapp Jar). There are certainly dependencies between modules but any deployable artifact that gets built (website.war, batchjobs.jar, xmlfeeds.jar, etc…) should be self-contained and include all of its dependencies such as Spring, Hibernate, etc.
This brings me to the Javaapp Jar, which I'm a huge fan of. A WAR is great because it can contain class files, JSPs, jar dependencies, and what have you. When you want to deploy it into production you have one self-contained file that the release engineer can move live. A Javaapp Jar is basically the same thing: it's a Jar file that contains your classes and files as well as all of those of your dependencies. For non-Maven users you can also do this easily with Ant using the zipgroupfileset element.
This makes the job of the release engineer much easier. For example, rather than having to deploy batchjobs.jar, spring.jar, hibernate.jar, and 10 other dependencies to our production batch processing server, we have a single batchjobs.jar that gets moved to production. That single Jar file contains our batch job classes, our bizlogic jar, Spring, Hibernate, and everything else in a single self-contained file.
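In Ant, that single self-contained Jar can be produced by nesting zipgroupfileset inside the jar task. This is a sketch; the directory layout and Main-Class are assumptions, not our actual build:

```xml
<!-- build one self-contained batchjobs.jar: our classes plus the
     contents of every dependency jar unpacked into it -->
<jar destfile="dist/batchjobs.jar">
  <fileset dir="build/classes"/>
  <zipgroupfileset dir="lib" includes="*.jar"/>
  <manifest>
    <attribute name="Main-Class" value="com.example.batch.Main"/>
  </manifest>
</jar>
```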
One of our objectives at work with our build process is that whichever artifact (jar/war) gets built can be exactly the same everywhere, whether it's running on your workstation, CruiseControl, the QA staging server, production, etc. In other words, when we go to deploy the production WAR we don't want to have to build a custom “production” version just because the log4j properties or database connection information differs between deployment environments. My feeling is that this is a very good thing, because when the WAR passes QA we can just take the WAR from the QA server and move it into production.
There are many different ways to achieve this goal. Karl Baum blogged about his approach a while back. I thought I'd share how Andy set this up at work using Spring. So far it's been working great for us.
Essentially we allow properties in the WAR or Jar to be overridden using Java runtime parameters as follows: java -Dhibernate.dbhost=mysql-prod.domain.com -Dhibernate.connection.username=produser -Dhibernate.connection.password=prodpass, etc. So in each environment we have JAVA_OPTS set so that when we run Tomcat or run our batch jobs that reside in a Javaapp Jar, we don't have to build a new artifact for each environment.
Spring comes to the rescue here with its PropertyPlaceholderConfigurer bean. Here's what it looks like:
<!-- allow system properties to override ours -->
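In Spring the configurer looks roughly like this; the bean id and the classpath: prefixes are assumptions, but the file names and the system-property override behavior follow the description below:

```xml
<bean id="propertyConfigurer"
      class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="locations">
    <list>
      <value>classpath:database.properties</value>
      <value>classpath:deployment.properties</value>
    </list>
  </property>
  <!-- OVERRIDE mode makes -D system properties win over the files -->
  <property name="systemPropertiesModeName">
    <value>SYSTEM_PROPERTIES_MODE_OVERRIDE</value>
  </property>
</bean>
```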
So by default the WAR or Javaapp Jar will get its properties, such as database connection information, from database.properties, but anything in database.properties or deployment.properties can be overridden with a -D system property.
The major downside with this approach is that it won't work well in a hosted environment with lots of WAR files deployed on the same app server. If you've tackled this problem in a different way that works let me know how you've gone about it!
I’ve finally decided to make the switch to Subversion for all of my personal work which includes source control of software projects as well as managing documents across multiple work and home computers and operating systems.
Here are the 3 things I’ve noticed so far that are just great:
1. Subversion keeps a local copy of file history so I can do a svn diff or svn status offline and it doesn’t need to go back to the server.
2. If I want to see which files I’ve modified or need to be added I just run svn status. I’ve never liked having to do a cvs -nq update.
3. Renaming and moving files and directories is fully supported, Amen!
I prefer to do my SCM over SSH so I don't have to deal with yet another daemon process, Apache modules, and so on. So I'm running Subversion over SSH, just like I used to with CVS. The only downside with this approach is that, to make it usable (just like with CVS), you really need to use public keys for SSH authentication so you don't have to type your password over and over.
Here’s what was involved in setting up Subversion on my server:
1. On my Linux server I installed the Subversion RPMs. If you don't have root access to your machine you can also build it from source and install it under your user's home directory.
2. Then I created a directory to hold the subversion repositories under my home directory: mkdir /home/myuser/svn
3. Then I created a repository under that directory: svnadmin create --fs-type fsfs svn/myproject. Note I'm using the newer filesystem (FSFS) repository type because it uses less space and I've heard it's more stable.
Then to use the repository from my Windows desktop I did the following:
1. I’m a Cygwin user so I just ran Cygwin’s setup.exe and opted to install Subversion. You can also download and install the packages without Cygwin.
2. Then I went to the parent directory containing the directory I wanted to import and ran the following: svn import myprojectdir svn+ssh://firstname.lastname@example.org/home/myuser/svn/myproject -m “initial import”.
3. Then to make sure it worked I tried a checkout: svn checkout svn+ssh://email@example.com/home/myuser/svn/myproject myproject.
To install Subversion on my Linux desktop I just installed the Subversion RPMs like I did on my server. On Mac OS X you can use Fink to install it.
Lastly, for managing documents in Windows I've always enjoyed TortoiseCVS because it integrates directly with the file explorer. I'm hopeful that TortoiseSVN will work just as well but have yet to try it.
I've been looking for a good Ruby IDE as I continue to learn Ruby on Rails. So far the one that stands out the most is the Arachno Ruby IDE: it looks very full-featured, relatively polished, and includes an Apache server (only on Windows) which you can run from the IDE.
The other two that look interesting are:
1. The Mondrian Ruby IDE
2. RDT or Ruby Development Tools which is an Eclipse plugin, nice for us Java types.
I've also heard of a lot of people using TextPad for this purpose, or Vim and Emacs, but those are a little too low-level for my day-to-day programming taste.
If you’re doing any kind of development in Ruby, what are you using?
Update: I've started using the Ruby plugin for jEdit, which has nice IntelliJ-like code completion features and API integration. It's a bit of a hassle to install but otherwise very nice!
When writing JSPs using Spring MVC and Struts I've found myself hard-coding URLs and it just doesn't feel right. Later, when I decide to change the URL of a page, I have to go through and search-and-replace that URL in a dozen other pages… and it pains me!
Tapestry uses the PageLink component to do this quite gracefully. In your Tapestry HTML templates you simply put <a href="#" jwcid="pageX">Click me</a> and Tapestry will fill in the correct URL of pageX.
David Geary also blogged about adding this feature to Shale for JSF.
Basically, my feeling is that the actual URL of a page should live in only one place, and I should be able to change it to my heart's content without having to edit all of my other JSP files to point them to the new location.
What are some best practices or ideas for JSP based apps to help me avoid hard-coding URLs in the pages?
Update: so far I've received one good suggestion: use a resource bundle to store the actual URLs and a taglib to fill in the values.
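A sketch of that suggestion using the standard JSTL fmt taglib; the bundle name, key, and URL are all invented for illustration:

```jsp
<%-- urls.properties (on the classpath) would contain:
       url.accountSummary=/account/summary.do          --%>
<%@ taglib uri="http://java.sun.com/jsp/jstl/fmt" prefix="fmt" %>

<fmt:bundle basename="urls">
  <%-- look the URL up by key; changing the mapping later only
       touches urls.properties, not every JSP --%>
  <fmt:message key="url.accountSummary" var="accountUrl"/>
</fmt:bundle>
<a href="${accountUrl}">Account summary</a>
```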
Update 2: one other point I neglected to mention is that having your application resolve the URLs for you has the added benefit that a bad link should produce a page compilation error. Tapestry does this through the PageLink component, which means you won't get a page into production that has a bad application-internal URL. Whereas with Struts and Spring MVC it's much easier to get a bad URL into production, because catching it either requires a QA person clicking on each link in the page or an automated link-validation test.
There is a distinct lack of tabbed terminals for Windows. I've used SecureCRT 4, PuTTY, and more recently I've been using rxvt under Cygwin (which doesn't require X). However, none of these offers me tabbed terminals.
The latest version, SecureCRT 5, now supports tabs, which is nice to see. However, it's mainly designed for connecting to remote machines, which only solves half of my problem. The other half is that I want a tabbed terminal that gives me a Cygwin bash shell on my local machine.
Under Linux and FreeBSD we have the tabbed KDE Konsole, the GNOME Terminal, the Multi Gnome Terminal, and mrxvt, and finally under Mac OS X we have iTerm.
Why are there no good tabbed terminals for Windows, or do I just not know about them?
In one of my side ventures, which I've written about before, we develop and sell Linux- and Java-based NMEA wireless navigation servers for the marine market. The devices run a small embedded Linux distribution on CompactFlash, so we regularly need to write the image to new CF cards before sending out a new unit.
To set up a new batch of units I'd been using Linux and dd to write an image to the CompactFlash card. In a pinch I've also used VMware with a virtual machine running Linux to do this job.
However, this weekend I took a few new CF cards that needed to be imaged out of town along with my laptop, determined to find a way to do it under Windows directly. Not surprisingly, the answer was once again Unix, in the form of Cygwin; I couldn't find any way to do this with native Windows tools. So for the 0.001% of you who read this blog and also need to back up/restore CompactFlash cards, here's how you do it under Cygwin:
1. Figure out which Cygwin device the CompactFlash card is. To do this, first insert the card that you want to back up and then run cat /proc/partitions. If you have a new 256MB CF card in your card reader you should see an entry like this: 249007 sdb. It could be sdc, sde, etc., but you should be able to identify the device based on the size of the CF card in the slot.
2. To backup or make an image of a CF card (assuming /dev/sdb is your CF card) run dd if=/dev/sdb of=somefilename.dd bs=1M. Next do a chmod a-w somefilename.dd so you don’t accidentally overwrite the backup if you switch the if and of parameters in the next step.
3. Now when you get a new CF card that you want to write the image to, put the new CF card in the slot and run dd if=somefilename.dd of=/dev/sdb bs=1M.
Now your new CF card should be exactly the same as the old one. Of course, with Mac OS X and Linux, dd should already be installed on your system, so you don't need to install Cygwin.