On a daily basis I continue to be unimpressed with MySQL when using MyISAM tables. Today I discovered that if you declare a column not null and then do an insert without specifying that column, MySQL happily completes the insert when it should raise an error. Here’s an example:
create table test_state (
  id bigint not null auto_increment,
  date_created datetime not null,
  state varchar(2) not null,
  primary key (id)
) type=MyISAM;
insert into test_state (date_created) values (now());
insert into test_state (state) values ('WY');
select * from test_state;
+----+---------------------+-------+
| id | date_created        | state |
+----+---------------------+-------+
|  1 | 2005-02-17 09:25:28 |       |
|  2 | 0000-00-00 00:00:00 | WY    |
+----+---------------------+-------+
You’ve got to be kidding me! Instead of rejecting the inserts, MySQL silently substituted an empty string and a zero date. It would almost be better if “not null” had no meaning and were simply ignored; silently filling in defaults confuses the issue even more. Now, I don’t profess to be a MySQL expert by any stretch and I know this is probably solved with InnoDB, but IMHO this shouldn’t work in MyISAM either! Oh how I miss the days of working with PostgreSQL, Oracle, and DB2.
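For what it’s worth, it sounds like this is fixable in newer MySQL releases: version 5.0 adds a strict sql_mode that makes these inserts fail instead of silently substituting defaults. A sketch, since I haven’t verified it across versions:

```sql
-- Assumes MySQL 5.0+: in strict mode, omitting a not-null column
-- with no default raises an error instead of inserting '' or a zero date.
SET sql_mode = 'STRICT_ALL_TABLES';
insert into test_state (date_created) values (now());
-- ERROR 1364 (HY000): Field 'state' doesn't have a default value
```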
That said, I do enjoy MySQL’s low administrative overhead, its low memory footprint, and the excellent GNU readline and pipe support built into the client.
Almost every new web project starts with writing the Ant code to build a WAR. This build process is duplicated, with some variance, on every Ant-based web application project I've ever worked on.
We turn up our noses when we find developers copying and pasting code, yet why do we think it's OK to copy and paste the build process? In my opinion it is not! Using Maven I can start a new project with a nice build process and managed dependencies in no time flat. In Ant, to do the same, I'll copy and paste a build.xml I've used on another project and then begin copying jars into my lib directory. All of a sudden I have tremendous duplication between two projects. What's worse, almost every other company developing Java web applications is also writing a build.xml to build its artifacts (jars, wars, and ears) in almost the same way.
I want a build system that manages my dependencies and already knows how to run XDoclet, run unit tests, build javadoc, build a WAR, etc… As long as I put my files where they are expected to be (read: sensible defaults), the build system should be able to build my project almost out of the box. Of course, when I need to do custom operations or override a default, I need to be able to do that with a property or by breaking out into Ant. These are the reasons why I use Maven. I admit it's not a highly polished and well documented system (which Ant is), but I think Maven is the direction I want to see higher level build systems moving in.
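To make "sensible defaults" concrete, here's roughly what a minimal Maven 1 project descriptor looks like. The ids, versions, and directories below are examples, not from a real project:

```xml
<!-- project.xml: Maven 1.x descriptor; ids and versions are illustrative -->
<project>
  <pomVersion>3</pomVersion>
  <groupId>com.example</groupId>
  <artifactId>my-webapp</artifactId>
  <currentVersion>0.1</currentVersion>

  <!-- Declared dependencies are fetched from the remote repository;
       no jars copied into a lib directory by hand. -->
  <dependencies>
    <dependency>
      <groupId>springframework</groupId>
      <artifactId>spring</artifactId>
      <version>1.1.5</version>
    </dependency>
  </dependencies>

  <build>
    <!-- Conventional locations; omit these and the defaults apply -->
    <sourceDirectory>src/java</sourceDirectory>
    <unitTestSourceDirectory>src/test</unitTestSourceDirectory>
  </build>
</project>
```

With just that in place, something like maven war builds the WAR and maven test runs the unit tests, provided files live where the defaults expect them.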
I think Ant and Maven have a very complementary relationship. Maven is a nice high level build system and Ant is a nice low level build system, and since Maven lets me use Ant when I want to, I'm happy.
Chris has posted a 10 minute video showing off some of the nice features of developing with Trails (built on Ant, Tapestry, Spring, XDoclet, and Hibernate). This was inspired by the now famous Ruby on Rails video.
In theory I like the concept that you can prototype a web application and then fill in the details such as unit tests, dao layer, business logic, business facades, and web layer as your needs for complexity grow. Other solutions that I've seen in the past that attempt to provide this type of RAD environment ultimately seemed to hinder flexibility later in the project. With Trails that doesn't appear to be the case though.
At work our data is state specific and for each state we have a lot of data. It was decided to create a separate database for each state in MySQL long before I joined the company, and now we have a lot of legacy infrastructure that depends on it. In Oracle I would have solved it by creating a partitioned table, in which you can partition the data based on the value of a field (yet still have it be one table). I believe partitioned tables are a feature of MySQL Cluster, which I hope to learn more about in the future.
In moving the company from Perl to Java, one of my challenges has been creating the OR mapping layer. I chose Hibernate and therefore wanted the ability to use Spring's Hibernate DAO support for its generic unchecked data access exceptions. I also needed to have one class map to multiple databases depending on which state the data belongs to. Rather than make the project specific to our domain, I figured this is a general problem that other companies using open-source databases have solved in the same way, especially once their data gets too large for one database and can be easily partitioned. In our case we have a main shared database and then state-specific databases to store all state-specific data (which is the bulk of our data).
To solve the OR mapping issue I created a generic Maven / Hibernate / XDoclet / DBUnit kickstart project with a generic BaseDAOHibernate layer based on AppFuse, as well as my own BaseDAOPartitionHibernate to easily create DAOs for partitioned tables. I still use Spring's Hibernate DAO support, but I had to add a new HibernateDaoPartitionSupport class based on Spring's HibernateDaoSupport class.
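The idea behind the partition support is simple: instead of one injected SessionFactory, a DAO looks up the factory for the partition (state) it's working with. This stripped-down sketch models just that lookup, without the real Spring/Hibernate classes; every name here is hypothetical, and the string values stand in for actual SessionFactory instances:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a partition-keyed factory registry. In the real
// HibernateDaoPartitionSupport the values would be Hibernate SessionFactory
// instances that Spring configures, one per state database.
public class PartitionRegistry {
    private final Map<String, String> factoriesByState =
            new HashMap<String, String>();

    // Register a "factory" (here just a JDBC URL stand-in) for a state code.
    public void register(String stateCode, String factory) {
        factoriesByState.put(stateCode.toUpperCase(), factory);
    }

    // A partition-aware DAO calls this instead of getSessionFactory().
    public String factoryFor(String stateCode) {
        String f = factoriesByState.get(stateCode.toUpperCase());
        if (f == null) {
            throw new IllegalArgumentException(
                    "No partition registered for state " + stateCode);
        }
        return f;
    }

    public static void main(String[] args) {
        PartitionRegistry registry = new PartitionRegistry();
        registry.register("CA", "jdbc:mysql://dbhost/california");
        registry.register("ME", "jdbc:mysql://dbhost/maine");
        // Lookup is case-insensitive on the state code.
        System.out.println(registry.factoryFor("ca"));
    }
}
```

The nice part of keeping the lookup behind one method is that the DAO code reads exactly like a normal Spring DAO, just with a state code passed in.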
It's not very polished yet but if you're trying to use hibernate with identical schemas across multiple databases hopefully my partition-dao-kickstart-0.1.tgz project can help you get on the right track.
Thanks to Jeremy’s blog I discovered the MC Hawking CD. I’ve been listening to it for a few days and if you’re a rap loving nerd, this CD is for you! Don’t miss the flash movie.
Mike Clark made this short QuickTime video on continuous integration with CruiseControl. I love these kinds of videos (similar to the Ruby on Rails video) and would like to see more of them.
At my company we have 50+ MySQL databases on one server that have the same schema for the purpose of partitioning data (due to the size of each database). The decision goes back before my time, and of course there is a tremendous amount of spaghetti infrastructure (web, batch processing, cron scripts, reporting, etc…) relying upon this arrangement. The data is state specific, so for the purpose of this discussion we'll say all schemas are identical. If I need California data I can connect to that database and look it up the same way I would look it up in, say, the Maine database. One handy feature of MySQL is that I can get a connection to any database on the server and then access data in another database via that connection. For example, I can connect to the Ohio database and then select * from florida.sometable;. You'll see why this is handy in my discussion of iBatis below.
In moving our company from Perl to Java, one of the main challenges for me has been sorting out our approach to the ORM layer. I'm using Hibernate on the databases that have unique, one-off tables. However, I'm struggling to find an elegant way to get Hibernate to map one class to tables in 50+ different databases, and have come up with the following possibilities (that I still need to test), listed in order of my perceived level of simplicity:
- Use iBatis for these multi-database tables and have Spring maintain 1 database connection, using variable substitution in the SQL map statements: select * from $databasename$.table (iBatis's $...$ text substitution rather than a #...# bind parameter, since a database name can't be supplied as a prepared-statement parameter). This approach seems simple and promising on the surface. The downside is programmers will need to know two ORM packages, and it takes a little longer to write a DAO/POJO combo in iBatis than in Hibernate (at least for me).
- Use Hibernate and have Spring maintain the 50+ hibernate session factories and the 50+ datasources. I would then have Spring dependency inject a class that allows the DAO to fetch the appropriate hibernate session factory.
- Use JDBC and have Spring maintain 1 database connection. Then I would use select * from databasename.tablename and then manually do the ResultSet to POJO mapping.
- Put the company on hold and have engineering focus for a few months on refactoring the 50 databases down to one that supports partitioning, and refactor the legacy web, batch processing, and reporting systems. This one I feel is a little too costly and risky, since I can't think of a way to take an iterative approach given the number of systems involved.
If you have ideas please chime in!
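For options 1 and 3 above, the trick is the same: keep a single connection and qualify the table name with the target database. Here's a small sketch of just the query-building piece; the class and method names are made up for illustration, and the real version would hand the SQL to iBatis or JDBC:

```java
// Hypothetical helper for the single-connection approaches: builds
// "select * from <database>.<table>" for a given state's database.
public class PartitionedQueryBuilder {

    // Basic identifier check so a database/table name can't inject SQL;
    // a real version would validate against a known list of databases.
    private static boolean isValidIdentifier(String s) {
        return s != null && s.matches("[A-Za-z_][A-Za-z0-9_]*");
    }

    public static String selectAll(String database, String table) {
        if (!isValidIdentifier(database) || !isValidIdentifier(table)) {
            throw new IllegalArgumentException("Bad identifier");
        }
        return "select * from " + database + "." + table;
    }

    public static void main(String[] args) {
        // Connected to any database on the server, we can still read
        // Florida data via the qualified name:
        System.out.println(selectAll("florida", "sometable"));
    }
}
```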
I’ve had the same @iname email address for about 8 years now. I’ve been using Mail.com’s Iname forwarding service to forward to wherever I’m keeping my mail (usually on a Linux box I host) so that whenever I want to move my actual email account somewhere else I can. However, every year the iname forwarding service seems to get a little worse with numerous outages. Considering Go Daddy offers domains for around 8 bucks a year which includes mail forwarding I decided to make the switch.
Here’s my new completely convoluted setup which I’m trying out instead of iname coupled with SpamAssassin on my linux box (which just hasn’t been cutting it for me as a spam filter, perhaps I don’t have it configured correctly):
I have my new Go Daddy registered domain forward email to my GMail account. My GMail account spam filters and then forwards to my linux box (an address I don’t give out so I can change it when I need to). On the Linux box I use Pine, IMP, or IMAP to read my mail, respond, and compose messages with the From header set to my GoDaddy registered domain.
The main reason I did all of this is so my email address will never need to change again. I thought that was going to be the case when I went with iname but alas vendor lock-in is often problematic over the long haul. With complete control over my mail setup and how I forward it, spam filter it, etc… I’m optimistic that I can happily go about using my new email address for the foreseeable future.
I've finally completed a project that I've been thinking/scheming over for a couple of years: the Java-based Marine Wireless NMEA Navigation Server. Right up there with software engineering, my other really big passion is boats. I love sailing, and Susan and I own a 38 foot cutter named Sugata. Most boats transfer navigation data (wind speed/direction, boat speed/course, GPS, autopilot, etc…) around via serial cable using the NMEA protocol. Typically this limits you to having one laptop connected to all of your instruments in one place. I started thinking it would be cool to make the data available wirelessly so that any number of handhelds or laptops could access the data on a boat. This becomes more important as you get into larger and larger boats.
I needed a small 12 volt powered wireless access point with a serial connector (to connect to the NMEA devices). After a lot of searching I finally decided to use a Soekris board with 32MB of RAM and Pebble Linux running on a 256MB CF card (read-only), with a 200mW 802.11 transmitter and an external antenna. With 32MB of RAM I originally wrote the NMEA multiplexing/server software in J2ME, but finally decided to try J2SE with a very low (16MB) max memory setting, and sure enough it worked fine even under full NMEA data loads (which are admittedly pretty low). Then I needed a web interface to allow users to configure the wireless navigation server so they could easily change the administrator password, SSID, encryption, download the system log to send me for debugging, and so on. For that I opted for Perl using FastTemplate (mainly because I was lazy and it could easily run in the remaining 12MB or so) running inside the tiny BusyBox HTTP server. After I had the web interface mostly complete I also needed a way for users to upgrade the firmware/CF card (when I released new versions or fixed bugs). This is where you just have to love all of the Unix tools available: I was able to write the firmware upgrade portion in a couple of hours using familiar tools like md5, tar, bzip2, and mount (to remount the CF card read-write).
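The heart of the J2SE multiplexer is mundane: accept client sockets, read NMEA sentences line by line, validate each one, and rebroadcast to everyone connected. This sketch shows just the validation step; the class name and sentence are illustrative, not lifted from the actual server:

```java
// Sketch of NMEA 0183 sentence validation, the first step in a multiplexer.
// The checksum is the XOR of every character between '$' and '*',
// written as two hex digits after the '*'.
public class NmeaChecksum {

    public static boolean isValid(String sentence) {
        int star = sentence.lastIndexOf('*');
        if (!sentence.startsWith("$") || star < 1
                || star + 3 > sentence.length()) {
            return false;
        }
        int checksum = 0;
        for (int i = 1; i < star; i++) {
            checksum ^= sentence.charAt(i);
        }
        try {
            int expected = Integer.parseInt(
                    sentence.substring(star + 1, star + 3), 16);
            return checksum == expected;
        } catch (NumberFormatException e) {
            return false; // not two hex digits after '*'
        }
    }

    public static void main(String[] args) {
        // A wind-speed/angle sentence with a correct checksum:
        System.out.println(isValid("$IIMWV,045,R,12.3,N,A*12")); // prints true
    }
}
```

Dropping corrupt sentences at this stage matters on a boat, since serial lines to instruments are noisy and clients downstream trust what the server rebroadcasts.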
After I had the device built, working, and tested, I put out the word for some beta boats to try it out on. The response was excellent, with far more boats interested than I could afford hardware for. I got interested responses from commercial fishermen, racing sailboats, and many, many trawlers. Having something work in the lab (my living room) is always completely different from reality, as I soon discovered. There were a lot of bugs to fix and usability issues to address to get it working. I'm sure there's still a lot of work needed since it's only been tested in a handful of configurations.
One company starting around the same time I started this project was Rose Point Nav. They make commercial navigation software and they added direct network support to their software to make it really easy to interface their software with my wireless navigation server which was pretty cool of them.
The biggest challenge in the whole process for me has been motivation (since I have a full-time job) and writing the manual. It's difficult to make installing/configuring a device like this easy for the novice. Heck, companies like LinkSys have whole teams developing their products and manuals and many people still run into problems. I received help from friends and a business partner on the manual but writing the software was basically a solo effort. I'm going to start out by selling my wireless NMEA navigation server directly on the web (using the open-source osCommerce platform). If it doesn't really take off I plan to open-source the software so other people can build their own wireless navigation servers. Either way, it's been an interesting and exciting experience!
In learning Maven over the weekend I set about the process of creating a Maven Data Access Kickstart project (with Spring, Hibernate, JUnit, DBUnit, and XDoclet). Here's my Mavenized data access kickstart project if you're interested: maven-dao-kickstart-0.1.tgz
One of the main projects I'm working on now will involve a data access/business logic project and a separate web interface project which will depend on the data access/business logic project. With that in mind I used Maven to implement the above kickstart project. I borrowed some code from AppFuse as well as some of the hibernate/dbunit configuration from Rick Hightower's fine blog.
Matt Raible really pioneered the concept of a java kickstart project (AFAIK) with the introduction of AppFuse which is the most comprehensive java web kickstart project I know of. I would love to see a repository of java kickstart projects!
Update: I've started working with maven multiproject and I'm totally sold! I'm already excited about the time savings with the plugins and managing dependencies.
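For anyone curious, the multiproject layout is roughly this (directory names and ids here are illustrative, not from my actual project): a master project.properties tells the multiproject plugin where the subprojects live, and each subproject's project.xml extends the master descriptor so common settings are inherited.

```
# project.properties in the master project
maven.multiproject.includes=dao/project.xml,web/project.xml
```

```xml
<!-- dao/project.xml: a subproject inheriting from the master descriptor -->
<project>
  <extend>../project.xml</extend>
  <id>example-dao</id>
</project>
```

Then a single maven multiproject:install walks the subprojects and builds each one, which is where the time savings really show up.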