The biggest challenge to moving large amounts of data into production with (almost) zero downtime in MySQL is that the old table is dropped and the new table locked while you're loading the data. If you try that while serving a lot of traffic, you'll get into trouble: database connections stack up waiting for the lock on that table to free, you hit max connections, and most of your users get a 500 error.
If you need to move big tables live, you basically have two options:
1. Schedule downtime: boo!
2. Import the new data into a table with an alternate name, then drop the old table and rename the new one to the old name. The latter part of the operation is extremely quick in MySQL, and we do it all the time while serving over a thousand page views a minute with nary a 500 error to show.
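In SQL terms, the swap at the end of option 2 looks roughly like this (the table names here are hypothetical, and the bulk-load step is elided):

```sql
-- Load the fresh data under an alternate name while the live
-- table keeps serving reads.
CREATE TABLE listings_new LIKE listings;
-- ... bulk-load listings_new here (LOAD DATA INFILE, INSERT ..., etc.) ...

-- The swap: each of these statements is very fast regardless of
-- table size, so the window where queries block is tiny.
DROP TABLE listings;
RENAME TABLE listings_new TO listings;
```

A single `RENAME TABLE listings TO listings_old, listings_new TO listings` is another way to do the swap; it's atomic and keeps the old data around until you explicitly drop it.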
The standard version of mysqldump doesn't offer an option to dump a table so that it imports into the new database under a temporary name and then quickly replaces the old table. However, JDD (GreatSchools' former sysadmin extraordinaire, who now works for MySQL) submitted a patch a long while back that does exactly this. The patch adds a -R option: the dump file it writes loads the data into a table with a temporary name, then performs the almost instantaneous operation of dropping the old table and renaming the new one into its place.
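I don't have the patch in front of me, but the dump file it writes behaves roughly like this sketch (table name, suffix, and columns are illustrative, not the patch's actual output):

```sql
-- Rough shape of a dump written with the patched mysqldump -R.
-- The slow CREATE/INSERT work runs against a temporary name, so
-- the live table stays available the whole time.
CREATE TABLE `listings_tmp` (
  `id` INT NOT NULL,
  `name` VARCHAR(255)
);
INSERT INTO `listings_tmp` VALUES (1,'...'),(2,'...');

-- Only these last two statements touch the live table, and both
-- are near-instant.
DROP TABLE `listings`;
RENAME TABLE `listings_tmp` TO `listings`;
```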
The patch currently only works on MySQL 4.0 but we have an in-house version we’ve ported to MySQL 5.0 (which I can post here once we’ve determined it’s stable). Now that we’re used to being able to move gigabytes of data live during production hours, I can’t imagine going back to having to regularly work late nights with scheduled downtime just to get new data live!
Does this technique work for InnoDB?
I've only tried it with MyISAM so far, but there's nothing in there that shouldn't work with InnoDB. However, if there are foreign key dependencies on the table about to be reloaded, that would cause a problem.
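With InnoDB, the `DROP` will fail outright if child tables reference the one being swapped. A possible workaround (untested here, and only safe if the replacement table keeps the same referenced columns and indexes, since the child constraints refer to the table by name) is to skip foreign key validation for the session doing the swap:

```sql
-- Hypothetical workaround for a parent table with child FKs.
-- FOREIGN_KEY_CHECKS only disables validation; the child tables'
-- constraints still exist and will point at whatever table ends
-- up holding this name after the rename.
SET FOREIGN_KEY_CHECKS = 0;
DROP TABLE listings;
RENAME TABLE listings_new TO listings;
SET FOREIGN_KEY_CHECKS = 1;
```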