Large databases, long mysqldump times, long waits on globally locked tables. These problems never really go away when you rely on mysqldump with --all-databases or a list of databases, because it dumps schemas serially. I'm not going to explain serial vs. parallel processing here, since that's a larger topic. Suffice it to say that on today's multi-core / multi-CPU servers, a serial mysqldump export uses only a single core. So, I have a new script that attempts to alleviate those issues, and now I need testers to provide feedback and improvements.
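To make the serial-vs-parallel point concrete, here is a minimal sketch of the idea (not the actual script): fork one mysqldump per database and cap the number of concurrent dumps. The backup path, the MAX_JOBS limit, the excluded schemas, and the --single-transaction choice (appropriate for InnoDB) are my illustrative assumptions, and I'm assuming credentials come from ~/.my.cnf or a local socket.

```bash
#!/bin/bash
# Sketch only: dump each database in its own forked mysqldump
# process instead of one serial --all-databases run.

BACKUP_DIR=/var/backups/mysql   # illustrative path
MAX_JOBS=4                      # illustrative cap on concurrent dumps

mkdir -p "$BACKUP_DIR"

for db in $(mysql -N -B -e 'SHOW DATABASES' \
            | grep -Ev '^(information_schema|performance_schema)$'); do
    # Throttle: wait while MAX_JOBS dumps are already running
    while [ "$(jobs -r | wc -l)" -ge "$MAX_JOBS" ]; do
        sleep 1
    done
    # --single-transaction keeps lock time low for InnoDB tables
    mysqldump --single-transaction "$db" > "$BACKUP_DIR/$db.sql" &
done

wait   # block until every forked dump has finished
```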
In order to keep some sanity when dealing with hundreds of database servers, the script takes care of the following:
- low global lock times: solved by parallel tasks / forked processes (as sketched above)
- backup file checking: for mysqldump files, it checks for "-- Dump completed" at the end of the sql file (see the sketch after this list) …
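One way to express that completeness check, relying on the "-- Dump completed" trailer mysqldump writes as its final line; the helper name and paths are mine, not the script's:

```bash
# Hypothetical helper: verify a dump file ends with mysqldump's
# "-- Dump completed" trailer, i.e. the dump was not truncated.
check_dump() {
    local file="$1"
    # '--' stops option parsing so the leading dashes in the
    # pattern are not read as a grep flag
    if tail -n 1 "$file" | grep -q -- '-- Dump completed'; then
        echo "OK:   $file"
    else
        echo "FAIL: $file (dump incomplete or truncated)" >&2
        return 1
    fi
}

check_dump /var/backups/mysql/mydb.sql   # example invocation
```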