There was one MySQL server with an Adaptec RAID controller and 4
disks. One of the disks developed media errors, which made the
whole SCSI bus unavailable and left us with a corrupted InnoDB
table.
Luckily we did have backups: a full backup plus incrementals.
So to restore the backups I installed XtraBackup and MySQL 5.5 on
another server.
Then the first step was to 'prepare' the backup. This worked okay
for the full backup (redo only).
The second step, applying the incrementals, failed on the first
incremental. This was easily resolved by specifying full paths
instead of relative paths.
Then the backup was fully prepared using the redo logs and undo
logs.
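The prepare sequence described above can be sketched roughly as
follows. This is only a sketch assuming XtraBackup's innobackupex
wrapper; the directory names are hypothetical and options may vary
between versions:

```shell
# Hypothetical backup locations
BASE=/backups/full
INC1=/backups/inc1
INC2=/backups/inc2

# 1. Prepare the full backup, replaying the redo log only
innobackupex --apply-log --redo-only "$BASE"

# 2. Apply each incremental; absolute paths avoid the
#    relative-path failure mentioned above
innobackupex --apply-log --redo-only "$BASE" --incremental-dir="$INC1"
innobackupex --apply-log "$BASE" --incremental-dir="$INC2"  # last one: no --redo-only

# 3. Final prepare: roll back uncommitted transactions (undo phase)
innobackupex --apply-log "$BASE"
```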
As XtraBackup doesn't back up your my.cnf, we copied the my.cnf
from another server and adjusted it for this server. The my.cnf
in your backup only contains everything needed for a …
While doing some routine fiddling regarding some topic I've now
forgotten, I discovered that ndb_restore
was doing
something quite surprising. It's been common wisdom for some time
that one can use ndb_restore -m
to restore metadata
into a new cluster and automatically have your data
re-partitioned across the data nodes in the destination cluster.
In fact, this was the recommended procedure for adding nodes to a
cluster before online add node came along. Since MySQL Cluster
7.0, however, ndb_restore
hasn't behaved that way;
the change in behavior doesn't seem to be documented, and most
people don't know it ever took place.
I'll go through some of the methods you can use to find
information about the partitioning strategy for an NDB table,
talk a bit about why ndb_restore
stopped working the
way almost everyone expected (and still expects) it to, and discuss
some possible …
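As one example of inspecting a table's partitioning, the ndb_desc
utility shipped with MySQL Cluster can print the fragment layout.
The connect string, database and table names below are made up:

```shell
# Print table metadata, including the fragment (partition) layout;
# -p adds per-partition information such as row counts.
ndb_desc -c mgmhost:1186 -d mydb mytable -p
```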
Yesterday I was discussing with a fellow DBA about ways to check
the status of existing and/or past RMAN jobs. Good backup scripts
usually write their output to some sort of log file, so checking
the output is usually a straightforward task. However, backup
jobs can be scheduled in many different ways (crontab, Grid
Control, Scheduled Tasks, etc) and finding the log file may be
tricky if you don’t know the environment well.
Furthermore, log files may also have already been overwritten by
the next backup or simply deleted. An alternative way of
accessing that information may therefore come in handy.
Fortunately, RMAN keeps the backup metadata around for some time and it can be accessed through the database’s V$ views. Obviously, if you need this information because your database just crashed and needs to be restored, the method described here is useless.
Backup jobs’ status and metadata
A lot of metadata about …
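As a sketch, recent job history can be pulled from
V$RMAN_BACKUP_JOB_DETAILS (available in 10g and later). The
connection method here is an assumption; adjust it for your
environment:

```shell
# Query RMAN job history via sqlplus; assumes a local
# OS-authenticated sysdba connection.
sqlplus -s / as sysdba <<'EOF'
SET LINESIZE 150 PAGESIZE 50
SELECT session_key, status, input_type,
       TO_CHAR(start_time, 'YYYY-MM-DD HH24:MI') AS started,
       TO_CHAR(end_time,   'YYYY-MM-DD HH24:MI') AS ended
FROM   v$rman_backup_job_details
ORDER  BY start_time DESC;
EOF
```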
A user on a linux user group mailing list asked about this, and I was one of the people replying. Re-posting here as I reckon it’s of wider interest.
> [...] tens of gigs of data in MySQL databases.
> Some in memory tables, some MyISAM, a fair bit InnoDB. According to my
> understanding, when one doesn’t have several hours to take a DB
> offline and do dbbackup, there was/is ibbackup from InnoBase.. but now
> that MySQL and InnoBase have both been ‘Oracle Enterprised’, said
> product is now restricted to MySQL Enterprise customers..
>
> Some quick searching has suggested Percona XtraBackup as a potential
> FOSS alternative.
> What backup techniques do people employ around these parts for backups
> of large mixed MySQL data sets where downtime *must* be minimised?
>
> Has your backup plan ever been put to the test?
You should put it to the test regularly, not just when it’s needed. …
Backups are an integral and very important part of any system. Backups allow you to recover your data and be up and running again in the event of problems such as system crashes, hardware failures or users deleting data by mistake. I had been evaluating backup solutions for a while but, to be honest, I wasn't really satisfied with the solutions available until I came across XtraBackup, and I have been loving it since. In this post I intend to show how to do backups and restores using XtraBackup.
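As a preview, the basic cycle with the innobackupex wrapper looks
roughly like this. This is a sketch: paths and credentials are
placeholders, and the timestamped directory name is whatever your
backup run produced:

```shell
# 1. Take a full backup while the server keeps running
innobackupex --user=root --password=secret /backups/

# 2. Prepare it: replay the redo log so the files are consistent
innobackupex --apply-log /backups/TIMESTAMP/

# 3. Restore: stop mysqld, copy the files back, fix ownership
innobackupex --copy-back /backups/TIMESTAMP/
chown -R mysql:mysql /var/lib/mysql
```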
I don't usually post these simple tricks, but this one came to my
attention today; it's very simple, yet I've seen people run into
trouble trying to get around it. It tries to answer the question:
how do I restore my production backup to a different
schema? It looks obvious, but I haven't seen many people
think about it.
Most of the time backups using mysqldump will include the
following line:
USE `schema`;
This is OK when you're trying to either (re)build a slave or
restore a production database. But what about restoring it to a
test server in a different schema?
The actual trick
Using vi (or a similar editor) to edit the line will most
likely result in the editor trying to load the whole backup file
into memory, which might cause paging or even crash the server if
the backup is big enough (I've seen it happen). Using sed
(or similar) might take some time with a big …
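One memory-safe approach is to stream the dump through sed and
drop the USE line. The file names and schema below are made up for
illustration:

```shell
# Create a tiny stand-in for a mysqldump file with a pinned schema.
printf 'USE `production`;\nCREATE TABLE t (id INT);\n' > /tmp/dump.sql

# sed streams line by line, so memory use stays constant no matter
# how large the dump is; drop the USE line entirely.
sed '/^USE /d' /tmp/dump.sql > /tmp/dump_noschema.sql

cat /tmp/dump_noschema.sql   # prints: CREATE TABLE t (id INT);
```

The filtered dump can then be loaded with the target schema chosen
at connect time, e.g. `mysql testschema < /tmp/dump_noschema.sql`.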
Applying binary logs to a MySQL instance is not particularly
difficult, using the mysqlbinlog
command line
utility:
$> mysqlbinlog mysql-bin.000003 > 03.sql
$> mysql < 03.sql
Turning off binary logging for a session is not difficult from
the MySQL command line, if you authenticate as a user with the
SUPER
privilege:
mysql> SET SESSION sql_log_bin=0;
However, sometimes you want to apply binary logs to a MySQL
instance, without having those changes applied to the binary logs
themselves. One option is to restart the server with binary
logging disabled and, after the load is finished, restart it with
binary logging re-enabled. This is not always possible or
desirable, so there’s a better way that works in at least
versions 4.1 and up:
The mysqlbinlog utility has the --disable-log-bin
option. All the option does is add the SET …
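Put together, the whole load can be done in one pipeline. The log
file name matches the earlier example; user and password are
placeholders:

```shell
# --disable-log-bin prepends a statement that turns off sql_log_bin
# for the session, so the replayed events are not written back into
# the destination server's binary log. The user still needs SUPER.
mysqlbinlog --disable-log-bin mysql-bin.000003 | mysql -u root -p
```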
rsnapshot is a filesystem snapshot utility for making backups of local and remote systems, based on rsync. Rather than doing a complete copy every time, it uses hardlinks to create incrementals (each of which, from a local perspective, is also a full backup). You can specify how long to keep old backups, and all the other usual jazz. You’d generally have it connect over ssh. You’ll want/need to run it on a filesystem that supports hardlinks, which precludes NTFS.
In the context of MySQL, you can’t just do a filesystem copy of your MySQL data/logs, that would be inconsistent and broken. (amazingly, I still see people insisting/arguing on this – but heck it’s your business/data to gamble with, right?)
Anyway, if you do a local mysqldump also, or for instance use …
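A minimal rsnapshot.conf fragment for a MySQL host might look like
this. Paths, host name and retention counts are made up, rsnapshot
requires tab-separated fields, and older versions use the keyword
"interval" instead of "retain":

```conf
# /etc/rsnapshot.conf fragment -- fields below are TAB-separated
snapshot_root	/.snapshots/
retain	daily	7
retain	weekly	4
# copy a directory of pre-made dumps from the remote host over ssh
backup	root@db1.example.com:/var/backups/mysql/	db1/
```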
Since writing and speaking a bit more about the “relax! a failure is not an emergency” concept, more and more people approach me with interesting horror stories. I’m scribbling a few backup-related ones here for your enjoyment - and naturally there are important lessons.
Story 1: A place makes backups that get shipped off-site, interstate even. One day a couple of files are lost, and so someone files a request to retrieve said files from the archive. Well, apparently that’s something that should be done as it creates some very stressed responses and a quoted timeline of a few weeks. In the end the issue is resolved through other means and the request stopped - unfortunate, since it would have been very interesting if the requested files would actually ever arrive… clearly a retrieval was not part of the expected process. One also wonders how long a full dataset retrieval would take, or if it’s even possible!
Sure, you've heard it before: [some company's logo] has a new MySQL backup tool that promises to solve all of your data recovery needs. The good news is most of these tools work pretty well. However, they tend to suffer from a similar set of limitations. Most require sophisticated infrastructures or complex setup and maintenance and can become a resource drain for some organizations. You're probably wondering, "Why can't someone build a fully automated MySQL backup solution that you can just turn on and forget?"
I am happy to say that the MySQL Developers at Sun are doing just that. In fact, a prototype will be demonstrated at the 2009 MySQL Users' Conference that will show the feasibility of a fully automated MySQL backup and recovery tool. It's being called the MySQL Time Machine and (with all due respect to all vendors with products of similar names) it allows you to recover your data using a datetime value. How cool is that? Even MySQL …