It's now safe to back up InnoDB with mysqldump
Before MySQL 5.6, running mysqldump to make a backup of InnoDB tables could cause your backup to 'lose' some data. The problem is described in our manual here.
In the latest MySQL 5.6 this is no longer a problem, which means you no longer risk 'losing' data when using mysqldump together with DDL statements on your InnoDB tables. If you are interested in metadata locking (MDL) you can read more about MDL here.
To test this we need to create a few tables and also look at the order in which mysqldump processes tables.
mysql> CREATE DATABASE ted;
mysql> USE ted;
mysql> CREATE TABLE `a` (`i` int(11) DEFAULT NULL) ENGINE=InnoDB;
mysql> CREATE …
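The excerpt is cut off above, but as a rough sketch of the kind of test being described, you could take the dump with a consistent snapshot in one session while running DDL in another (the column and file names below are placeholders, not taken from the original post):

shell> mysqldump --single-transaction --databases ted > ted_backup.sql

mysql> -- in a second session, while the dump is still running
mysql> ALTER TABLE ted.a ADD COLUMN j INT;

On 5.6, thanks to the metadata-locking improvements, concurrent DDL like this no longer causes the dump to lose the table's data.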
MySQL Enterprise Backup (MEB) was born 3 years ago as a newly branded avatar of InnoDB Hot Backup. I wanted to share what has gone on so far, how we at Oracle think about backup, the milestones we have achieved, and the road ahead. The idea for this blog came to me after looking at Mikael's latest blog. While Mikael talks about MySQL, I want to talk about MEB.
When we started with InnoDB Hot Backup, the first challenge was to have it adhere to the development, quality and release processes for MySQL. This meant creating a quality plan, getting it into the development trees of MySQL and ensuring that each piece of new code went through architecture and code review. Though the initial implementer and architect of Hot Backup continues to work with the MEB team, there was a host of new engineers to be trained. We also …
[Read more]
We just released our first alpha of Percona XtraBackup 2.1 for
MySQL and with it we included the ability to encrypt backups
on the fly (full documentation here). This feature is
different from simply piping the backup stream through the
openssl or gpg binaries, which is what some people have used in
the past. A big benefit of using the built-in encryption is
that multiple CPU cores can be used for encryption (with
the --encrypt-threads
option). You can
also combine compression and encryption, each using multiple CPU
cores.
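As a rough sketch of what an encrypted, compressed backup might look like on the command line (the key file, thread counts and destination directory are placeholders; see the XtraBackup 2.1 documentation linked above for the authoritative options):

shell> innobackupex --encrypt=AES256 --encrypt-key-file=/data/backup.key \
         --encrypt-threads=4 --compress --compress-threads=4 /backups/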
One advantage of …
[Read more]
Hi guys, in early February Oracle released the new version of MySQL, 5.6. One of the enhancements is GTID (Global Transaction ID).
A GTID is a unique identifier which is added to each transaction, and it is very useful on the slave: remember that before we needed to set MASTER_LOG_FILE and MASTER_LOG_POS; now we don't need them anymore.
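For example, with GTIDs a slave can be pointed at its master without any log file or position, using MASTER_AUTO_POSITION (the host and credentials below are placeholders):

mysql> CHANGE MASTER TO
    ->   MASTER_HOST='master.example.com',
    ->   MASTER_USER='repl',
    ->   MASTER_PASSWORD='repl_password',
    ->   MASTER_AUTO_POSITION=1;
mysql> START SLAVE;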
Let’s see some new variables which we need to add to our cnf
file:
gtid-mode : It will enable GTID. In order for this to work, we also need to turn on log-bin and log-slave-updates.
enforce-gtid-consistency : It will guarantee that only allowed statements will be executed (more information here).
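Putting that together, a minimal my.cnf fragment might look like the following (the binary log base name is just a placeholder):

[mysqld]
log-bin = mysql-bin
log-slave-updates
gtid-mode = ON
enforce-gtid-consistency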
Basically, this is all we need to enable GTID. For this tutorial I will …
[Read more]
How do you implement a parallel algorithm for software whose output needs to be streamed to tape?
How do you ensure that you can tune the level of parallelism for varying input and output devices and varying levels of load?
These were some of the questions that we needed to answer when we
were trying to implement multi-threading capability for MySQL
Enterprise Backup (MEB).
The trivial way of achieving parallelism is to have multiple threads pick up different files (in a file-per-table scenario). But this did not seem adequate because:
a) The sizes of these files (corresponding to the tables) could differ, and one large file would then limit the level of parallelism, since it would be processed by a single thread.
b) If you have to stream the backup, how do you reconcile multiple files being streamed by separate threads? Large backups are streamed directly to tape, so it is …
The main goal of the MySQL Enterprise Backup 3.8.1 release was to support the MySQL 5.6 server. But beyond that primary goal, the MEB team also added some valuable new options and features to ensure you'll get the most from the new 5.6 features as well. At a glance, here are some of the highlights:
MEB copy of InnoDB undo log tablespaces
MySQL 5.6 introduces a new feature that stores undo logs in separate files, called undo tablespaces, for improved performance. These undo tablespaces are logically part of the system tablespace. All the commands associated with MEB - "backup", "apply-log" and "copy-back" - now take care of the undo tablespaces in the same way as they process the system tablespace. MEB now supports the innodb_undo_directory, innodb_undo_logs and innodb_undo_tablespaces option variables. When a backup is executed, the undo datafiles (up to the number specified by innodb_undo_tablespaces) are stored in the same directory as the datafiles of the system tablespace. During …
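As a quick sketch of how this looks in practice, the usual backup and apply-log runs (connection options and directories below are placeholders) now pick up the undo tablespaces automatically, with no extra flags needed:

shell> mysqlbackup --user=root --password --backup-dir=/backups/full backup
shell> mysqlbackup --backup-dir=/backups/full apply-log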
[Read more]
Hi guys, today let's learn how to take a consistent backup (snapshot).
First of all, in what situations do we use a snapshot?
1. Let's say that your production server will now have a replica; how do you do the first load of data into this slave? What was the master binlog position when you started the backup? During the backup process, did anyone write any query to the db?
2. In case you want to implement an incremental backup strategy, you can take a snapshot once a week, and if you need to restore your server, you just restore the snapshot and apply the binary logs.
Then, let’s start.
To guarantee this data integrity we will need 2 sessions open on the master: the first one to lock all databases, the second one to do the copy.
Then, let’s go:
Session 1:
FLUSH TABLES WITH READ LOCK; SHOW MASTER STATUS;
You will receive an output like this:
…[Read more]
If you need to automate backups, you might wonder about the different techniques available to you.
With regards to scheduling backups using built-in features of
MySQL, you have two main options:
- Either run mysqldump (or mysqlbackup if you have an Enterprise licence) from an operating system scheduler, for example in Linux using "cron" or in Windows using the "Task Scheduler". This is the most commonly used option.
- Alternatively, use the Event Scheduler to perform a series of SELECT ... INTO OUTFILE ... commands, one for each table you need to back up. This is a less commonly used option, but you might still find it useful.
Scheduling mysqlbackup with cron
mysqldump is a client program, so when you run it, you run it from a shell script or at a terminal, rather than inside a MySQL statement. The following command backs up the sakila …
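The actual dump command is cut off above; purely as an illustration, a nightly crontab entry for mysqldump might look something like this (the schedule, credentials and paths are all placeholders):

# m h dom mon dow   command
30 2 * * * mysqldump --user=backup --password=secret --single-transaction sakila > /backups/sakila-$(date +\%F).sql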
[Read more]
How To Back Up MySQL Databases With mylvmbackup On Ubuntu 12.10
mylvmbackup is a Perl script for quickly creating MySQL backups. It uses LVM's snapshot feature to do so. To perform a backup, mylvmbackup obtains a read lock on all tables and flushes all server caches to disk, creates a snapshot of the volume containing the MySQL data directory, and unlocks the tables again. This article shows how to use it on an Ubuntu 12.10 server.
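As a sketch of a typical run (the volume group and logical volume names are illustrative guesses; the real values come from your LVM layout or /etc/mylvmbackup.conf):

shell> mylvmbackup --user=root --password=yourrootsqlpassword \
         --vgname=vg0 --lvname=mysql --backuptype=tar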
Let's say you have a database that stores not only current transactional data, but also historic data that's unchanging. In particular, you have a large table containing hundreds of gigabytes worth of last year's data, and it won't change. Having backed it up already, you don't need to back it up every time. Is there any way to exclude this table from a backup?
For InnoDB tables with innodb-file-per-table enabled (the default as of MySQL 5.6), MySQL Enterprise Backup supports this feature in inverse. Specifically, you can choose to include specific innodb-file-per-table tables in addition to those stored in the system tablespace.
In order to exclude a specific table, you need to provide a regular expression to the --include option that matches every table except the one you want to exclude. For example, in my sakila …
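The concrete example is truncated above; as an illustration only (the table names and the exact regular-expression syntax accepted by --include are assumptions to check against the MEB manual), explicitly listing every table you do want might look like:

shell> mysqlbackup --include='sakila\.(actor|address|film|rental)' \
         --backup-dir=/backups/sakila backup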
[Read more]