I am pleased to announce that the MySQL Database 5.6.4
development milestone release ("DMR") is now available for
download (select the Development Release
tab). MySQL 5.6.4 includes all 5.5 production-ready features and
provides an aggregation of all the new features that have been
released in earlier 5.6 DMRs. 5.6.4 adds many bug fixes
and more new "early and often" enhancements that are development
and system QA complete and ready for Community evaluation and
feedback. You can get the complete rundown of all the new
5.6.4-specific features here.
For those following the progression of the 5.6 DMRs as the trains
leave the station, you should bookmark these MySQL …
MySQL::Sandbox 3.0.24 was released yesterday, with many new features.
More than vanilla MySQL
If you have missed my previous announcement, here's the gist of it. MySQL Sandbox can now deal with tarballs from either Percona Server or MariaDB. The main difference after this change is that you can now create a directory called <PREFIX>5.5.16 and make_sandbox will recognize it as well as the plain 5.5.16.
$ make_sandbox --export_binaries --add_prefix=ps \
Percona-Server-5.5.11-rel20.2-114.Darwin.i386.tar.gz \
-- --sandbox_directory=msb_ps5_5_11
unpacking Percona-Server-5.5.11-rel20.2-114.Darwin.i386.tar.gz
[…]
installing with the following …
In previous posts I described how row conflicts are detected
using epochs. In this post I describe how they are handled.
Row-based conflict handling with NDB$EPOCH
Once a row conflict is detected, in addition to rejecting the row
change, row-based conflict handling in the Slave will:
- Increment conflict counters
- Optionally insert a row into an exceptions table
For NDB$EPOCH, conflict detection and handling operates on one Cluster in an Active-Active pair designated as the Primary. When a Slave MySQLD attached to the Primary Cluster detects a conflict between data stored in the Primary and a replicated event …
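The handling steps listed above can be sketched in a few lines of Python. This is purely illustrative (not NDB code), and the names `ConflictHandler`, `handle_conflict` and the in-memory `exceptions` list standing in for the exceptions table are all made up for the example:

```python
# Illustrative sketch of row-based conflict handling: on a detected
# conflict the Slave rejects the row change, increments a conflict
# counter, and optionally records the rejected row in an exceptions table.
from dataclasses import dataclass, field

@dataclass
class ConflictHandler:
    conflict_count: int = 0
    exceptions: list = field(default_factory=list)  # stands in for the exceptions table
    log_exceptions: bool = True

    def handle_conflict(self, table, key, rejected_values):
        self.conflict_count += 1                 # increment conflict counters
        if self.log_exceptions:                  # optionally insert a row into an exceptions table
            self.exceptions.append((table, key, rejected_values))
        return False                             # the conflicting row change is rejected

handler = ConflictHandler()
applied = handler.handle_conflict("t1", 42, {"col": "secondary_value"})
print(applied, handler.conflict_count, len(handler.exceptions))  # False 1 1
```

The point of the exceptions table is that an application (or operator) can later inspect the rejected changes and re-align the two clusters if needed.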
[Read more]
When an open source project becomes popular, bug reports start flocking in. This is both good and bad news for the project developers. The good news is that someone is using the product, and they are finding ways of breaking it that we didn't think of. The bad news is that most of the time the reporters assume that the developers have superhuman powers, and that they will find what's wrong at the mere mention that a given feature is not working as expected. Unfortunately, it doesn't work that way. An effective bug report should have enough information that the ones in charge will be able to reproduce the problem and examine it in lab conditions. When dealing with databases and database tools, there are several cases, from simple to complex. Let's cover them in order.
Installation issues
This is often a straightforward case of lack of functionality. When a tool does not install what it is supposed to, it is a show stopper, and a solution …
[Read more]
The last post described MySQL Cluster epochs and why
they provide a good basis for conflict detection, with a few
enhancements required. This post describes the
enhancements.
The following four mechanisms are required to implement conflict
detection via epochs:
- Slaves should 'reflect' information about replicated epochs
they have applied
Applied epoch numbers should be included in the Slave Binlog events returning to the originating cluster, in a Binlog position corresponding to the commit time of the replicated epoch …
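The 'reflection' mechanism above can be boiled down to a one-line comparison. The following is a rough Python illustration of the idea, not the actual NDB implementation; the function and parameter names are mine:

```python
# Illustrative sketch: the Primary treats an incoming replicated row
# change as conflicting if the row was modified locally in an epoch that
# the Secondary has not yet reflected back as applied.

def in_conflict(row_last_epoch, max_replicated_epoch):
    """row_last_epoch: epoch in which the Primary last changed this row.
    max_replicated_epoch: highest Primary epoch that the Slave has
    reflected back as applied (read from the returning Binlog)."""
    return row_last_epoch > max_replicated_epoch

print(in_conflict(105, 100))  # local change the Secondary has not seen yet -> True (conflict)
print(in_conflict(95, 100))   # Secondary already applied that epoch -> False (no conflict)
```

Because applied epochs are recorded at Binlog positions corresponding to their commit time, `max_replicated_epoch` advances monotonically as the Slave applies events, which is what makes this simple comparison safe.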
Before getting to the details of how eventual consistency is
implemented, we need to look at epochs. Ndb Cluster maintains an
internal distributed logical clock known as the epoch,
represented as a 64-bit number. This epoch serves a number of
internal functions, and is atomically advanced across all data
nodes.
Epochs and consistent distributed state
Ndb is a parallel database, with multiple internal transaction
coordinator components starting, executing and committing
transactions against rows stored in different data nodes.
Concurrent transactions only interact where they attempt to lock
the same row. This design minimises unnecessary system-wide …
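To make the 64-bit epoch concrete, here is a small sketch. In NDB the upper 32 bits carry the global checkpoint id (GCI) and the lower 32 bits a sub-counter; treat the exact bit split here as illustrative, and the helper names as mine:

```python
# Sketch of a 64-bit epoch composed of two 32-bit halves:
# the upper half a global checkpoint id, the lower half a sub-counter
# that advances between global checkpoints.

def make_epoch(gci, micro):
    return (gci << 32) | micro

def split_epoch(epoch):
    return epoch >> 32, epoch & 0xFFFFFFFF

e = make_epoch(7, 3)
print(e, split_epoch(e))  # 30064771075 (7, 3)
```

Since epochs are atomically advanced across all data nodes, comparing two epoch numbers gives a cluster-wide ordering of committed state, which is exactly what the conflict detection described in the later posts relies on.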
For the impatient ones, or ones that prefer code to narrative, go here. This is long overdue anyway, and Yoshinori already beat me, hehe…
Our database environment is quite busy – there are millions of row changes a second, millions of I/O operations a second, and the impact of that can be felt at each shard. Especially as we also have to replicate to other datacenters, single-threaded replication in MySQL becomes a real bottleneck.
We use multiple methods to understand and analyze replication lag composition – a simple replication thread state sampling via MySQL processlist helps to understand logical workload components (and work in that field yields great results), and pstack/GDB …
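A toy version of that state-sampling idea: repeatedly record the replication SQL thread's state (as reported by SHOW PROCESSLIST) and tally the samples to see where replication time goes. The sample values below are made up for illustration:

```python
# Aggregate sampled thread states into a rough time breakdown.
# In practice the samples would come from polling SHOW PROCESSLIST;
# here they are hard-coded.
from collections import Counter

samples = [
    "Reading event from the relay log",
    "System lock",
    "Updating",
    "Updating",
    "Waiting for relay log space",
    "Updating",
]

breakdown = Counter(samples)
for state, n in breakdown.most_common():
    print(f"{state}: {n / len(samples):.0%}")
```

With enough samples, the dominant states approximate the fraction of wall-clock time the SQL thread spends in each kind of work, which is what makes this cheap technique useful for understanding lag composition.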
[Read more]
I was at an event recently and the topic of replication stirred
the curiosity of the audience. A few audience members had Master
to Master setups but wanted to move away from them. Others had
multiple slaves but wanted little downtime and backend work if a
master failed. Relayed replication, or a database chain, is an
option that solves some of these issues. Another option would be
Cluster, but that depends on your infrastructure, budget and
application; some of them are looking into this as well. Is
a chain the best solution for everyone? Of course not. While I
cannot do consulting work to help them, I can blog about it…
A relayed replication environment allows you to, of course, have replicated databases but also have a replication environment that is still available if or when the master …
[Read more]
What do cruise ship management software and data warehouses have
in common? One answer: they both depend on
intermittent data replication. Large vessels collect data
to share with a home base whenever connectivity permits. If
there is no connection, they just wait until later. Data
warehouses also do not replicate constantly. Instead, it is
often far faster to pool updates and load them in a single
humongous batch using SQL COPY commands or native loaders.
Replicating updates in this way is sometimes known as
batch replication. Tungsten Replicator supports it quite
easily.
To illustrate we will consider a Tungsten master/slave
configuration. (Sample setup instructions …
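The pooling idea behind batch replication can be sketched in a few lines. This is not Tungsten code; the `BatchApplier` class and its methods are invented for illustration, and the flush counter stands in for what would really be a bulk SQL COPY or native load:

```python
# Sketch of batch replication: instead of applying each row change as it
# arrives, pool changes and flush them in one bulk operation when the
# batch fills (or at a final checkpoint).

class BatchApplier:
    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.pending = []
        self.flushes = 0

    def apply(self, row):
        self.pending.append(row)
        if len(self.pending) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.pending:
            self.flushes += 1    # one bulk load instead of many single-row writes
            self.pending.clear()

applier = BatchApplier()
for i in range(7):
    applier.apply(i)
applier.flush()                   # flush the final partial batch
print(applier.flushes)            # 3
```

Seven row changes with a batch size of three produce only three bulk loads, which is the whole appeal: loaders amortize their per-statement overhead over many rows.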
Working with replication, you come across many topologies, some of them sound and established, some of them less so, and some of them still in the realm of the hopeless wishes. I have been working with replication for almost 10 years now, and my wish list grew quite big during this time. In the last 12 months, though, while working at Continuent, some of the topologies that I wanted to work with have moved from the cloud of wishful thinking to the firm land of things that happen. My quest for star replication starts with the most common topology. One master, many slaves.
Fig 1. Master/Slave topology
It looks like a star, with the rays extending from the master to the slaves. This is the basis of most of the replication going on mostly everywhere nowadays, and it has few surprises. Setting aside the …
[Read more]