Displaying posts with tag: Replication
Binlog Encryption with Percona Server for MySQL

In this blog post, we’ll look at how to turn on binlog encryption in Percona Server for MySQL.

Why do I need this?

As you probably know, Percona Server for MySQL’s binlog contains sensitive information. Replication uses the binlog to copy events between servers: it contains all the changes made on one server so that they can be applied on another. In other words, anybody with access to a binlog effectively has access to all the data in the server. Moreover, such a person (or “hacker”) could create a clone of our server just by setting up a replica of it, provided they have access to our binlogs. This shows how important protecting a binlog really is – leakage of binlogs not only makes a particular table/tablespace or group of tables accessible to a hacker, but literally the whole server …
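
The rest of the post walks through the setup; as a minimal sketch of the server side (assuming Percona Server 5.7 with the keyring_file plugin – MySQL 8.0 later renamed this feature to binlog_encryption), my.cnf gains roughly:

[mysqld]
# A keyring plugin must be loaded early: binlog encryption needs a key store.
# keyring_file is the simplest option; keyring_vault also works.
early-plugin-load=keyring_file.so
keyring-file-data=/var/lib/mysql-keyring/keyring
# Percona Server 5.7 variable name for turning binlog encryption on
encrypt-binlog=ON

Only binlogs created after the restart are encrypted; files written before the change stay in cleartext until they are purged.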

[Read more]
Customizing dbdeployer


As of version 0.2.1, dbdeployer allows users to customize composite sandboxes more than ever. This is done by manipulating the default settings, which are used to deploy the sandbox templates.

In order to appreciate the customization capabilities, let's start with a vanilla deployment, and then have a look at the possible changes.

$ dbdeployer deploy replication 8.0.4
Installing and starting master
Database installed in $HOME/sandboxes/rsandbox_8_0_4/master
. sandbox server started
Installing and starting slave 1
Database installed in …
[Read more]
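
The customization itself goes through dbdeployer's defaults subcommands; a hedged sketch (subcommand and label names as documented for dbdeployer 0.2.x – verify with dbdeployer defaults --help on your version):

$ # Inspect the current defaults used to deploy the sandbox templates
$ dbdeployer defaults show
$ # Permanently change one setting, e.g. the base port for replication sandboxes
$ dbdeployer defaults update master-slave-base-port 15000
$ # The next deployment picks up the new default
$ dbdeployer deploy replication 8.0.4

If an experiment goes wrong, dbdeployer defaults reset restores the factory settings.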
This Week in Data with Colin Charles 30: Schedule for Percona Live, and Tracking Those Missing Features

Join Percona Chief Evangelist Colin Charles as he covers happenings, gives pointers and provides musings on the open source database community.

Have you registered for Percona Live already? The tutorial grid and the schedules for day 1 and day 2 are pretty amazing, and an extra track is being added, for a total of 10 parallel tracks on each of those days. If you submitted a talk and it didn’t get accepted (competition was high), you should have received a discount code to register for the event.

I plan to write more dedicated blog posts around M|18 and the MariaDB Developer’s …

[Read more]
Using MySQL 8.0: what to expect

MySQL 8.0 will be GA soon (just my assumption: Oracle doesn't tell me anything about its release plans) and it's time to think about having a look at it.
If this is your first try of MySQL 8, be prepared for several impactful differences from previous versions.

In this article I won't tell you what you can do with MySQL 8: there is plenty of material about this, including in this very blog. I will instead concentrate on differences from previous versions that users need to know if they want to avoid surprises.

Data Directory

Let's start with an observation of the data directory.
After a standard installation, without any additional options, I see the following:

Files that I expected to see

auto.cnf
ib_buffer_pool
ib_logfile0
ib_logfile1
ibdata1
ibtmp1
(dir) mysql
(dir) …
[Read more]
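
For orientation, relative to 5.7 an 8.0 data directory adds entries like the following (a sketch from general MySQL 8.0 knowledge, not taken from the truncated listing above; exact names vary by version):

binlog.000001, binlog.index   # the binary log is enabled by default in 8.0
mysql.ibd                     # the new transactional data dictionary, stored in InnoDB
undo_001, undo_002            # undo logs moved out of ibdata1 into their own tablespaces

Conspicuously absent are the .frm files: table definitions now live in the data dictionary inside mysql.ibd.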
Replication handler error HA_ERR_KEY_NOT_FOUND and Memory Tables In Replication.

 Last_SQL_Error: Could not execute <Update/insert/delete>_rows event on table <dbname>.<tablename>; Can’t find record in ‘<tablename>’, Error_code: 1032; handler error HA_ERR_END_OF_FILE; the event’s master log <master binary log file name>, end_log_pos <master log file position>

This is one of the common errors in MySQL replication, raised when the MASTER and the SLAVE replica are not in sync. Replication can get out of sync for various reasons, but the usual cause is that a particular row or piece of data is missing or mismatched on the slave while it applies a transaction received from the MASTER. The transaction can be an insert, update or delete.

In this blog post we will discuss support for MEMORY storage engine tables in MySQL replication, along with a few examples and possible solutions. As per the official MySQL DOC, …
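
To make the failure mode concrete, here is a hypothetical row-based replication scenario (all names made up):

-- On the master: a MEMORY table, replicated normally
CREATE TABLE test.mem_t (id INT PRIMARY KEY, v VARCHAR(20)) ENGINE=MEMORY;
INSERT INTO test.mem_t VALUES (1, 'a');
-- Now the slave is restarted. MEMORY tables lose all rows on restart,
-- so the slave's copy of mem_t is empty while the master still has row 1.
-- Back on the master:
UPDATE test.mem_t SET v = 'b' WHERE id = 1;
-- The slave cannot find the row for this row event and halts with
-- Error_code: 1032 (handler error HA_ERR_END_OF_FILE / HA_ERR_KEY_NOT_FOUND).

One possible remedy is to re-seed the slave's copy of the table (or, with care, skip the offending event via sql_slave_skip_counter) and then START SLAVE again.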

[Read more]
RDS Aurora MySQL and Service Interruptions

In the Amazon space, any EC2 or service instance can “disappear” at any time.  Depending on which service is affected, the instance will be automatically restarted.  In EC2 you can choose whether an interrupted instance is restarted or left shut down.

For Aurora, an interrupted instance is always restarted. Makes sense.

The restart timing, and other consequences during the process, are noted in our post on Aurora Failovers.

Aurora Testing Limitations

As mentioned earlier, we love testing “uncontrolled” failovers.  That is, we want to be able to pull any plug on any service, and see that the environment as a whole continues to do its job.  We can’t do that with Aurora, because we can’t control the essentials:

  • power button;
  • reset switch;
[Read more]
Meet dbdeployer: the new sandbox maker


How it happened

A few years ago I started thinking about refactoring MySQL-Sandbox. I got lots of ideas and a name for the project (dbdeployer) but went no further. The initial idea (this was 2013!) was to rewrite the project in Ruby: I had been using Ruby at work and it looked like a decent replacement for Perl. My main problem was the difficulty of installation in an uncontrolled environment. If you have control over your environment (it's your laptop, or you are in charge of the server configuration via Puppet or similar) then the task is easy. But if you ever need to deploy somewhere with little or no notice, it becomes a problem: there are servers where Perl is not installed, and it is common that the server also has a policy forbidding scripting languages from being deployed. …

[Read more]
RDS Aurora MySQL Failover

Right now Aurora only allows a single master, with up to 15 read-only replicas.

Master/Replica Failover

We love testing failure scenarios; however, our options for such tests with Aurora are limited (we might get back to that later).  Anyhow, we told the system, through the RDS Aurora dashboard, to do a failover. These were our observations:

Role Change Method

Both master and replica instances are actually restarted (the MySQL uptime resets to 0).

This is quite unusual these days: we can do a fully controlled role change in classic asynchronous replication without a restart (CHANGE MASTER TO …), and Galera doesn’t have read/write roles as such (all instances are technically writers), so it doesn’t need role changes at all.
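
For repeatable tests, the same failover can be triggered from the command line rather than the dashboard; a minimal sketch using the AWS CLI (cluster and instance identifiers are hypothetical):

$ # Promote a specific Aurora replica to be the new writer
$ aws rds failover-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --target-db-instance-identifier my-aurora-replica-1

If --target-db-instance-identifier is omitted, Aurora chooses a replica itself, based on the instances’ promotion tiers.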

Failover Timing

Failover between running instances takes about 30 seconds.  This is in line with information provided in the …

[Read more]
Exploring Amazon RDS Aurora: replica writes and cache chilling

Our clients operate on a variety of platforms, and RDS (Amazon Relational Database Service) Aurora has received quite a bit of attention in recent times. On behalf of our clients, we look beyond the marketing and see what the technical architecture actually delivers.  We will address specific topics in individual posts; this time we check out what the Aurora architecture means for write and caching behaviour (and thus performance).

What is RDS Aurora?

First of all, let’s declare the baseline.  MySQL Aurora is not a completely new RDBMS. It comprises a set of Amazon modifications on top of stock Oracle MySQL 5.6 and 5.7, implementing a different replication mechanism and some other changes/additions.  While we have some information (for instance from the “deep dive” by AWS VP Anurag Gupta), the source code of the Aurora modifications …

[Read more]
Archiving MySQL Tables in ClickHouse

In this blog post, I will talk about archiving MySQL tables in ClickHouse for storage and analytics.

Why Archive?

Hard drives are cheap nowadays, but storing lots of data in MySQL is not practical and can cause all sorts of performance bottlenecks. To name just a few issues:

  1. The larger the table and index, the slower the performance of all operations (both writes and reads)
  2. Backup and restore for terabytes of data is more challenging, and if we need to have redundancy (replication slave, clustering, etc.) we will have to store all the data N times

The answer is archiving old data. Archiving does not necessarily mean that the data will be permanently removed. Instead, the archived data can be placed into long-term storage (e.g., AWS S3) or loaded into a …
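
One common pipeline for this kind of archiving (a sketch with hypothetical database/table names, assuming a ClickHouse table with a matching schema already exists) streams rows from the mysql client straight into clickhouse-client:

$ # Dump rows as tab-separated values and load them into ClickHouse
$ mysql --quick -N -B -e "SELECT id, created_at, payload FROM mydb.events WHERE created_at < '2017-01-01'" \
    | clickhouse-client --query="INSERT INTO archive.events FORMAT TabSeparated"

Here --quick avoids client-side buffering, -N suppresses column headers, -B produces tab-separated batch output, and FORMAT TabSeparated tells ClickHouse how to parse the incoming stream.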

[Read more]