Displaying posts with tag: Percona Software
Importance of Delay Replica in Data Recovery: Recover Deleted Records

What is a Delay Replica and how does it help?

MySQL Replication is useful, easy to set up, and used for many different purposes. For example:

  • splitting reads and writes
  • running data mining or reporting processes on a replica
  • disaster recovery

To learn more, check out How Does MySQL Replication Work?

It’s important to mention that a replication server is not a backup by itself. A mistake on the source, for example, a DROP DATABASE or an UPDATE without a WHERE clause, will be replicated nearly instantly to all replica servers. Simply having a replica is not going to be helpful here. How can we avoid that kind of mistake? By having a replica server that intentionally lags behind.
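
For context, a delayed replica needs only one extra replication option on the replica. A minimal sketch, assuming MySQL 8.0.23+ syntax and an illustrative one-hour delay (older versions use CHANGE MASTER TO MASTER_DELAY instead):

-- On the replica: hold back applying events until one hour after the source executed them.
STOP REPLICA SQL_THREAD;
CHANGE REPLICATION SOURCE TO SOURCE_DELAY = 3600;
START REPLICA SQL_THREAD;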

We can never avoid human error in database infrastructure operations. But …

[Read more]
Percona’s MySQL Audit Log Plugin – An Enterprise Feature at a Community Price

The need to audit who is using a database and what they are doing is crucial in many environments. This functionality is available in the free, open source Audit Log Plugin that is included in Percona Server for MySQL.

The prerequisite for installing Percona’s Audit Log Plugin is to first install Percona Server for MySQL. Percona Server for MySQL is a drop-in replacement for MySQL.

The Audit Log Plugin itself is easy to install, requiring only a shared object library to be loaded from a MySQL client program with INSTALL PLUGIN audit_log SONAME 'audit_log.so';.

stoker@testbox:~/Downloads$ mysql -u root -p mysql
Enter password:
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL …
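
Once connected, loading and verifying the plugin takes only a couple of statements. A minimal sketch (the variables shown belong to Percona Server’s audit_log plugin; the policy value chosen here is purely illustrative):

INSTALL PLUGIN audit_log SONAME 'audit_log.so';
SHOW PLUGINS;                               -- audit_log should now be listed as ACTIVE
SHOW GLOBAL VARIABLES LIKE 'audit_log%';    -- log file, format, policy, rotation settings
SET GLOBAL audit_log_policy = 'LOGINS';     -- e.g., record connection events only
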
[Read more]
MySQL Data Archival With Minimal Disruption

We all know that data is important, and some businesses need historical data to be available all the time. The problem is that queries on large tables perform poorly if they are not properly optimized. We get many customer requests in Managed Services to purge/archive large tables, and to achieve it, we use pt-archiver.

Recently, we received a request to archive a large table, and the customer was worried about the downtime and performance issues during the archival.

We proposed a solution to the customer to archive the table using pt-archiver. The idea is to archive old data to other tables and keep the latest data in the current table with minimal performance impact. All of the data will remain available and can be queried anytime.
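
As an illustration of that approach, a pt-archiver invocation of roughly this shape copies old rows into an archive table in small, committed batches (host, schema, table names, and the WHERE condition are hypothetical):

pt-archiver \
  --source h=localhost,D=mydb,t=orders \
  --dest D=mydb_archive,t=orders_archive \
  --where "created_at < '2022-01-01'" \
  --limit 1000 --commit-each --progress 10000 --statistics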

In this blog, I will …

[Read more]
PMM, Federated Tables, Table Stats, and Lots of Connections!

Earlier in the year, I was working on an issue where one of my clients had reported a massive influx of connections on their hosts after enabling Percona Monitoring and Management (PMM). This was something I had not seen before, and after researching for a couple of days, I discovered that if you monitor a MySQL instance with PMM configured to collect table statistics, and the tables it gathers statistics from are Federated, it will generate a connection on the remote host, one for each Federated table in the instance. Let’s go over the details and provide some examples so we can understand this a bit better.

First, I’ll offer a reminder that a Federated table is simply a table that you can put in your MySQL instance that is empty locally and uses a network connection to get the data from …
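
For reference, a Federated table definition looks roughly like this (server, schema, and credentials are purely illustrative, and the FEDERATED engine has to be enabled on the local server); every query against the local table opens a connection to the host named in the CONNECTION string:

CREATE TABLE remote_orders (
  id INT NOT NULL,
  amount DECIMAL(10,2),
  PRIMARY KEY (id)
) ENGINE=FEDERATED
  CONNECTION='mysql://app_user:app_pass@remote-host:3306/shop/orders';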

[Read more]
Online DDL Tools and Metadata Locks

One thing I commonly hear when working with my clients is “I want to change my DDL strategy in order to avoid locking in my database! The last time I used the same old method I ended up in a metadata lock situation!”

I agree that metadata locks can be painful, but unfortunately, they are completely unavoidable, and switching from one tool to another won’t help with this. That said, it’s still worth examining how metadata locks work and what the impact is for each of the common tools and processes. In doing so, we will see that all of these tools require metadata locks, but knowing more about how they work and how they use locking can help us determine the right tool for your specific use case.

Any time you make a change to a table, a metadata lock is needed to ensure consistency between the table itself and MySQL’s data dictionary. In order for MySQL to establish this lock, it has to wait for any query against the table in …
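
To see this in practice, here is a sketch of the classic blocking scenario (table and column names are hypothetical; performance_schema.metadata_locks requires the mdl instrument, which is enabled by default in MySQL 8.0):

-- Session 1: an open transaction holds a shared metadata lock on t until it commits.
START TRANSACTION;
SELECT * FROM t LIMIT 1;

-- Session 2: this ALTER now waits, showing "Waiting for table metadata lock" in SHOW PROCESSLIST.
ALTER TABLE t ADD COLUMN c INT;

-- Session 3: inspect holders and waiters.
SELECT object_schema, object_name, lock_type, lock_status
  FROM performance_schema.metadata_locks
 WHERE object_type = 'TABLE';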

[Read more]
Making Your MySQL Backup Process up to 17X Faster – Introducing Percona XtraBackup Smart Memory Estimation

Taking a MySQL backup using Percona XtraBackup (PXB) consists of basically two steps: 1) take the backup and 2) prepare the backup.

Briefly speaking, taking a backup means that PXB will copy all of the files from your instance and transfer them to another location. While it does the copy, it spawns a thread that will monitor the InnoDB redo log (WAL/transaction log) and store a copy of all the new redo log entries generated by the server during the backup.

Before restoring the backup into a new instance, users have to prepare the backup. This operation is the same as the crash recovery steps that the MySQL server does after a server crash.
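
In command form, the two steps look roughly like this (the target directory and memory size are illustrative):

# Step 1: take the backup (copy the datafiles while streaming new redo log entries).
xtrabackup --backup --target-dir=/backups/full

# Step 2: prepare it, i.e., run crash recovery against the copied files.
xtrabackup --prepare --target-dir=/backups/full --use-memory=2G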

It consists of reading all the redo log entries into memory, categorizing them by space id and page id, reading the relevant pages into memory, and checking the LSN number on the page and on …

[Read more]
How To Get Your Backup to Half of Its Size – Introducing ZSTD Support in Percona XtraBackup

Having a backup of your database is like insurance: you have to pay a monthly price to ensure the service is available when you need it. When talking about backups, the storage required to keep them is the main factor in that price; the bigger your backup, or the longer the retention period, the more it will cost.

Compressing your backups is a common practice to reduce this cost. Currently, Percona XtraBackup (PXB) has support for two compression algorithms: quicklz (which is an abandoned project and will soon be deprecated in PXB) and LZ4.

Today we are glad to introduce support for a new compression algorithm in Percona XtraBackup 8.0.30 – …
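
In practice, enabling it is a single option. A sketch, assuming Percona XtraBackup 8.0.30 or later (target directory and thread count are illustrative):

xtrabackup --backup --compress=zstd --compress-threads=4 \
  --target-dir=/backups/full_zstd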

[Read more]
ProxySQL Support for MySQL caching_sha2_password

Every day we use dozens, if not hundreds, of applications that connect to some kind of data repository. This simple step is normally executed over the network and, as such, is subject to possible sniffing, with all the related consequences.

Given that, it is normally better to protect your connection using data encryption such as SSL or, at a minimum, to make the information you pass when connecting harder to intercept.

At the same time, it is a best practice to not store connection credentials in clear text, not even inside a table in your database. Doing that is the equivalent of writing your password on a sticky note on your desk. Not a good idea.

Instead, the main options are either transforming the passwords to be less identifiable via hashing or storing the information in an external centralized vault.

In MySQL, the passwords are transformed so they are not stored as clear text, and several different plugins …
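
As a quick illustration of that transformation (the account name and password are hypothetical), creating a user with caching_sha2_password, the default authentication plugin in MySQL 8.0, stores only a hashed authentication string:

CREATE USER 'app'@'%' IDENTIFIED WITH caching_sha2_password BY 'S3cretPass!';
SELECT user, host, plugin, authentication_string
  FROM mysql.user WHERE user = 'app';   -- no clear-text password is stored anywhere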

[Read more]
MySQL: Using UNION, INTERSECT, & EXCEPT

MySQL 8.0.31 added INTERSECT and EXCEPT to augment the long-lived UNION operator. That is the good news. The bad news is that you have to be careful using the EXCEPT operator as there is a trick.

Let’s start with some simple tables and load some simple data.

SQL > create table a (id int, nbr int);
Query OK, 0 rows affected (0.0180 sec)
SQL > create table b (id int, nbr int);
Query OK, 0 rows affected (0.0199 sec)
SQL > insert into a (id,nbr) values (1,10),(3,30),(5,50),(7,70);
Query OK, 4 rows affected (0.0076 sec)

Records: 4  Duplicates: 0  Warnings: 0
SQL > insert into b (id,nbr) values (1,10),(2,20),(3,30),(4,40);
Query OK, 4 rows affected (0.0159 sec)

Records: 4  Duplicates: 0  Warnings: 0

So each table has four rows of data with two rows – (1,10) and (3,30) – appearing in both.

SQL > select id,nbr from a;
+----+-----+
| id | nbr |
+----+-----+
|  1 |  10 |
|  3 |  30 |
|  5 |  50 |
|  7 |  70 | …
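
Against that data, the basic behavior of the two new operators looks like this (a quick sketch, requiring MySQL 8.0.31 or later; it does not touch the EXCEPT caveat the post goes on to describe):

SQL > select id, nbr from a intersect select id, nbr from b;   -- rows in both: (1,10) and (3,30)
SQL > select id, nbr from a except select id, nbr from b;      -- rows only in a: (5,50) and (7,70)
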
[Read more]
pt-archiver Misbehaving With Secondary Index

Not long ago, we found a long-running query at one of our Managed Services clients. While reviewing this long-running query, we found it was generated by a pt-archiver operation. That doesn’t sound good, right?

I was able to reproduce the issue in my test environment. In this article, I will show you how to use pt-archiver to purge data from a table using a secondary index. First, we need to have the full picture before going any deeper. Here is a representation of the table structure:

Create Table: CREATE TABLE `sbtest1` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `k` int(11) NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=4000001 DEFAULT …
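
For reference, a purge driven off the secondary index k_1 can be expressed roughly like this (connection details and the WHERE condition are hypothetical; the i= part of the DSN tells pt-archiver which index to use):

pt-archiver \
  --source h=localhost,D=test,t=sbtest1,i=k_1 \
  --purge \
  --where "k < 500000" \
  --limit 1000 --commit-each --progress 100000
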
[Read more]