After upgrading some of our slaves to the latest 5.7, I
found what looks like a serious regression introduced in
MySQL 5.7.
A couple of weeks ago I noticed that the error log file of one of
our clusters, where I had implemented my in-place transparent compression of binary
logs, was flooded with the following
error:
[ERROR] Binlog has bad magic number; It's not a binary log
file that can be used by this version of MySQL
In the above setup this is a harmless error, and it should
only happen at server startup, when mysqld opens and
reads all available binary log files. The error is due to
the fact that, since the files are now compressed, mysqld
doesn't recognize them as valid - not an issue, as only older
files are compressed, and only after …
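The check that produces this error can be approximated in a few lines: a binary log file must begin with the 4-byte magic sequence 0xfe 'b' 'i' 'n', and a compressed file begins with something else entirely. A minimal sketch (file names are hypothetical, not from the original setup):

```python
# Sketch: reproduce the "bad magic number" check mysqld performs at startup.
# A valid binary log starts with the 4-byte magic 0xfe 'b' 'i' 'n';
# a gzip-compressed file starts with 0x1f 0x8b, so the check fails.
BINLOG_MAGIC = b"\xfebin"

def looks_like_binlog(path):
    """Return True if the file starts with the binlog magic number."""
    with open(path, "rb") as f:
        return f.read(4) == BINLOG_MAGIC

# Demo with two throwaway files (hypothetical names):
with open("mysql-bin.000001", "wb") as f:
    f.write(BINLOG_MAGIC + b"...events...")
with open("mysql-bin.000002", "wb") as f:
    f.write(b"\x1f\x8b" + b"...compressed...")   # gzip magic instead

print(looks_like_binlog("mysql-bin.000001"))  # True
print(looks_like_binlog("mysql-bin.000002"))  # False: "bad magic number"
```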
Here's something that has puzzled me for several weeks.
Right after migrating MySQL from 5.6 to 5.7, we started
experiencing random xtrabackup failures on some, but not all, of
our slaves.
The failures were only happening when taking an incremental
backup, and it would always fail on the same table on each slave,
with errors similar to the following:
171106 13:00:33 [01] Streaming ./gls/C_GLS_IDS_AUX.ibd
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for the remaining bytes.
InnoDB: 262144 bytes should have been read. Only 114688 bytes
read. Retrying for …
Long time no post.... :-)
Here's something interesting.
Last week I decided to give MySQL 5.7 a try (yes, I am a kinda
conservative DBA...) and the very same day that I installed my
first 5.7 replica I noticed that, after changing my own password
on the 5.6 master, I could no longer connect to the 5.7
slave.
Very annoying, to say the least! So I went and dug out the root
password (which we do not normally use) and when I connected to
the slave I was surprised to see that my password's hash on the
5.7 slave was different from the hash on the 5.6 master. No
wonder I couldn't connect....
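For the record, a mysql_native_password hash as shown in mysql.user is just SHA1 applied twice, so it is easy to compute one locally and compare master and slave (a sketch for illustration, not part of the original troubleshooting):

```python
from hashlib import sha1

def native_password_hash(password):
    """Compute a mysql_native_password hash: '*' + SHA1(SHA1(password))."""
    return "*" + sha1(sha1(password.encode()).digest()).hexdigest().upper()

# The same cleartext always yields the same hash, so hashes that differ
# between master and slave mean the stored secrets themselves differ.
print(native_password_hash("s3cret"))
```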
A bit of research in the MySQL documentation and I understood
that 5.7 introduced a few changes around the way you work with
users' passwords. SET PASSWORD is now deprecated in favour
of ALTER USER: see MySQL 5.7 Reference Manual …
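In practice, where on 5.6 you would have run SET PASSWORD, the 5.7 way looks like this (account name is hypothetical):

```sql
-- Deprecated as of 5.7:
SET PASSWORD FOR 'app_user'@'%' = PASSWORD('new_password');

-- Preferred in 5.7:
ALTER USER 'app_user'@'%' IDENTIFIED BY 'new_password';
```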
Sometimes MySQL surprises you in ways you would have never
imagined.
Would you think that the order in which the indexes appear in a
table matters?
It does. Mind you, not the order of the columns - the order of
the indexes.
The MySQL optimizer can, in specific circumstances, take different
paths, sometimes with nefarious effects.
Please consider the following table:
CREATE TABLE `mypartitionedtable` (
`HASH_ID` char(64) NOT NULL,
`RAW_DATA` mediumblob NOT NULL,
`EXPIRE_DATE` timestamp NOT NULL DEFAULT
CURRENT_TIMESTAMP,
KEY `EXPIRE_DATE_IX` (`EXPIRE_DATE`),
KEY `HASH_ID_IX` (`HASH_ID`)
) ENGINE=TokuDB DEFAULT CHARSET=latin1
ROW_FORMAT=TOKUDB_UNCOMPRESSED
/*!50100 PARTITION BY RANGE (UNIX_TIMESTAMP(EXPIRE_DATE))
(PARTITION p2005 VALUES LESS THAN (1487847600) ENGINE = …
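To see which of the two indexes the optimizer actually settles on, and to pin the intended one down when it chooses badly, EXPLAIN and an index hint are the usual tools (the query below is illustrative, not from the original post):

```sql
-- Which index does the optimizer pick for this scan?
EXPLAIN SELECT COUNT(*) FROM mypartitionedtable
WHERE EXPIRE_DATE < NOW() - INTERVAL 30 DAY;

-- Force the intended index if the chosen one is wrong:
SELECT COUNT(*) FROM mypartitionedtable FORCE INDEX (EXPIRE_DATE_IX)
WHERE EXPIRE_DATE < NOW() - INTERVAL 30 DAY;
```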
If you are using Percona XtraBackup with
xbcrypt
to create encrypted backups, and are using versions older than 2.3.6 or 2.4.5, we advise that you upgrade Percona XtraBackup.
Note: this does not affect encryption …
The details on this issue are here:
https://github.com/facebook/mysql-5.6/issues/369
This test is very simple. I loaded the SSB (star schema
benchmark) data for scale factor 20 (12GB raw data), added
indexes, and tried to count the rows in the table.
After loading data and creating indexes, the .rocksdb data
directory is 17GB in size.
A full table scan "count(*)" query takes less than four minutes,
sometimes reading over 1M rows per second, but when scanning the
index to accomplish the same count, the database can only scan
around 2000 rows per second. The four minute query would take an
estimated 1000 minutes, a 250x difference.
I have eliminated the type of CRC32 function (SSE vs non-SSE) by
forcing the hardware SSE function by patching the code.
There seem to be problems with any queries …
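The 250x figure follows directly from the observed scan rates; a quick back-of-the-envelope check (the row count is an assumption based on SSB scale factor 20, roughly 120M rows in the fact table):

```python
# Back-of-the-envelope check of the 250x estimate (inputs assumed
# from the post: ~4-minute full scan, ~2000 rows/s index scan).
rows = 120_000_000            # assumed: SSB SF20 fact table row count
full_scan_minutes = 4         # observed full table scan time
index_scan_rate = 2_000       # rows/s observed when scanning the index

index_scan_minutes = rows / index_scan_rate / 60
print(index_scan_minutes)                      # 1000.0
print(index_scan_minutes / full_scan_minutes)  # 250.0
```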
One of the main tasks that any DBA has to deal with is certainly
data retention. Especially when your developers like to store
logging and tracking information inside the database. This is not
something that I would recommend (don't try it at home!), but when
you're late to the party (read: someone had taken this path
before you joined), then you'll have to live with it.
Data retention in MySQL is usually implemented by partitioning the
table by RANGE on a selected column, usually the one containing the row
creation timestamp. New partitions are created in advance for the
coming days, weeks or months, and a script runs nightly to
drop the partitions that are older than the set retention
period.
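The pattern described above boils down to two statements run by the nightly script (table and partition names here are hypothetical):

```sql
-- Create tomorrow's partition in advance by splitting the catch-all:
ALTER TABLE tracking_log REORGANIZE PARTITION pmax INTO (
    PARTITION p20171108 VALUES LESS THAN (UNIX_TIMESTAMP('2017-11-09')),
    PARTITION pmax VALUES LESS THAN MAXVALUE
);

-- Drop the partition that has fallen out of the retention window:
ALTER TABLE tracking_log DROP PARTITION p20171008;
```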
There are many tools that can be used to automate data retention;
I have chosen the excellent pdb-parted, a nice and cozy Perl
script that you can find in the PalominoDB repository on GitHub
(since PalominoDB is no longer in existence, …
If there is something that I love about information technology, it is
the incredible number of unpredictable ways that bugs can bite
you where and when you least expect them.
This one I'm about to describe is a serious one, yet there has
been a very quiet response from the Percona bug team at Launchpad,
where I first submitted it (because we use Percona Server here).
If you have read my other posts you know by now that I am a
laz... err, busy guy, so bear with me if only today I have
verified that this is, in fact, an upstream bug and have raised it
with Oracle too.
It affects 5.6.33, the latest version at the time of this
article.
Anyway, the bug started to manifest itself in the form of
complete stalls of our main cluster, which were happening at
different times and apparently in a way that was unrelated to the
amount of traffic hitting the master. When stalling, system
CPU time was topping 100% of total available …
A question that sometimes comes to mind when starting with
MySQL is whether to use the DATETIME or TIMESTAMP data type,
since both appear to store the same date and time components.
Similarities between DATETIME and TIMESTAMP:
1. Values contain both date and time parts.
2. Format of retrieval and display is "YYYY-MM-DD
HH:MM:SS".
3. Can include a trailing fractional seconds part in up to
microseconds (6 digits) precision.
4. With the fractional part included, the format for these values
is "YYYY-MM-DD HH:MM:SS[.fraction]".
5. Both TIMESTAMP and (as of MySQL 5.6.5) DATETIME offer
automatic initialization and updating to the current …
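A minimal table showing the shared behaviour (table and column names are illustrative):

```sql
-- Both types accept fractional seconds and support automatic
-- initialization/update to the current timestamp (5.6.5+ for DATETIME):
CREATE TABLE t_times (
    dt DATETIME(6)  NOT NULL DEFAULT CURRENT_TIMESTAMP(6)
                    ON UPDATE CURRENT_TIMESTAMP(6),
    ts TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6)
                    ON UPDATE CURRENT_TIMESTAMP(6)
);
```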