When you think about IOPS, you probably think about writes because MySQL write I/O has a long tradition of optimization, benchmarking, new algorithms, new storage engines, and so forth. There’s no shortage of material on MySQL write I/O; just two examples from Percona are Scaling IO-Bound Workloads for MySQL in the Cloud and Tuning MySQL/InnoDB Flushing for a Write-Intensive Workload. But in this short blog post I highlight two other, less common aspects of MySQL I/O: reads and surprises.
This post is a sister post to MySQL Master Replication Crash Safety Part #5: making things faster without reducing durability. There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended. You should start with the main post and come back here for more details.
And this Part #5 of the series has many sub-parts. So far, …
This post is a sister post to MySQL Master Replication Crash Safety Part #4: benchmarks of high and low durability. There is no introduction or conclusion to this post, only landing sections: reading this post without its context is not recommended. You should start with the main post and come back here for more details.
Environment
My benchmark environment is composed of three VMs in …
In the InnoDB plugin, a new variable was added named innodb_io_capacity, which controls the maximum number of I/O operations per second that InnoDB will perform (this includes the flushing rate of dirty pages as well as the insert buffer (ibuf) batch size).

First off, let me just say this is a welcome addition (an addition provided by the Google Team, fwiw).

However, before this was configurable, the internal hard-coded value was 100. When it became configurable, the default was increased to 200. For many systems, this is not an issue (i.e., the overall system can perform 200 IOPS). However, there are still many disks (which are often the bottleneck) out there that are …
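If the disk subsystem cannot actually deliver 200 IOPS, the setting can be lowered to match. A minimal sketch, assuming MySQL 5.1 or later with the InnoDB plugin, where innodb_io_capacity is a dynamic global variable:

-- Check the current value:
SHOW GLOBAL VARIABLES LIKE 'innodb_io_capacity';

-- Lower it at runtime for storage that cannot sustain 200 IOPS:
SET GLOBAL innodb_io_capacity = 100;

-- To persist the change across restarts, add it to my.cnf under [mysqld]:
--   innodb_io_capacity = 100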
At Kscope this year, I attended a half-day in-depth session entitled Data Warehousing Performance Best Practices, given by Maria Colgan of Oracle. In that session, there was a section on how to determine I/O throughput for a system, because in data warehousing I/O operations per second (IOPS) are less important than I/O throughput (how much actual data goes through, not just how many reads/writes).
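To make that distinction concrete, with illustrative numbers: 200 IOPS of 16 KB random reads moves only about 3.2 MB/s, while 200 IOPS of 1 MB multi-block reads moves 200 MB/s. A data warehouse scanning large tables cares about the second figure, not the first.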
The section contained an Oracle-specific in-database tool, and a standalone tool that can be used on many operating systems, regardless of whether or not a database exists:
If Oracle is installed, run DBMS_RESOURCE_MANAGER.CALIBRATE_IO:

SET SERVEROUTPUT ON
DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  -- DBMS_RESOURCE_MANAGER.CALIBRATE_IO(<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(2, 10, iops, mbps, lat);
  …
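For context, a sketch of how such a calibration block is typically completed; the excerpt above is cut off, so the PUT_LINE reporting at the end is my addition, while the call itself uses Oracle's documented signature (number of physical disks, maximum tolerated latency in ms, then three OUT parameters for IOPS, MB/s, and latency):

SET SERVEROUTPUT ON
DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
  -- 2 physical disks, 10 ms maximum tolerated latency:
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(2, 10, iops, mbps, lat);
  -- Report what the storage can actually deliver:
  DBMS_OUTPUT.PUT_LINE('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE('max_mbps = ' || mbps);
  DBMS_OUTPUT.PUT_LINE('latency  = ' || lat);
END;
/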