When InnoDB compresses a page, it needs the result to fit into its
predetermined compressed page size (specified with
KEY_BLOCK_SIZE). When the result does not fit, we call that a
compression failure. In this case InnoDB needs to split up the
page and try to compress it again. Needless to say, compression
failures are bad for performance and should be minimized.
Whether the compressed result will fit largely depends on the
data being compressed, and some tables and/or indexes may contain
more compressible data than others. So it would be nice if the
compression failure rate, along with other compression stats,
could be monitored on a per-table or even per-index basis,
wouldn't it?
This is where the new INFORMATION_SCHEMA table in MySQL 5.6 comes
in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this
helpful information. It contains the following fields:
DATABASE_NAME, TABLE_NAME, INDEX_NAME, COMPRESS_OPS,
COMPRESS_OPS_OK, COMPRESS_TIME, UNCOMPRESS_OPS and
UNCOMPRESS_TIME.
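As a minimal sketch (not from the original post), here is one way you could turn COMPRESS_OPS and COMPRESS_OPS_OK into a per-index failure rate with Python: COMPRESS_OPS counts all compression attempts and COMPRESS_OPS_OK counts the ones that fit, so the difference is the number of compression failures. The connection parameters below are placeholders, the mysql-connector-python package is assumed to be installed, and note that 5.6 gates collection of these per-index stats behind the innodb_cmp_per_index_enabled setting.

# Sketch: report per-index compression failure rates from
# INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX (MySQL 5.6).
# Assumes mysql-connector-python; credentials are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = conn.cursor()
cur.execute(
    "SELECT database_name, table_name, index_name, compress_ops, compress_ops_ok "
    "FROM information_schema.innodb_cmp_per_index"
)
for db, tbl, idx, ops, ops_ok in cur.fetchall():
    # COMPRESS_OPS is all attempts, COMPRESS_OPS_OK the ones that fit,
    # so (ops - ops_ok) is the number of compression failures.
    failure_rate = (ops - ops_ok) / ops if ops else 0.0
    print(f"{db}.{tbl} ({idx}): {failure_rate:.1%} compression failures")
cur.close()
conn.close()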
When preparing for the IOUG Collaborate 12 deep dive on deploying Oracle Databases for high availability, I wanted to provide some feedback on which hardware components fail most frequently and which ones fail less often. I believe I have a reasonably good idea about that, but I thought that providing some more objective data would be better. I couldn't find any results from more scientific research, so I decided to organize a poll. This blog post shows the results, which I promised to share with several groups.
The results are also in the presentation material, but they might be hidden deep within the 100+ slides, so here is the dedicated blog post with some comments on the …
[Read more]
Last night my residential area lost power for about 2 hours, between 2 and 4 am. This reminded me of something, and there are analogies to MySQL infrastructure. Power companies have invested a lot of money in recent years in making the supply more reliable. But it does still fail occasionally.
From my perspective, the question becomes: is it worth the additional investment for the power companies? Those extra few decimal points of reliability come at a very high cost, and still things can go wrong. So a household (or business) that relies on continuity has to put other measures in place anyway. If the power company has an obligation to deliver to certain standards, it might be more economical for them to provide suitable equipment (a UPS, a small generator) to these households and businesses (for free!), and the resulting setup would provide actual continuity rather than merely higher reliability with occasional failures. Everybody …
[Read more]
Please make it descriptive, graphic, and if anything burnt or
exploded, I'd love to have pictures.
Include an approximate timeline of when things happened and when
it was all working again (if ever).
Thanks!
This somewhat relates to the earlier post "A SAN is a single
point-of-failure, too". Somehow people get into scenarios where
highly virtualised environments with SANs get things like
replication and everything, but it all runs on the same hardware
and SAN backend. So if this admittedly very nice hardware fails
(and it will!), the degree of "we're stuffed" is particularly
high. The degree to which business processes rely on that
infrastructure is possibly a key factor there, rather than purely
technical issues.
Anyway, if you have good stories of (distributed?) SAN and VM
infra failure, please step up and tell all. It'll help prevent
similar issues for …