Showing entries 11 to 20 of 24
Displaying posts with tag: Ideas
Few more ideas for InnoDB features

As you can see, MySQL is doing great on InnoDB performance improvements, so we decided to concentrate more on additional InnoDB features which will make a difference.

Besides the ideas I put forward earlier (http://www.mysqlperformanceblog.com/2009/03/30/my-hot-list-for-next-innodb-features/) - one of them, moving InnoDB tables between servers, is currently under development - we have a few more:

- Stick some InnoDB tables / indexes in the buffer pool, or set a priority for InnoDB tables. That means tables with higher priority will have a better chance of staying in the buffer pool than tables with lower priority. Link to blueprint: https://blueprints.launchpad.net/percona-patches/+spec/lru-priority-patch

- Separate LRU list into several …

[Read more]
Adjusting Innodb for Memory resident workload

As larger and larger amounts of memory become common (512GB is something you can fit into a relatively commodity server these days), many customers choose to build their application so that all or most of their database (frequently Innodb) fits in memory.

If all tables fit in the Innodb buffer pool, read performance will be quite good; however, writes will still suffer because Innodb does a lot of random IO during fuzzy checkpoint operations, which often becomes a bottleneck. This problem makes some customers who are not concerned with persistence run Innodb off a RAM drive.

In fact, with relatively simple changes Innodb could be made to perform much better for memory-resident workloads, and we should consider fixing these issues for XTRADB.

Preload: It is possible to preload all Innodb tables (ibdata, .ibd files) on system start - this would avoid the warmup problem and also make crash recovery fast even with very large …

[Read more]
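The preload idea in the entry above is about reading the ibdata / .ibd files themselves at startup, but a similar warm-up effect can be approximated from plain SQL by scanning the clustered index and each secondary index so their pages get pulled into the buffer pool. A rough sketch, with placeholder table and index names:

  -- Pull the clustered index (the row data) into the buffer pool
  SELECT COUNT(*) FROM orders FORCE INDEX (PRIMARY);

  -- Pull a secondary index into the buffer pool
  SELECT COUNT(customer_id) FROM orders FORCE INDEX (idx_customer_id);

This can be slower than reading the files sequentially, which is exactly the argument for doing the preload at the file level, but it works without any server changes.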
High-Performance Click Analysis with MySQL

We have a lot of customers who do click analysis, site analytics, search engine marketing, online advertising, user behavior analysis, and many similar types of work.  The first thing these have in common is that they're generally some kind of loggable event.

The next characteristic of a lot of these systems (real or planned) is the desire for "real-time" analysis.  Our customers often want their systems to provide the freshest data to their own clients, with no delays.

Finally, the analysis is usually multi-dimensional.  The typical user wants to be able to generate summaries and reports in many different ways on demand, often to support the functionality of the application as well as to provide reports to their clients.  Clicks by day, by customer, top ads by clicks, top ads by click-through ratio, and so on for dozens of different types of slicing and dicing.

And as a result, one of the most common …

[Read more]
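The excerpt above cuts off before the post's actual recommendation, but one common pattern for this kind of real-time, multi-dimensional reporting is to maintain summary tables as the click events arrive, instead of scanning the raw event log for every report. A minimal sketch, with made-up table and column names:

  CREATE TABLE ad_clicks_by_day (
    day    DATE   NOT NULL,
    ad_id  INT    NOT NULL,
    clicks BIGINT NOT NULL,
    PRIMARY KEY (day, ad_id)
  ) ENGINE=InnoDB;

  -- For each click event, bump the per-day counter
  INSERT INTO ad_clicks_by_day (day, ad_id, clicks)
  VALUES (CURRENT_DATE, 42, 1)
  ON DUPLICATE KEY UPDATE clicks = clicks + 1;

  -- "Top ads by clicks" then becomes a cheap query against the summary
  SELECT ad_id, SUM(clicks) AS total
  FROM ad_clicks_by_day
  WHERE day >= CURRENT_DATE - INTERVAL 7 DAY
  GROUP BY ad_id
  ORDER BY total DESC
  LIMIT 10;

Each extra dimension (by customer, by click-through ratio, and so on) typically needs its own summary table or additional columns.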
SHOW OPEN TABLES - what is in your table cache

One command which few people realize exists is SHOW OPEN TABLES - it allows you to examine which tables you have open right now:

  mysql> SHOW OPEN TABLES FROM test;
  +----------+-------+--------+-------------+
  | Database | Table | In_use | Name_locked |
  +----------+-------+--------+-------------+
  | test     | a     |      3 |           0 |
  +----------+-------+--------+-------------+
  1 row in set (0.00 sec)

This command lists all non-temporary tables in the table cache, showing each of them only once (even if a table is opened more than once).

In_use shows how many threads are …

[Read more]
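Alongside SHOW OPEN TABLES, the overall health of the table cache can be checked from the server's status counters: if Opened_tables keeps growing quickly on a warmed-up server, the cache is probably too small. The sizing variable is table_cache in MySQL 5.0 and table_open_cache in 5.1 and later:

  mysql> SHOW GLOBAL STATUS LIKE 'Open%tables';
  mysql> SHOW VARIABLES LIKE 'table%cache%';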
Thoughts on Innodb Incremental Backups

For normal Innodb "hot" backups we use LVM or other snapshot-based technologies with pretty good success. However, incremental backups remain a problem.

First, why do you need incremental backups at all? Why not just take full backups daily? The answer is space - if you want to keep several generations to be able to restore to, keeping a huge number of full copies of a large database is not efficient, especially if it only changes a couple of percent per day.

The solution MySQL offers - using the binary log - works in theory, but it is not overly useful in practice because it may take way too long to catch up using the binary log. Even if you have very light updates and can replay a full day of updates within an hour, it will still take over 24 hours to cover a month's worth of binary logs... and quite typically you would have much higher update traffic.

Another solution is …

[Read more]
Living with backups

Everyone does backups. Usually it’s some nightly batch job that just dumps all MySQL tables into a text file or simply copies the binary files from the data directory to a safe location. Obviously both ways involve much more complex operations than my last sentence suggests, but that is not important right now. Either way the data is out and ready to save someone’s life (or job at least). Unfortunately, taking a backup does not come free of cost. On the contrary, it’s more like running very heavy queries against each table in the database when mysqldump is used, or reading a lot of data when copying physical files, so the price may actually be rather high. And the more effectively the server resources are utilized, the more of a problem that becomes.

What happens when you try to get all the data?

The most obvious answer is that it needs to be read, through I/O requests, from the storage it resides on. The storage is …

[Read more]
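To make the "very heavy queries against each table" point concrete: with the common --single-transaction option on InnoDB, mysqldump roughly does the following (a simplified sketch; the exact statements depend on the mysqldump options and version, and "orders" is just a placeholder table name):

  -- One consistent snapshot for the whole dump
  START TRANSACTION WITH CONSISTENT SNAPSHOT;

  -- Then, for each table, a full scan that bypasses the query cache
  SELECT /*!40001 SQL_NO_CACHE */ * FROM orders;

  COMMIT;

Every one of those SELECTs reads the entire table, which is exactly the I/O and buffer pool pressure being described here.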
32-bit? Really?


Is anyone out there actually still using 32-bit systems for new deployments? On purpose?

I know I occasionally see people who have 64-bit systems and have installed a 32-bit OS on them. They fall into one of two groups: people who don’t know what they are doing (or why their server is then having memory problems), and people who have 32-bit Linux installed on their laptops because there is no good 64-bit Flash Player plugin for Linux. (/me shoots Adobe in the Face… it’s called re-compile it and release, please)

The 32-bit laptop people I don’t care about - they are not yet hosting websites on their laptops while browsing YouTube. Yet.

The others just need the learning.

Which brings me back to… should we start to consider 32-bit a dinosaur, sort of like AIX 4.1?

(I should be clear here… I am honestly asking… not just trolling. I’m also not advocating bad code - see previous …

[Read more]
A proposal for method of delivering optimizer bug fixes

Working on query optimizer bugs can be a rather frustrating experience. First, as soon as some query doesn't run as fast as it theoretically could, people will consider it a bug. On one hand that's great - you get a constant stream of user input - but on the other hand you end up with a whole pile of "bugs" which you can't hope to finish.

What's more frustrating is that even if you manage to create a fix for an optimizer bug, there is a chance it won't be allowed into the next GA (currently 5.0.70) or approaching-GA (currently 5.1.30) release (GA is our term for "stable" or "release").

The reason behind this is that most optimizer bugfixes cause the optimizer to pick different query plans, and there's no way to guarantee that the fix will be a change for the better for absolutely everyone. Experience shows that it is possible to have a query that hits two optimizer bugs/deficiencies at once in such a way that they cancel each other out, and …

[Read more]
Pluggable storage engine interface needs to support table name resolution hooks

I've started some attempts at coding the ha_trace storage engine I mentioned earlier. One of the first things that became apparent was that I needed a way to put a hook into the table name resolution code so I can wrap tables into ha_trace objects.

The need to intercept table name resolution and do something other than looking at the .frm files is not unique to the call trace engine:

  • Remote storage engines would benefit also:
    • NDB has a whole chunk of code that ships .frm files from one connected mysqld instance to another. It doesn't hook into name resolution; it ships table definitions proactively, which could be a nuisance if you use a new mysqld node to just connect and run a few queries
[Read more]
erlang and MySQL Cluster


Ok, in case you just showed up going “finally!”, I’m sorry to let you down - I haven’t yet ported the NDB API to erlang.

But I should - and I want to.

Brian was just talking about concurrent programming and mentioned erlang. Turns out that when I was starting off working on the NDB/Connectors, Elliot asked me if I’d considered erlang. Always up for learning a new language, I did a quick check, but there were no swig bindings, so I put it off until later.

Then later came and I still hadn’t written any code, so I found a book online and started reading. I have to say erlang is very cool.

There is no way on earth I can wrap the NDB API in any meaningful way using erlang. However, I might be able to reimplement the wire protocol in erlang and have the resulting thing be way more stable and scalable. Thing is - it really made me …

[Read more]