Displaying posts with tag: benchmark
vmplot.sh, a useful tool for MySQL performance tuning

I don’t know if it is because of my science background (I am a physicist), but I do like graphs, especially when I am doing performance tuning. On UNIX-like operating systems, the vmstat command gives you an easy way to grab many essential performance counters, but generating graphs from vmstat output with tools like OpenOffice Calc is time consuming and not very efficient. To solve this, I wrote a few scripts using gnuplot, but they are not very easy to work with. Then, while doing some benchmarks with DBT2, I found the vmplot.sh script and… I like that one. I just hacked it a little bit to make it keep the graphs on screen, adding the “-persist” parameter to the gnuplot invocations. The script produces 7 graphs that are displayed on screen and saved in PNG format in /tmp. The graphs it produces are the following:

  • CPU: graphs idle, user, sys and wait time
[Read more]
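If you want a quick hand-rolled version of the same idea, it boils down to capturing vmstat output and feeding the CPU columns to gnuplot with -persist. A minimal sketch (this is not the actual vmplot.sh, and the column positions assume a typical Linux vmstat layout):

# Sample vmstat every 5 seconds for 10 minutes, keeping only the
# numeric data lines (the header lines contain letters).
vmstat 5 120 | awk '/[0-9]/ && !/[a-z]/' > /tmp/vmstat.dat

# Plot the CPU columns; on a typical Linux vmstat, us/sy/id/wa are
# columns 13-16. The -persist flag keeps the window open after gnuplot exits.
gnuplot -persist <<'EOF'
set title "CPU"
set xlabel "sample (5s intervals)"
set ylabel "percent"
plot '/tmp/vmstat.dat' using 13 with lines title 'user', \
     '' using 14 with lines title 'sys', \
     '' using 15 with lines title 'idle', \
     '' using 16 with lines title 'wait'
EOF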
Testing MySQL on the Violin Memory Flash 1010 Part III:

So we have already looked at sysbench & dbt2 tests… now we have to look at the new Juice DB benchmark. Juice runs a series of queries to generate its load; these queries are combined into a workload. I tested the v1010 with a mixed workload (a mix of short and long updates and selects), a mixed simple workload (a mix of short-running updates and selects), and a read-only workload (selects designed to hit the disk). Because this is still an evolving benchmark I am including results from an Intel MLC drive (note these boxes are vastly different). Keep in mind this is not a completely fair comparison. The Intel drive is not the enterprise-class drive, but even with the SLC drive I don’t think it’s a fair comparison. The price difference between these two solutions is ~$50/GB vs. ~$12.50/GB.

The setup for this test created about a 20GB database, with each of the 3 large tables coming in at around 6GB. I tested primarily with a …

[Read more]
Playing with Waffle & Memcached 1.3.2

So I got our Waffle-specific code over into Memcached 1.3.2 last night… not 100% sure why at this point, but I am seeing a huge change in the hit rate between 1.2.5 and 1.3.2. Take a look:

Old 1.2.5:

10290.89 new-order transactions per minute (NOTPM)


Server: 192.168.2.105 (11211)
pid: 16522
uptime: 2190
time: 1233504281
version: 1.2.5
pointer_size: 32
rusage_user: 23.753484
rusage_system: 127.611975
curr_items: 46800
total_items: 3487050
bytes: 769596712
curr_connections: 20
total_connections: 23
connection_structures: 21
cmd_get: 1752218
cmd_set: 1801081
get_hits: 1685969
get_misses: 66249
evictions: 1680669
bytes_read: 769596712
bytes_written: 769596712
limit_maxbytes: 805306368
threads: 1

New (1.3.2):
6695.67 new-order transactions per minute (NOTPM)

Server: localhost (11211)
         pid: 9778
         uptime: 2087
         time: 1238184346
         version: 1.3.2
         pointer_size: …
[Read more]
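As a sanity check, the hit rate falls straight out of those counters: get_hits / cmd_get, so the 1.2.5 run above works out to 1685969 / 1752218 ≈ 96.2% (with a striking 1.68M evictions along the way). A quick way to compute it against a live server, assuming the standard memcached stats command and netcat:

# Pull the counters over the text protocol and divide with awk.
# Stats lines look like: STAT get_hits 1685969
printf 'stats\r\nquit\r\n' | nc 192.168.2.105 11211 | awk '
    $2 == "get_hits" { hits = $3 }
    $2 == "cmd_get"  { gets = $3 }
    END { printf "hit rate: %.1f%%\n", 100 * hits / gets }'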
Testing MySQL on the Violin Memory Flash 1010 Part II:

Continuing my series on the Violin Memory 1010, I am turning my attention to the DBT2 benchmark, which simulates an OLTP workload. I started with my typical “waffle” workload, which is a 20-warehouse setup (about 2.5GB) with a 768M buffer pool, and I compared it to a 5G buffer pool with the same setup. The ultimate goal, the nirvana state of any system, is for the storage system to be as fast as having everything in memory. The closer we can get, the better off we are. The sad thing is that even with the fastest of flash solutions we see response times in the 70-300 microsecond range, which is very far off the nanosecond response times delivered by memory. That being said, let’s see how close we can get to a fully cached database:

I am including the Intel #’s for perspective here and to show just how close we can get to full in-memory speeds. The fact is I am comparing a potentially …

[Read more]
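For anyone trying to reproduce the comparison, the only knob that changes between the two runs is the InnoDB buffer pool size; the values below are the ones from the post, in a minimal my.cnf excerpt:

# my.cnf: the constrained run (dataset is ~2.5GB, so much of it stays on storage)
[mysqld]
innodb_buffer_pool_size = 768M
# ...versus the fully cached run:
# innodb_buffer_pool_size = 5G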
Real Time Data Warehousing Presentation and Video

At the March Boston MySQL User Group meeting, Jacob Nikom of MIT’s Lincoln Laboratory presented “Optimizing Concurrent Storage and Retrieval Operations for Real-Time Surveillance Applications.” In the middle of the talk, Jacob said he sometimes calls what he did in this application “real-time data warehousing”, which was so accurate that I decided to give that title to this blog post.

The slides can be downloaded in PDF format (1.3 MB) at http://www.technocation.org/files/doc/Concurrent_database_performance_02.pdf. The 54-minute video can be downloaded (644 MB) at http://technocation.org/node/693/download or streamed directly in your browser at http://technocation.org/node/693/play. …

[Read more]
Testing MySQL on the Violin Memory Flash 1010 Part I:

Continuing my series of in-depth looks at flash appliances, SANs, and drives, I spent a few weeks test-driving the Violin Memory flash (and DDR-based) solutions. Just from the specs the Violin Memory 1010 is impressive. According to the site, the v1010 does 300K random reads per second and 200K random writes, with latency of less than 300 microseconds! That is pretty impressive! But as I have stated before, it is difficult to test these limits with our current set of benchmarks. For my tests I ran this through the sysbench fileio tests and dbt2 to get a feel for performance, but I was really eager to test the new Juice DB benchmark to really drive IO. For the test, Violin generously made available a 4-core (3.4GHz) server with 8GB of memory, with access to a 360GB DDR-based v1010 and then a 320GB DDR-based v1010. Unlike the RamSan I tested a …

[Read more]
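For reference, a sysbench fileio run of the sort mentioned above looks roughly like this (classic sysbench 0.4 syntax; the file size, thread count, and duration here are placeholders of mine, not the numbers from this test):

# Prepare test files larger than RAM so reads actually hit the device.
sysbench --test=fileio --file-total-size=64G prepare

# Random read/write pass with O_DIRECT to bypass the OS page cache.
sysbench --test=fileio --file-total-size=64G --file-test-mode=rndrw \
    --file-extra-flags=direct --num-threads=16 --max-time=300 \
    --max-requests=0 run

sysbench --test=fileio --file-total-size=64G cleanup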
MySQL 5.x performance with logging

There has been much talk about MySQL performance related to logging. Since MySQL 5.1.21, when Bug #30414 was reported (Slowdown (related to logging) in 5.1.21 vs. 5.1.20), I have been monitoring the performance of the server, both on 5.0 and 5.1.
Recently, I got a very powerful server, which makes these measurements meaningful.
Thus, I measured the performance of the server, using all publicly available sources, because I want this benchmark to be repeatable by everyone.
I will first describe the method used for the benchmarks, and then I will report the results.

The server

The server runs Red Hat Enterprise Linux 5.2 on an 8-core processor, with 32 GB RAM and 1.5 TB storage.


$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.2 (Tikanga)

$ cat /proc/cpuinfo |grep "processor\|model name" | sort …
[Read more]
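For context, the logging in question became runtime-switchable in 5.1 (general_log, slow_query_log, and log_output are dynamic server variables), which makes before/after comparisons easy to script. A minimal sketch of toggling the two states between benchmark runs:

# Route logs to a file and enable them for the "logging on" run...
mysql -e "SET GLOBAL log_output = 'FILE'"        # or 'TABLE', or 'NONE'
mysql -e "SET GLOBAL general_log = 'ON'; SET GLOBAL slow_query_log = 'ON'"
# ...then disable them and repeat the run for the baseline.
mysql -e "SET GLOBAL general_log = 'OFF'; SET GLOBAL slow_query_log = 'OFF'"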
Juicey: New Juice Benchmark Output

Pushed about a ton of changes to the Juice benchmark since I announced it… lots of little fixes and a few big ones… really there are too many to talk about, and a lot of the new stuff is boring (who cares about adding ramp-up time? It’s just not that interesting to talk about).

The big change is all about the Analyze.pl script. I am adding a ton to this script in order to try to give a concise and clear picture of what happened during the benchmark run. I am excited about it (I am a geek, though), so I thought I would share what it looks like so far:

Total Test Runtime = 513.110480070114 seconds, limiting results to 300 seconds however
QNum:      4 ... QCount:      9 ... QTime:   0.028973 ... Max:   0.037164 ... FlatTime:   0.030530  ... Min5%:   0.015769  ... Max5%:   0.037164
QNum:      7 ... QCount:   2900 ... QTime:   0.001224 ... Max:   0.027039 ... FlatTime:   0.000658  ... Min5%:   0.000134  ... Max5%:   0.012549
QNum:      8 ... …
[Read more]
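Since those per-query lines are fixed-format, they are also easy to post-process. For example, a one-liner of mine (analyze_output.txt is a placeholder filename) that pulls the query number, count, and average time back out of a saved report:

# Field layout per line: QNum: <n> ... QCount: <n> ... QTime: <avg> ...
awk '/^QNum:/ { print "query", $2, "count", $5, "avg", $8 }' analyze_output.txt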
How much does it cost to update an index?

I was asked today about the cost of adding an index on a frequently updated column (like a timestamp, count, or weight)… typically my answer is “it depends.” But this question was narrowed down to a specific case: an update on a secondary index based on a PK lookup. I decided to try to give an exact answer. I hacked the Juice DB benchmark to attack my medium-sized table (which magically already had a count column in it). I then cranked up the test. A few more details: query 23 updated a column without an index, while queries 21, 22, and 24 updated the d_count column. Query 21 adds 5 to the count, query 22 adds 150, and query 24 subtracts 1…. Here are the results:

With a solo index on d_count:

Run Number:  86  threads:  8 Length :  340 LoadType: upd
Total Test Runtime = 375.245010137558 seconds, limiting results to 300 seconds however
QNum:     21 ... QCount:  78448 ... QTime:   0.003985 ... Max:   0.095937 ... FlatTime:   0.003673 …
[Read more]
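The shape of the test is easy to reconstruct. A hypothetical sketch (the table name "docs" and the PK value are placeholders of mine; d_count and the +5/+150/-1 deltas are from the post):

# Add the secondary index under test:
mysql -e "ALTER TABLE docs ADD INDEX idx_d_count (d_count)"
# The three indexed-column updates (queries 21, 22, 24), each a PK lookup:
mysql -e "UPDATE docs SET d_count = d_count + 5   WHERE id = 42"
mysql -e "UPDATE docs SET d_count = d_count + 150 WHERE id = 42"
mysql -e "UPDATE docs SET d_count = d_count - 1   WHERE id = 42"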
Setting up DBT-2

DBT-2 is a TPC-C-like OLTP benchmark, and it is very popular among MySQL users. It is used by the MySQL QA team to test stability and performance before a release. However, the steps to set up DBT-2 are a little bit messy, and its README files include some dummy information. So I will introduce the steps below:

1. Download it!

You can download the source code from here: http://osdldbt.sourceforge.net/

2. Required packages

The following Perl packages are required to build DBT-2. Unfortunately, the configure script doesn't complain even if they are missing. Install them using, e.g., CPAN.

shell> sudo cpan Statistics::Descriptive
shell> sudo cpan Test::Parser
shell> sudo cpan Test::Reporter
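
Since configure won't warn you about missing modules, it is worth checking them by hand before building; each of these should print nothing and exit cleanly if the module is installed:

shell> perl -MStatistics::Descriptive -e 1
shell> perl -MTest::Parser -e 1
shell> perl -MTest::Reporter -e 1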

If you want to make a graph from the output, you have to install gnuplot in advance, e.g. on Ubuntu …

[Read more]