Displaying posts with tag: Tech
Always Test with Real Data

As I previously noted, I’m in the midst of converting some data (roughly 2 billion records) into documents that will live in a MongoDB cluster. Any time you move data into a new data store, you have to be mindful of the limitations and bottlenecks you might encounter, since every system makes compromises of one sort or another.

In MySQL, one of the biggest compromises we make is deciding which indexes really need to be created. Indexes are great when you’re searching the data, but not so great when you’re adding and deleting many rows, since every index has to be maintained on each write.

In MongoDB, the thing that gets me is the document size limit. Currently an object stored in MongoDB cannot be larger than 4MB (though that’s likely to be raised soon). Now, you can build your own MongoDB binaries and tweak that parameter, but I’ve been …

[Read more]
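As a rough illustration of the kind of check that catches this early, here is a minimal sketch assuming the bson package that ships with pymongo (this is illustrative, not the author's conversion code, and the sample records are invented):

import bson  # the BSON codec that ships with pymongo

MAX_DOC_BYTES = 4 * 1024 * 1024  # the 4 MB limit mentioned in the post

def oversized(documents, limit=MAX_DOC_BYTES):
    # Yield (index, size) for every document whose BSON encoding exceeds the limit.
    for i, doc in enumerate(documents):
        size = len(bson.encode(doc))  # size as MongoDB would store and transmit it
        if size > limit:
            yield i, size

# Run the check against a sample of real records before bulk-loading.
sample = [
    {"_id": 1, "body": "x" * 100},                # fine
    {"_id": 2, "body": "x" * (5 * 1024 * 1024)},  # would be rejected by the server
]
for index, size in oversized(sample):
    print("document %d is %d bytes, over the limit" % (index, size))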
MySQL 5.5.4-m3 in Production

Back in April I wrote that MySQL 5.5.4 is Very Exciting and couldn’t wait to start running it in production. Several months later, we are using 5.5.4-m3 on all the slaves in what is arguably our most visible user-facing cluster (and one of our busiest). Along the way we deployed some new hardware (Fusion-io), but not a complete replacement: some boxes are Fusion-io, some local RAID, and some SAN. We have too many eggs for any one basket.

We also converted the tables to the Barracuda format in InnoDB, dropped an index or two, converted some important columns to BIGINT UNSIGNED, and enabled 2:1 compression for the table that has big chunks of text in it. Aside from a few false starts with the Barracuda conversion and compression, things went pretty well. Coming from 5.0 (skipping 5.1 entirely) we had some my.cnf work to do to …

[Read more]
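The statements involved in that kind of conversion look roughly like the following. This is a generic sketch with placeholder table, column, and index names rather than the author's actual commands, and it assumes MySQL 5.5:

mysql> SET GLOBAL innodb_file_format = Barracuda;
mysql> SET GLOBAL innodb_file_per_table = 1;
mysql> ALTER TABLE messages MODIFY COLUMN message_id BIGINT UNSIGNED NOT NULL;
mysql> ALTER TABLE messages ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
mysql> ALTER TABLE messages DROP INDEX ix_rarely_used;

KEY_BLOCK_SIZE=8 against InnoDB's default 16 KB page size is what targets the roughly 2:1 compression mentioned above; each ALTER TABLE rebuilds the table, which is the slow part on large tables.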
Database Drama

There’s been a surprising amount of drama (in some circles, at least) about database technology recently.  I shouldn’t be surprised, given the volume of reactions to the I Want a New Datastore post that I wrote. (Hint: I still hear from folks pitching the newest data storage systems.)

The two things that caught my eye recently involve Cassandra and MongoDB (and, indirectly, MySQL). First was what I read as a poorly thought out and whiny critique of MongoDB’s durability model: MongoDB Performance & Durability. Just because something is the default doesn’t mean you have to use it that way. Thankfully there was reasoned discussion and reaction elsewhere, including the …

[Read more]
Introduction to memcached

These are the slides from a talk I gave earlier this week to students of the professional bachelor in ICT course at KaHo St. Lieven. I wanted to give a clear and simple introduction to the memcached service, as I think it’s an invaluable tool in today’s web development.
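To give an idea of how little code a cache lookup takes, here is a minimal cache-aside sketch in Python using the pymemcache client (not taken from the slides; the key scheme is made up and memcached is assumed to be running on its default port):

from pymemcache.client.base import Client

client = Client(("localhost", 11211))  # assumes memcached on the default port

def render_homepage(user_id):
    # Stand-in for the expensive work (database queries, templating, ...).
    return ("<html>homepage for user %d</html>" % user_id).encode("utf-8")

def get_homepage(user_id):
    key = "homepage:%d" % user_id          # hypothetical key scheme
    page = client.get(key)                 # None means a cache miss
    if page is None:
        page = render_homepage(user_id)
        client.set(key, page, expire=300)  # keep the rendered page for five minutes
    return page

print(get_homepage(42))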

MongoDB Early Impressions

I’ve been doing some prototyping work to see how suitable MongoDB is for replacing a small (in number, not size) cluster of MySQL servers. The motivation for looking at MongoDB in this role is that we need a flexible and reliable document store that can handle sharding, a small but predictable write volume (1.5 to 2.0 million new documents daily), light indexing, and map/reduce operations for heavier batch queries. Queries to fetch individual documents aren’t that common: let’s say 100/sec in aggregate at peak times.

What I’ve done so far is to create a set of Perl libraries that abstract away the data I need to store and provide a “backend” interface to which I can plug in a number of modules for talking to different data stores (including some “dummy” ones for testing and debugging). This has helped to clarify some …

[Read more]
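The Perl libraries themselves aren't shown in the excerpt, but the shape of the idea is easy to sketch. Here is a rough Python illustration (not the author's code; every class and method name is invented) of a pluggable backend interface with an in-memory dummy for testing and a MongoDB-backed implementation:

class DocumentBackend(object):
    # The interface the application codes against; concrete stores plug in below.
    def save(self, doc_id, doc):
        raise NotImplementedError
    def fetch(self, doc_id):
        raise NotImplementedError

class DummyBackend(DocumentBackend):
    # In-memory stand-in used for testing and debugging.
    def __init__(self):
        self._docs = {}
    def save(self, doc_id, doc):
        self._docs[doc_id] = doc
    def fetch(self, doc_id):
        return self._docs.get(doc_id)

class MongoBackend(DocumentBackend):
    # Thin wrapper around a pymongo collection (the collection is passed in).
    def __init__(self, collection):
        self._collection = collection
    def save(self, doc_id, doc):
        self._collection.replace_one({"_id": doc_id}, doc, upsert=True)
    def fetch(self, doc_id):
        return self._collection.find_one({"_id": doc_id})

# The application only sees the interface, so swapping data stores
# (or plugging in the dummy for a unit test) is a one-line change.
backend = DummyBackend()
backend.save(42, {"title": "example document"})
print(backend.fetch(42))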
My MySQL wishlist (revised)

I wrote about my MySQL wishlist on November 14th 2007 and now it's time for an update. I will copy-paste the old entry. The original text will be in italics.

1. Per user and/or per database quota
Would be very useful in shared hosting setups. This would also prevent one database from bringing down the whole server. Separate tablespaces on different mountpoints can ease the pain, but I consider that a nasty hack.

No update. Still problematic.

2. External authentication
I've seen numerous scripts which fetch the authentication info from LDAP, a file, another database, or some other authentication store. This should be integrated into MySQL. The MySQL grant tables should be pluggable so it is possible to write a custom authentication plugin. We already have pluggable engines and functions (UDFs), so this shouldn't be that hard …

[Read more]
Using the MySQL Test Suite

Earlier I reported two crashes related to MySQL 5.0.22 on Ubuntu 6.06 LTS.

I think those bugs show a lack of testing on the part of Canonical/Ubuntu. MySQL has quite a good test suite available, so it's not rocket science.

There are multiple reasons why you could use the MySQL Test Framework:
1. Test whether a bug you previously experienced still exists in the version you are using or planning to use.
2. Test whether configuration changes affect the stability of mysqld for better or worse.
3. Test whether important functions still return the correct results (especially important for financial systems).

$ echo "SELECT @@version;" > version.test
$ cp version.test version.result
$ mysql < version.test >> version.result
$ mysqltest --result-file=version.result …

[Read more]
Another Crash in MySQL 5.0.22 on Ubuntu 6.06 LTS

1. Set this variable
thread_stack = 265K

2. Execute this query
mysql> SELECT 0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0
+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+
0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0
+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+
0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0
+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+
0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0
+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+
0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0+0

[Read more]
Crash in MySQL 5.0.22 on Ubuntu 6.06 LTS

I found a new crasher in the MySQL 5.0 version which ships with Ubuntu 6.06 LTS.

> SELECT * FROM (SELECT mu.User FROM mysql.user mu UNION SELECT mu.user FROM mysql.user mu ORDER BY mu.user) a;
ERROR 2013 (HY000): Lost connection to MySQL server during query

The bug report: LP392236

On MySQL 5.0.51 on Debian stable it returns this error (as it should):
ERROR 1054 (42S22): Unknown column 'mu.user' in 'order clause'

The correct query looks like this (using a column number instead):
> SELECT * FROM (SELECT mu.User FROM mysql.user mu UNION SELECT mu.user FROM mysql.user mu ORDER BY 1) a;

MySQL scanning module for Metasploit

I've created a very simple MySQL scanning module for the Metasploit Framework.

1. Download the mysql_version file, rename it to mysql_version.rb, and put it in the framework-3.2/modules/auxiliary/scanner/mysql directory of your Metasploit installation.
http://compukid.no-ip.org/dev/scripts/mysql_version

2. Run it using msfcli
./msfcli auxiliary/scanner/mysql/mysql_version RHOSTS=192.168.0.1 E
[*] 192.168.0.1:3306, MySQL server version: 5.0.81-1-log (Protocol 10)

3. More options:
set THREADS to 10 and RHOSTS to 192.168.0.0/24 to scan a whole network.
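For example, mirroring the invocation from step 2 (the subnet here is only an illustration):
./msfcli auxiliary/scanner/mysql/mysql_version RHOSTS=192.168.0.0/24 THREADS=10 E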
