Adding good content to Twitter can be a pain. I can’t do it during working hours, and I don’t have much time at night. But, the more content you have, the more followers you can gain, and the more your original tweets can be seen (hopefully).
In two earlier posts, I gave some examples on how to use Perl to send tweets stored in a MySQL database to Twitter, and then how to automatically reply to your retweets with a “thanks”. In this post, I will show you how to automatically download your direct messages from Twitter, store the messages in a MySQL database, and then delete them.
In this post we will use the MySQL 5.7.7 labs release, which has support for JSON documents stored in a special data type. We will use Connector/Python and show how to get going with updating documents and getting data out.
Required reading
If you wonder what this is all about, please check the following great reads from the MySQL Server team:
- JSON Labs Release: Native JSON Data Type and Binary Format
- JSON Labs Release: JSON Functions, Part 2 — Querying …
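As a sketch of what this enables (the table and column names here are hypothetical, and an actual INSERT would need a MySQL 5.7.7 labs server with the JSON type), a Python dict serialized with json.dumps gives you a value you can pass as a Connector/Python statement parameter:

```python
import json

# A hypothetical document we want to store in a JSON column.
doc = {"name": "Geert", "tags": ["mysql", "python"], "age": 37}

# Connector/Python uses %s placeholders; json.dumps turns the dict
# into a string the server can validate and store as JSON.
stmt = "INSERT INTO docs (doc) VALUES (%s)"
params = (json.dumps(doc),)

print(stmt)
print(params[0])
```

With a live connection you would pass stmt and params to cursor.execute(); the round trip through json.dumps/json.loads is lossless for plain dicts, lists, strings, and numbers.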
MySQL slow query on RDS: If you run MySQL as your backend datastore, is there one thing you can do to improve performance across the application? Those SQL queries are surely key, and the quickest way to find the culprits is to regularly analyze your […]
Last week we released Connector/Python v2.0 (alpha); today we publish the source on GitHub. Yes, we are using Git internally and are now able to push it out on each release. Previous versions are still available through LaunchPad.
Here is the full process to get Connector/Python installed in a virtual environment. You’ll need Git installed of course.
shell> git clone https://github.com/oracle/mysql-connector-python.git cpy
shell> virtualenv ENVCPY
shell> source ENVCPY/bin/activate
(ENVCPY)shell> cd cpy
(ENVCPY)shell> python setup.py install
(ENVCPY)shell> python
>>> import mysql.connector
>>> mysql.connector.__version__
…[Read more]
There may be times when you need to create a new table in MySQL and feed it with data from another database, the Internet or from combined data sources. MS Excel is commonly used as the bridge between those data sources and a target MySQL database because of the simplicity it offers to organize the information to then just dump it into a new MySQL table. Although the last bit sounds trivial, it may actually be a cumbersome step, creating ODBC connections within Excel through Microsoft Query may not help since these are normally created to extract data from MySQL into Excel, not the opposite. What if you could do this in a few clicks from within Excel after making your data ready for export to a MySQL database?
With MySQL for Excel you can do this and this guide will teach you how easy it is.
Problem
You have a query executed by MySQL Connector/Python and would like to show column information nicely on the console.
Solution
Every cursor object has a description property. This can be used to show information about the columns in a result set.
columns = []
maxnamesize = 0
for coldesc in cur.description:
    coldesc = list(coldesc)
    coldesc[2:6] = []
    columns.append(coldesc)
    namesize = len(coldesc[0])
    if namesize > maxnamesize:
        maxnamesize = namesize

fmt = "{{nr:3}} {{name:{0}}} {{type:12}} {{null}}".format(maxnamesize + 1)

colnr = 1
for column in columns:
    (colname, fieldtype, nullok, colflags) = column
    print(fmt.format(
        nr=colnr, name=colname, null='NOT …[Read more]
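To see how this behaves without a live connection, here is a self-contained sketch that feeds the same logic a hand-made description list. The 8-element tuple shape (name, type code, four display-related fields, null flag, column flags) mirrors Connector/Python's cursor.description; the column names and values below are made up:

```python
# Fake cursor.description rows, mimicking Connector/Python's 8-element
# tuples: (name, type_code, display_size, internal_size, precision,
# scale, null_ok, flags). These example values are invented.
description = [
    ("id", 3, None, None, None, None, 0, 16899),
    ("name", 253, None, None, None, None, 1, 0),
]

columns = []
maxnamesize = 0
for coldesc in description:
    coldesc = list(coldesc)
    coldesc[2:6] = []          # drop the four display-related fields
    columns.append(coldesc)
    maxnamesize = max(maxnamesize, len(coldesc[0]))

# Build a row template whose name column is just wide enough.
fmt = "{{nr:3}} {{name:{0}}} {{type:12}} {{null}}".format(maxnamesize + 1)

for colnr, (colname, fieldtype, nullok, colflags) in enumerate(columns, 1):
    print(fmt.format(nr=colnr, name=colname, type=fieldtype,
                     null="NULL" if nullok else "NOT NULL"))
```

The doubled braces in the template survive the first .format() call, which only injects the computed column width; the second .format() in the loop fills in the per-row values.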
Over time, some options have crept into the MySQL server which are comma-separated lists of options. These include SQL_MODE, optimizer_switch, optimizer_trace, and a few other variables.
optimizer_switch is particularly problematic to work with, as it contains many options and is hard to interpret. This is what you usually see when you examine optimizer_switch:
index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,engine_condition_pushdown=on,index_condition_pushdown=on,mrr=on,mrr_cost_based=on,block_nested_loop=on,batched_key_access=off,materialization=on,semijoin=on,loosescan=on,firstmatch=on,subquery_materialization_cost_based=on,use_index_extensions=on
As you can see, telling which option is on or off is rather difficult. You can use the REPLACE function to make this easier:
mysql> select replace(@@optimizer_switch, ',','\n')\G …[Read more]
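The same readability trick works client-side too. A minimal sketch (the flag string below is the one shown above) that turns the comma-separated value into a name-to-boolean mapping, so individual switches are easy to check programmatically:

```python
# The optimizer_switch value as returned by the server (copied from above).
raw = ("index_merge=on,index_merge_union=on,index_merge_sort_union=on,"
       "index_merge_intersection=on,engine_condition_pushdown=on,"
       "index_condition_pushdown=on,mrr=on,mrr_cost_based=on,"
       "block_nested_loop=on,batched_key_access=off,materialization=on,"
       "semijoin=on,loosescan=on,firstmatch=on,"
       "subquery_materialization_cost_based=on,use_index_extensions=on")

# Split "name=on/off" pairs into a dict of booleans.
switches = {}
for item in raw.split(","):
    name, value = item.split("=")
    switches[name] = (value == "on")

print(switches["batched_key_access"])  # prints: False
```

From here it is one line to list only the disabled flags, e.g. `[n for n, on in switches.items() if not on]`.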
In MySQL, a big portion of the problems you face involves replication delays, so it's no surprise this is one of the most common questions in DBA interviews. I still find people having trouble explaining what they would do in a given situation, which is why I decided to write a bit about the subject.
1. IO
99% of the time the reason is IO. The slave cannot keep up with the volume of writes it receives from the binary logs while, in parallel, it also has to serve read queries. Despite the common belief, with MySQL it's much easier to saturate the disk subsystem (even a RAID 10 with 8 SAS disks and cache) than the network.
In this situation all you can do is try to remove some pressure from the slaves.
One way to do this is to set innodb_flush_log_at_trx_commit to 2 if it used to be 1. Usually this is enough.
…[Read more]