Learn how to set up and use Percona's clustercheck script in a MySQL Galera Cluster.
The post Percona’s Clustercheck Script for MySQL Galera appeared first on Datavail.
Sometimes we have APIs implemented in our application, and these can be tested at different levels:
1. Unit tested at the model level, to check that the logic works correctly.
2. Tested at the API-call level, to verify that all the expected endpoints work and return the data they should.
Today we will learn how to test APIs in a CodeIgniter 2.x application using PHPUnit and the Guzzle HTTP client. Guzzle is a PHP library for making HTTP requests.
Ref: https://github.com/guzzle/guzzle
"Guzzle is a PHP HTTP client that makes it easy to send HTTP …
In the MySQL Labs release of MySQL 5.7 there is a new HTTP plugin. The plugin documentation from the labs site provides this information:
The HTTP Plugin for MySQL adds HTTP(S) interfaces to MySQL. Clients can use the HTTP or HTTPS (SSL) protocol to query data stored in MySQL. The query language is SQL, but other, simpler interfaces exist. All data is serialized as JSON. This version of the MySQL Server HTTP Plugin is a Labs release, which means it is at an early development stage. It contains several known bugs and limitations, and is meant primarily to give you a rough idea of how this plugin will look someday. Likewise, the user API is anything but finalized. Be aware it will change in many respects.
In …
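The excerpt is cut off here, but as a rough, hedged sketch of the idea: the labs documentation describes SQL being sent in the URL and results coming back as JSON. The port, endpoint layout, and credentials below are assumptions based on that description, not a stable API, and may differ in your build.

# Hypothetical sketch only: querying the labs HTTP plugin with curl.
# The port (8080), the /sql/<database>/<query> URL layout, and the
# credentials are assumptions, not documented, stable interfaces.
curl --user user:password \
     "http://127.0.0.1:8080/sql/test/SELECT+1"
# The result set comes back serialized as JSON.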
In this article I will describe how to test the plain and the encrypted SMTP/POP3/IMAP and HTTP protocols with telnet and the openssl s_client command.
List of references
For a complete list of the commands available in each protocol, please check the RFCs:
SMTP: RFC 5321
POP3: RFC 1939
IMAP: RFC 3501
HTTP: RFC 2616
SMTP: sending mail
In the first example I will open a telnet connection to an SMTP server on …
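The excerpt ends here; as a hedged sketch of the kind of session the article walks through (mail.example.com and the addresses are placeholders):

# Plain SMTP over telnet; host and addresses are placeholders.
telnet mail.example.com 25
# the server greets with: 220 mail.example.com ESMTP ...
HELO client.example.com
MAIL FROM:<sender@example.com>
RCPT TO:<recipient@example.com>
DATA
Subject: test

test message body
.
QUIT

# The encrypted variants use openssl s_client instead of telnet:
openssl s_client -connect mail.example.com:465 -quiet
# or negotiate TLS on the submission port via STARTTLS:
openssl s_client -connect mail.example.com:587 -starttls smtp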
https://code.launchpad.net/~stewart/drizzle/json-interface/+merge/59859
Currently a very early version, of course, but it's there in trunk if you want to play with it. Just have libcurl and libevent installed and you can submit queries via HTTP and JSON. Of course, the next steps are getting a true non-SQL interface going and seeing how people go with it.
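A purely hypothetical sketch of what submitting a query over such an interface might look like; the actual port, path, and parameter names are whatever the json-interface branch implements, so check its source before trying this:

# Hypothetical only: the port (8765) and the query parameter are
# assumptions for illustration, not the branch's documented API.
curl "http://localhost:8765/?query=SELECT+1"
# the response body would be the result set encoded as JSON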
To get the HTTP header information for specific client connections, use ngrep with a pattern or a regular expression that matches the packets.
install ngrep (example for debian / ubuntu):
apt-get install ngrep
This example dumps the HTTP headers of any connection matching the string "images" on port 80.
user@host:~# ngrep -qi -W normal '/images/' port 80
interface: lo (127.0.0.1/255.255.255.255)
match: /images/

T 10.1.1.199:62073 -> 127.0.0.1:80 [AP]
GET /images/globe_blogs.gif HTTP/1.1..Host: frederikkonietzny.de..User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; de; rv:1.9.2.12) Gecko/20101026 Firefox/3.6.12..Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8..Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3..Accept-Encoding: gzip,deflate..Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7..Keep-Alive: 115..Connection: keep-alive..Cookie: …
Some of you may have noticed that blob streaming has been merged into the main Drizzle tree recently. There are a few hooks inside the Drizzle kernel that PBMS uses, and everything else is just in the plugin.
For those not familiar with PBMS, it does two things: provide a place (not in the table) for BLOBs to be stored (locally on disk or even out to S3), and provide an HTTP interface to get and store BLOBs.
This means you can do really neat things such as have your BLOBs replicated, consistent and all those nice databasey things, as well as easily access them in a scalable way (everybody knows how to cache HTTP).
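As a purely illustrative sketch of what an HTTP BLOB interface enables (the port, URL layout, and reference format here are placeholders, not PBMS's actual API):

# Hypothetical sketch: port and URL layout are assumptions.
curl -T photo.jpg "http://dbhost:8080/mydb/mytable"             # store a BLOB, get back a reference
curl -o photo.jpg "http://dbhost:8080/mydb/mytable/<blob-ref>"  # fetch it by reference

Because fetches are plain HTTP GETs, any ordinary HTTP cache or CDN can sit in front of them.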
This is a great addition to the AlsoSQL arsenal of Drizzle. I'm looking forward to it advancing and being adopted (now much easier since it's in the main repository).
Today, I was looking for a quick way to see HTTP response codes of a bunch of urls. Naturally, I turned to the curl command, which I would usually use like this:
curl -IL "URL"
This command would send a HEAD request (-I), follow through all redirects (-L), and display some useful information in the end. Most of the time it's ideal:
curl -IL "http://www.google.com" HTTP/1.1 200 OK Date: Fri, 11 Jun 2010 03:58:55 GMT Expires: -1 Cache-Control: private, max-age=0 Content-Type: text/html; charset=ISO-8859-1 Server: gws X-XSS-Protection: 1; mode=block Transfer-Encoding: chunked
However, the server I was curling didn't support HEAD requests explicitly. Additionally, I was really only interested in HTTP status codes and not in the rest of the output. This means I would have to change my strategy and issue GET requests, ignoring HTML output completely.
Curl manual to the rescue. A few …
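The excerpt stops here, but curl's write-out feature is the natural fit for this problem; a hedged sketch of the kind of invocation the post is building toward (the author's exact flags may differ):

# Issue a GET, follow redirects (-L), discard the body (-o /dev/null),
# suppress the progress meter (-s), and print only the final status code.
curl -sL -o /dev/null -w "%{http_code}\n" "http://www.google.com"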
Introduction
StackOverflow is an amazing site for coding questions. It was created by Joel Spolsky of joelonsoftware.com, Jeff Atwood of codinghorror.com, and some other incredibly smart guys who truly care about user experience. I have been a total fan of SO since it went mainstream and it's now a borderline addiction (you can see my StackOverflow badge on the right sidebar).
The Story
Update 6/21/09: This server is currently under very heavy load (load average 10-200), even with caching plugins enabled. Please bear with me as I try to resolve the situation.
Feel free to …
Since day one when I joined Scribd, I had been thinking about the fact that 90+% of our traffic goes to the document view pages, which is a single action in our documents controller. I was wondering how we could improve this action's responsiveness and make our users happier.
A few times I created a git branch and hacked on this action, trying to implement some sort of page-level caching to make things faster. But each time the results weren't as good as I'd like them to be, so the branches sat there waiting for a better idea.
A few months ago a good friend of mine joined Scribd and we started thinking about this problem together. As the result of our brainstorming we managed to figure out which problems were preventing us from doing efficient caching: …