Smart Update Strategy in Percona Kubernetes Operator for Percona XtraDB Cluster

In Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) versions prior to 1.5.0, there were two methods for upgrading PXC clusters, both of which rely on the built-in StatefulSet update strategies: the manual OnDelete strategy and the semi-automatic RollingUpdate strategy. Since the Kubernetes Operator is about automating database management, and there are use cases for always keeping the database up to date, a new smart update strategy was implemented.

Smart Update Strategy

The smart update strategy enables automatic, context-aware upgrades of PXC clusters between minor versions. One of the use cases for automatic upgrades is if you want to get security …

[Read more]
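As a rough sketch of what the entry above describes, the relevant part of the Operator's deploy/cr.yaml (field names as documented for version 1.5.0; the values shown are illustrative, so check the documentation for your release) looks like this:

spec:
  updateStrategy: SmartUpdate            # instead of OnDelete or RollingUpdate
  upgradeOptions:
    versionServiceEndpoint: https://check.percona.com/versions
    apply: recommended                   # or "latest", a specific version, or "never"/"disabled"
    schedule: "0 4 * * *"                # cron schedule on which the Operator checks for new versions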
Updates to Percona Kubernetes Operator for Percona XtraDB Cluster

On July 21, 2020, Percona delivered an updated version of our Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) focused on easing deployment and operations management of a clustered MySQL environment. Included in the Percona Distribution for MySQL, our Operator is based on the best practices for MySQL cluster configuration and setup in Kubernetes. This update adds a variety of important new features including:

Smart Update to Safely and Reliably Upgrade your PXC Environment Automatically
We implemented a new update strategy called Smart Update. Smart Update is aware of the context of your environment and minimizes the number of failover events that need to occur to fully upgrade a …

[Read more]
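If a cluster is already running, one way to check which update strategy it currently uses, and to switch it to Smart Update, is via kubectl (the cluster name cluster1 is an assumption, matching the examples further down this page):

kubectl get pxc cluster1 -o jsonpath='{.spec.updateStrategy}'
kubectl patch pxc cluster1 --type=merge -p '{"spec":{"updateStrategy":"SmartUpdate"}}'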
Backing Up Percona Kubernetes Operator for Percona XtraDB Cluster Databases to Google Cloud Storage

The Percona Kubernetes Operator for Percona XtraDB Cluster can send backups to Amazon S3 or S3-compatible storage. And every now and then at Support, we are asked how to send backups to Google Cloud Storage.

Google Cloud Storage offers an “interoperability mode” which is S3-compatible. However, there are a few details to take care of when using it.

Google Cloud Storage Configuration

First, select “Settings” under “Storage” in the Navigation Menu. Under Settings, select the Interoperability tab. If Interoperability is not yet enabled, click Enable Interoperability Access. This turns on the S3-compatible interface to Google Cloud Storage.

After enabling S3-compatible storage, an access key needs to be generated. There are two options: Access keys can be tied to Service accounts or User accounts. For …

[Read more]
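On the Operator side, the interoperability endpoint is then configured like any other S3-compatible storage. A minimal sketch, with bucket, secret, and storage names as placeholders: the HMAC key pair generated above goes into a Kubernetes Secret under the usual S3 credential keys,

kubectl create secret generic gcs-backup-credentials \
  --from-literal=AWS_ACCESS_KEY_ID=<interoperability access key> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<interoperability secret>

and the storage is declared in the backup section of deploy/cr.yaml, pointing at the Google endpoint:

backup:
  storages:
    gcs-interop:
      type: s3
      s3:
        bucket: my-pxc-backups                     # GCS bucket name
        credentialsSecret: gcs-backup-credentials  # Secret created above
        endpointUrl: https://storage.googleapis.com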
Scaling the Percona Kubernetes Operator for Percona XtraDB Cluster

You got yourself a Kubernetes cluster and are now testing our Percona Kubernetes Operator for Percona XtraDB Cluster. Everything is working great, and you decide that you want to increase the number of Percona XtraDB Cluster (PXC) pods from the default 3 to, let's say, 5 pods.

It’s just a matter of running the following command:

kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/pxc/size", "value": 5 }]'

Good, you ran the command without issues, and now you will have 5 pxc pods! Right? Let's check how the pods are being created:

kubectl get pods | grep pxc
cluster1-pxc-0                                     1/1     Running   0          25m
cluster1-pxc-1                                     1/1     Running   0          23m
cluster1-pxc-2                                     1/1     Running …
[Read more]
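The same change can also be made declaratively instead of patching: edit the size field in the custom resource and re-apply it. The file name deploy/cr.yaml below is the one shipped with the Operator; adjust it to wherever your copy lives.

spec:
  pxc:
    size: 5        # desired number of PXC pods

kubectl apply -f deploy/cr.yaml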
ProxySQL Behavior in the Percona Kubernetes Operator for Percona XtraDB Cluster

The Percona Kubernetes Operator for Percona XtraDB Cluster (PXC) comes with ProxySQL as part of the deal. And to be honest, the behavior of ProxySQL is pretty much the same as in a regular non-Kubernetes deployment. So why bother to write a blog about it? Because what happens around ProxySQL in the context of the Operator is actually interesting.

ProxySQL is deployed in its own Pod (which can be scaled just like the PXC Pods). Each ProxySQL Pod has its own ProxySQL container and a sidecar container. If you are curious, you can find out which node holds the Pod by running:

kubectl describe pod cluster1-proxysql-0 | grep Node:
Node: ip-192-168-37-111.ec2.internal/192.168.37.111

Log in to that node and list the running containers. You will see something like this:

[root@ip-192-168-37-111 ~]# docker ps | grep -i proxysql …
[Read more]
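Since the ProxySQL Pods can be scaled just like the PXC Pods, the kubectl patch pattern used earlier on this page works here as well, targeting the proxysql section of the spec (cluster name cluster1 assumed):

kubectl patch pxc cluster1 --type='json' -p='[{"op": "replace", "path": "/spec/proxysql/size", "value": 3 }]'
kubectl get pods | grep proxysql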
MySQL 8.0 InnoDB Cluster with WordPress in OCI – part III

With this post we are reaching the end of our journey to HA for WordPress & MySQL 8.0 on OCI.

If you have not read the two previous articles, now is the right time to do so.

We started this journey using MySQL InnoDB ReplicaSet, where only 2 servers are sufficient but which doesn't provide automatic fail-over.

In this article we will upgrade our InnoDB ReplicaSet to …

[Read more]
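For reference, creating an InnoDB Cluster with the MySQL Shell AdminAPI generally looks like the session below; user and host names are placeholders, and the exact steps for moving from an existing ReplicaSet are covered in the article itself.

$ mysqlsh admin@mysql1
JS> dba.configureInstance('admin@mysql1')         // verify/fix settings required for Group Replication
JS> var cluster = dba.createCluster('myCluster')  // create the cluster on the primary
JS> cluster.addInstance('admin@mysql2')           // add the remaining members
JS> cluster.addInstance('admin@mysql3')
JS> cluster.status()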
MySQL 8.0 InnoDB ReplicaSet with WordPress in OCI – part II

This article is the second part of our journey to WordPress and MySQL 8.0 High Availability on OCI. The first part can be read here.

We ended part I with one webserver hosting WordPress. That WordPress was connecting locally to MySQL Router using the HyperDB add-on, which allows splitting reads & writes across MySQL servers using replication. And finally, we had one MySQL InnoDB ReplicaSet of two members …

[Read more]
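As a reminder of the plumbing involved, a local MySQL Router is typically bootstrapped against the ReplicaSet roughly like this (user and host are placeholders); WordPress with HyperDB then connects to the Router's default read-write and read-only ports:

$ sudo mysqlrouter --bootstrap admin@mysql1:3306 --user=mysqlrouter
$ sudo systemctl start mysqlrouter
# by default, 127.0.0.1:6446 routes writes to the primary and 127.0.0.1:6447 routes reads to the secondary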
MySQL 8.0 InnoDB ReplicaSet with WordPress in OCI

Today's article is again related to WordPress and MySQL 8.0. We will see how we can set up MySQL InnoDB ReplicaSet and configure WordPress to split the load across both MySQL instances: reads and writes will be split between the Primary and the Secondary member of our ReplicaSet.

This is the first part of our journey to achieve HA for our WordPress site on OCI while using all the MySQL servers we have deployed. We don't want a server sitting idle, just waiting to take over in case of an incident.

MySQL InnoDB ReplicaSet

First some words about MySQL InnoDB ReplicaSet.

The ease of use of

[Read more]
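A bare-bones InnoDB ReplicaSet setup with the MySQL Shell AdminAPI (available in MySQL 8.0.19 and later) looks something like the session below; host names are placeholders, and the article walks through the full configuration.

$ mysqlsh admin@mysql1
JS> dba.configureReplicaSetInstance('admin@mysql1')  // prepare the instance (server_id, GTID mode, etc.)
JS> var rs = dba.createReplicaSet('myReplicaSet')    // the current instance becomes the primary
JS> rs.addInstance('admin@mysql2')                   // add the secondary, replicating from the primary
JS> rs.status()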
Exploring MySQL Binlog Server – Ripple

MySQL does not limit the number of slaves that you can connect to the master server in a replication topology. However, as the number of slaves increases, they take a toll on the master's resources, because the binary logs need to be served to different slaves working at different speeds. If the data churn on the master is high, serving the binary logs alone could saturate the master's network interface.

A classic solution for this problem is to deploy a binlog server – an intermediate proxy server that sits between the master and its slaves. The binlog server is set up as a slave to the master, and in turn, acts as a master to the original set of slaves. It receives binary log events from the master, does not apply these events, but serves them to all the other slaves. This way, the load on the master is tremendously reduced, and at the same time, the binlog server serves …

[Read more]
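To make the resulting topology concrete: once the binlog server is listening on its configured port, each slave is simply repointed at it with standard replication commands. Host and port below are placeholders, and GTID auto-positioning is assumed, which is the mode Ripple works with.

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='binlog-server.example.com', MASTER_PORT=15000,
    ->   MASTER_USER='repl', MASTER_PASSWORD='***', MASTER_AUTO_POSITION=1;
mysql> START SLAVE;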
Backup and Restore in Percona Kubernetes Operator for Percona XtraDB Cluster

Database backups are a fundamental requirement in almost every implementation, no matter the size of the company or the nature of the application. Taking a backup should be a simple task that can be automated to ensure it's done consistently and on schedule. Percona has an enterprise-grade backup tool, Percona XtraBackup, that can be used to accomplish these tasks. Percona also has the Percona Kubernetes Operator for Percona XtraDB Cluster (PXC Operator), which has Percona XtraBackup built into it and supports both automated and on-demand backups. Today we will explore taking backups and restoring these backups using the PXC Operator deployed …

[Read more]
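As a taste of what the post covers, both on-demand backups and restores are driven by their own custom resources in this Operator. A minimal sketch, with cluster, storage, and backup names as placeholders; an on-demand backup object looks like

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: cluster1
  storageName: s3-us-west        # one of the storages defined in cr.yaml

and a restore references that backup by name:

apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterRestore
metadata:
  name: restore1
spec:
  pxcCluster: cluster1
  backupName: backup1            # the backup object created above

kubectl apply -f backup.yaml     # creates the object and kicks off the backup (or restore) job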