Displaying posts with tag: Clustering
Global Read-Scaling using Continuent Clustering

Did you know that Continuent Clustering supports clusters at multiple sites worldwide, with either active-active or active-passive replication meshing them together?

Not only that, but we support a flexible hybrid model that allows for a blended architecture using any combination of node types. So mix-and-match your highly available database layer on bare metal, Amazon Web Services (AWS), Azure, Google Cloud, VMware, etc.

In this article we will discuss using the Active/Passive model to scale reads worldwide.

The model is simple: select one site as the Primary where all writes will happen. The rest of the sites will pull events as quickly as possible over the WAN and make the data available to all local clients. This means your application gets the best of both worlds:

  • Simple deployment with no application changes needed. All writes …
[Read more]
Mastering Continuent Clustering Series: Converting a standalone cluster to a Composite Primary/DR topology using INI configuration

In this blog post, we demonstrate how to convert a single standalone cluster into a Composite Primary/DR topology running in two data centers.

Our example starting cluster has 5 nodes (1 master and 4 slaves) and uses the service name alpha. Our target cluster will have 6 nodes (3 per cluster) in 2 member clusters, alpha_east and alpha_west, within the composite service alpha.

This means that we will reuse the existing service name alpha as the name of the new composite service, and create two new service names, one for each cluster (alpha_east and alpha_west).

Below is an INI file extract example for our starting standalone cluster with 5 nodes:

[defaults]
...

[alpha]
connectors=db1,db2,db3,db4,db5
master=db1
members=db1,db2,db3,db4,db5
topology=clustered
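
For comparison, here is a hedged sketch of what the target INI extract could look like once the conversion is complete. The extra host db6 and the host-to-cluster assignments are illustrative assumptions, and options specific to the DR side are omitted here; the steps below remain authoritative:

[defaults]
...

[alpha_east]
connectors=db1,db2,db3
master=db1
members=db1,db2,db3
topology=clustered

[alpha_west]
connectors=db4,db5,db6
master=db4
members=db4,db5,db6
topology=clustered

[alpha]
composite-datasources=alpha_east,alpha_west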

To convert the above configuration to a Composite Primary/DR topology:

  1. First, stop all services on all existing nodes:
    shell> stopall
    
[Read more]
Mastering Continuent Clustering Series: Tungsten and SELinux, a Case Study

In this blog post, we talk about what happened during an installation of the Tungsten Cluster into an environment where SELinux was running and misconfigured.

An attempt to execute `tpm install` on v5.3.2 recently failed with the below error:

ERROR >> node3_production_customer_com >> Unable to run 'sudo systemctl status mysqld.service' or the database server is not running (DatasourceBootScriptCheck) 
Update the /etc/sudoers file or disable sudo by adding --enable-sudo-access=false 

Worse, the customer reported that the same check had appeared only as a WARNING in their Dev and Staging tests. So we checked, and it seemed we were able to access systemctl properly:

shell> sudo systemctl status mysqld.service
● mysqld.service - MySQL Percona Server
Loaded: loaded (/usr/lib/systemd/system/mysqld.service; enabled; vendor preset: disabled)
Active: activating (start-post) since Tue 2018-06-19 17:46:19 BST; 1min 15s ago …
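
When systemctl itself responds cleanly but installation checks still fail, a quick next step is to look at SELinux directly. The commands below are a general-purpose sketch rather than output from this case study, and the MySQL data directory path is an assumption that may differ in your environment:

shell> getenforce                        # Enforcing, Permissive, or Disabled
shell> sestatus                          # overall SELinux status and loaded policy
shell> ls -Z /var/lib/mysql              # SELinux contexts on the (assumed) MySQL datadir
shell> sudo ausearch -m avc -ts recent   # recent SELinux denials, if auditd is running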
[Read more]
Mastering Continuent Clustering Series: Configuring Startup on Boot

In this blog post, we talk about how to configure automatic start at boot time for the Tungsten Clustering components.

By default, Tungsten Clustering does not start automatically on boot. To enable Tungsten Clustering to start at boot time, use the deployall script provided to create the necessary boot scripts:

shell> sudo /opt/continuent/tungsten/cluster-home/bin/deployall

To disable automatic startup at boot time, use the undeployall command:

shell> sudo /opt/continuent/tungsten/cluster-home/bin/undeployall

For Multisite/Multimaster deployments specifically, there are separate cross-site replicators running. In this case, a custom startup script must be created; otherwise, the replicator will be unable to start, as it has been configured in a different directory.

  1. Create a link from the Tungsten Replicator service startup script in the operating system …
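
    As a hedged illustration of that step only: assuming the cross-site replicator was installed under the default /opt/replicator directory, the link and registration might look like the following (adjust the paths and the init service name to your environment):

    shell> sudo ln -s /opt/replicator/tungsten/tungsten-replicator/bin/replicator /etc/init.d/mmreplicator
    shell> sudo chkconfig --add mmreplicator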
[Read more]
How Bluefin ensures 24/7/365 operation and application availability for PayConex and Decryptx with Continuent Clustering

Join MC Brown, VP Products at Continuent, on August 8th for our new webinar on high availability and disaster recovery for MySQL, MariaDB and Percona Server with Continuent Clustering. Learn how Bluefin Payment Systems provides 24/7/365 operation and application availability for their PayConex payment gateway and Decryptx decryption-as-a-service, essential for Point-Of-Sale solutions in retail, mobile, call centers and kiosks.

We’ll discuss why Bluefin uses Continuent Clustering, and how Bluefin runs two co-located data centers with multimaster replication between the clusters in the two data centers, with full failover within each cluster and between clusters, handling 350 million records each month.

Tune in …

[Read more]
Mastering Continuent Clustering Series: Automatic Reconnect in the Tungsten Connector

In this blog post, we talk about the Automatic Reconnect feature in the Tungsten Connector.

Automatic reconnect enables the Connector to re-establish a connection in the event of a transient failure. Under specific circumstances, the Connector will also retry the query.

Connector automatic reconnect is enabled by default in Proxy and SmartScale modes.

To disable automatic reconnect, use the following tpm command option on the command line (remove the leading hyphens when placing it inside an INI file):

--connector-autoreconnect=false
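
For example, in an INI-based installation the option would typically sit under [defaults] or the relevant service section (the service name alpha below is illustrative), followed by a tpm update to apply the change:

[alpha]
connector-autoreconnect=false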

This feature is not available while running in Bridge mode. Use the tpm command option --connector-bridge-mode=false to disable Bridge mode.

Automatic reconnect enables retries of statements under the following circumstances:

  • not in bridge mode
  • not inside a transaction
  • no temp table has been created
[Read more]
Mastering Continuent Clustering Series: Manual Switch Behavior Tuning in the Tungsten Connector

In this blog post, we talk about how existing client connections are handled by the Tungsten Connector when a manual master role switch is invoked and how to adjust that behavior.

When a graceful switch is invoked via cctrl or the Tungsten Dashboard, by default the Connector will wait for five (5) seconds to allow in-flight activities to complete before forcibly disconnecting all active connections from the application side, no matter what type of query was in use.

If connections still exist after the timeout interval, they are forcibly closed, and the application will get back an error.

This configuration setting ONLY applies to a manual switch. During a failover caused by loss of MySQL availability, there is no wait and all connections are force-closed immediately.
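
For context, a graceful switch is typically issued from within cctrl, for example (the service and target host names below are illustrative):

shell> cctrl
[LOGICAL] /alpha > switch to db2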

This timeout is adjusted via the tpm option …

[Read more]
Mastering Continuent Clustering Series: Experience the Power of the Tungsten Connector, an Intelligent MySQL Proxy

In this blog post, we talk about the basic function and features of the Tungsten Connector.

The Tungsten Connector is an intelligent MySQL proxy that provides key high-availability and read-scaling features. This includes the ability to route MySQL queries by inspecting them in-flight.

The most important function of the Connector is failover handling. When the cluster detects a failed master because the MySQL server port is no longer reachable, the Connectors are signaled and traffic is re-routed to the newly elected Master node.

Next is the ability to route MySQL queries based on various factors. In the default Bridge mode, traffic is routed at the TCP layer, and read-only queries must be directed to a different port (normally 3306 for writes and 3307 for reads).
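
In practice, this means an application points its write and read connections at the two ports on the Connector host; for example (host and user names below are illustrative):

shell> mysql -h connector1 -P 3306 -u app_user -p   # read-write traffic, routed to the master
shell> mysql -h connector1 -P 3307 -u app_user -p   # read-only traffic, routed to a slave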

There are additional modes, Proxy/Direct and …

[Read more]
Mastering Continuent Clustering Series: Connection Handling in the Tungsten Connector

In this blog post, we talk about how query connections are handled by the Tungsten Connector, especially read-only connections.

There are multiple ways to configure session handling in the Connector. The three main modes are Bridge, Proxy/Direct and Proxy/SmartScale.

In Bridge mode, the data source to connect to is chosen ONCE for the lifetime of the connection, which means that the selection of a different node will only happen if a NEW connection is opened through the Connector.

So if your application reuses its connections, for example when using connection pooling, all traffic sent through that session will continue to land on the originally selected read slave.
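
One way to observe this is to compare the backend host reported by separate connections: each new connection may land on a different slave, while a single long-lived session keeps hitting the same one. The Connector host name, user, and read-only port below are illustrative:

shell> mysql -h connector1 -P 3307 -u app_user -p -e "SELECT @@hostname;"
shell> mysql -h connector1 -P 3307 -u app_user -p -e "SELECT @@hostname;"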

http://docs.continuent.com/tungsten-clustering-6.0/connector-bridgemode.html

The key difference is in how the slave latency checking is handled: …

[Read more]
Mastering Continuent Clustering Series: Global Clustering with Active/Active Meshed Replication

Did you know that Continuent Clustering supports clusters at multiple sites worldwide, with active-active replication meshing them together?

Not only that, but we support a flexible hybrid model that allows for a blended architecture using any combination of node types. So mix-and-match your highly available database layer on bare metal, Amazon Web Services (AWS), Azure, Google Cloud, VMware, etc.

The possibilities are endless, as is the business value. This strong topology allows you to have all the benefits of high availability with local reads and writes, while spreading that data globally to be accessible in all regions. Latency is limited only by the WAN link and the speed of the target node.

This aligns perfectly with the distributed Software-as-a-Service (SaaS) model where customers and data span the globe. Applications have access to ALL the …

[Read more]