Using Failover Manager v4
Failover Manager offers support for monitoring and failover of clusters with one or more standby servers. You can add or remove nodes from the cluster as your demand for resources grows or shrinks.
If a primary node reboots, Failover Manager might detect the database is down on the primary node and promote a standby node to the role of primary. If this happens, the Failover Manager agent on the rebooted primary node attempts to write a recovery.conf file to make sure Postgres doesn't start as a second primary. Therefore, you must start the Failover Manager agent before starting the database server. The agent starts in idle mode and checks to see if there is already a primary in the cluster. If there is a primary node, the agent verifies that a recovery.conf or standby.signal file exists, or creates recovery.conf if needed, to prevent the database from starting as a second primary.
Managing a Failover Manager cluster
Once configured, a Failover Manager cluster requires no regular maintenance. However, you can perform management tasks that a Failover Manager cluster might occasionally require.
By default, some of the efm commands must be invoked by efm or an OS superuser. An administrator can selectively permit users to invoke these commands by adding the user to the efm group. The commands are:
- efm allow-node
- efm disallow-node
- efm promote
- efm resume
- efm set-priority
- efm stop-cluster
- efm upgrade-conf
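For example, assuming a local OS account named alice (an illustrative name), an administrator could grant that access with:

usermod -a -G efm alice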
Starting the Failover Manager cluster
You can start the nodes of a Failover Manager cluster in any order.
To start the Failover Manager cluster on RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, assume superuser privileges, and invoke the command:
systemctl start edb-efm-4.<x>
If the cluster properties file for the node specifies that is.witness is true, the node starts as a witness node.

If the node is not a dedicated witness node, Failover Manager connects to the local database and invokes the pg_is_in_recovery() function. If the server responds false, the agent assumes the node is a primary node and assigns a virtual IP address to the node if applicable. If the server responds true, the Failover Manager agent assumes that the node is a standby server. If the server doesn't respond, the agent starts in an idle state.
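You can run the same check by hand to see how a local server reports its role. The connection details here are placeholders; substitute the port and user from your cluster properties file:

psql -h localhost -p <db_port> -U <db_user> -c "SELECT pg_is_in_recovery();"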
After joining the cluster, the Failover Manager agent checks the supplied database credentials to ensure that it can connect to all of the databases within the cluster. If the agent can't connect, the agent shuts down.
If a new primary or standby node joins a cluster, all of the existing nodes also confirm that they can connect to the database on the new node.
Note
If you are running /var/lock or /var/run on tmpfs (Temporary File System), make sure that the systemd service file for Failover Manager has a dependency on systemd-tmpfiles-setup.service.
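One way to express that dependency, sketched as a systemd drop-in created with systemctl edit edb-efm-4.<x> (the directive names are standard systemd syntax; adjust the unit name to your installed version):

[Unit]
Requires=systemd-tmpfiles-setup.service
After=systemd-tmpfiles-setup.service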
Adding nodes to a cluster
You can add a node to a Failover Manager cluster at any time. When you add a node to a cluster, you must modify the cluster to allow the new node, and then tell the new node how to find the cluster.
1. Unless auto.allow.hosts is set to true, use the efm allow-node command to add the address of the new node to the Failover Manager Allowed Node host list. When invoking the command, specify the cluster name and the address of the new node (a concrete example follows this list):

   efm allow-node <cluster_name> <address>

   For more information about using the efm allow-node command or controlling a Failover Manager service, see Using the efm utility.

2. Install a Failover Manager agent and configure the cluster properties file on the new node. For more information about modifying the properties file, see The cluster properties file.

3. Configure the cluster members file on the new node, adding an entry for the membership coordinator. For more information about modifying the cluster members file, see The cluster members file.

4. Assume superuser privileges on the new node, and start the Failover Manager agent. To start the Failover Manager cluster on RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, invoke the command:

   systemctl start edb-efm-4.<x>
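As a concrete illustration of step 1, with a cluster named acctg and a new node at 10.0.1.11 (both values are examples):

efm allow-node acctg 10.0.1.11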
When the new node joins the cluster, Failover Manager sends a notification to the administrator email provided in the user.email property and invokes the specified notification script.
Note
To be a useful standby for the current node, the node must be a standby in the PostgreSQL Streaming Replication scenario.
Changing the priority of a standby
If your Failover Manager cluster includes more than one standby server, you can use the efm set-priority command to influence the promotion priority of a standby node. Invoke the command on any existing member of the Failover Manager cluster, and specify a priority value after the IP address of the member.
For example, the following command instructs Failover Manager that the acctg cluster member that's monitoring 10.0.1.9 is the primary standby (1):
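efm set-priority acctg 10.0.1.9 1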
You can set the priority of a standby to 0 to make the standby nonpromotable. Setting the priority of a standby to a value greater than 0 overrides a property value of promotable=false.
For example, suppose the properties file on node 10.0.1.10 includes a setting of promotable=false, and you then use efm set-priority to assign 10.0.1.10 a promotion priority so that it can be used in the event of a failover. The value designated by the efm set-priority command overrides the value in the property file.
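For instance, assuming the same acctg cluster and an illustrative priority of 1:

efm set-priority acctg 10.0.1.10 1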
In the event of a failover, Failover Manager first retrieves information from Postgres streaming replication to confirm which standby node has the most recent data and promote the node with the least chance of data loss. If two standby nodes contain equally up-to-date data, the node with a higher user-specified priority value is promoted to primary unless use.replay.tiebreaker is set to true. To check the priority value of your standby nodes, use the command:
efm cluster-status <cluster_name>
Note
The promotion priority for nodes changes when a new primary is promoted.
If the efm set-priority command was used to change whether a standby is promotable, it may be reset to the value in the standby's properties file through promotion or cluster splits and rejoins. If the agent is restarted, the promotable status reverts to the value in the properties file.
Promoting a Failover Manager node
You can invoke efm promote on any node of a Failover Manager cluster to start a manual promotion of a standby database to primary database.
Perform manual promotion only during a maintenance window for your database cluster. If you don't have an up-to-date standby database available, you are prompted before continuing. To start a manual promotion, assume the identity of efm or the OS superuser, and invoke the command:
efm promote <cluster_name> [-switchover] [-sourcenode <address>] [-quiet] [-noscripts]
Where:
- <cluster_name> is the name of the Failover Manager cluster.
- Include the -switchover option to reconfigure the original primary as a standby. If you include the -switchover keyword, the cluster must include a primary node and at least one standby, and the nodes must be in sync.
- Include the -sourcenode keyword to specify the node from which to copy the recovery settings to the primary.
- Include the -quiet keyword to suppress notifications during switchover.
- Include the -noscripts keyword to instruct Failover Manager not to invoke fencing and post-promotion scripts.
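For example, a switchover for a cluster named acctg that copies recovery settings from the standby at 10.0.1.9 (both values are illustrative) looks like this:

efm promote acctg -switchover -sourcenode 10.0.1.9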
During switchover:
- For server versions 11 and prior, the recovery.conf file is copied from an existing standby to the primary node. For server version 12 and later, the primary_conninfo and restore_command parameters are copied and stored in memory.
- The primary database is stopped.
- If you are using a VIP, the address is released from the primary node.
- A standby is promoted to replace the primary node and acquires the VIP.
- The address of the new primary node is added to the recovery.conf file, or the primary_conninfo details are stored in memory.
- If the application.name property is set for this node, the application_name property is added to the recovery.conf file, or the primary_conninfo information is stored in memory.
- If you're using server version 12 or later, the recovery settings that were stored in memory are written to the postgresql.auto.conf file. A standby.signal file is created.
- The old primary is started; the agent resumes monitoring it as a standby.
During a promotion, the primary agent releases the virtual IP address. If it isn't a switchover, a recovery.conf file is created in the directory specified by the db.data.dir property. The recovery.conf file is used to prevent the old primary database from starting until the file is removed, preventing the node from starting as a second primary in the cluster. If the promotion is part of a switchover, recovery settings are handled as described above.
The primary agent remains running and assumes a status of Idle.
The standby agent confirms that the virtual IP address is no longer in use before pinging a well-known address to ensure that the agent isn't isolated from the network. The standby agent runs the fencing script and promotes the standby database to primary. The standby agent then assigns the virtual IP address to the standby node and runs the post-promotion script (if applicable).
This command instructs the service to ignore the value specified in the auto.failover parameter in the cluster properties file.
To return a node to the role of primary, place the node first in the promotion list:
efm set-priority <cluster_name> <address> <priority>
Then, perform a manual promotion:
efm promote <cluster_name> -switchover
For more information about the efm utility, see Using the efm utility.
Stopping a Failover Manager agent
When you stop an agent, Failover Manager removes the node's address from the cluster members list on all of the running nodes of the cluster but doesn't remove the address from the Failover Manager Allowed node host list.
To stop the Failover Manager agent on RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, assume superuser privileges and invoke the command:
systemctl stop edb-efm-4.<x>
Until you invoke the efm disallow-node command (removing the node's address from the Allowed Node host list), you can use the service edb-efm-4.<x> start command to restart the node later without first running the efm allow-node command again.

Stopping an agent doesn't signal the cluster that the agent has failed unless the primary.shutdown.as.failure property is set to true.
Stopping a Failover Manager cluster
To stop a Failover Manager cluster, connect to any node of a Failover Manager cluster, assume the identity of efm or the OS superuser, and invoke the command:
efm stop-cluster <cluster_name>
The command causes all Failover Manager agents to exit. Terminating the Failover Manager agents completely disables all failover functionality.
Note
When you invoke the efm stop-cluster command, all authorized node information is lost from the Allowed Node host list.
Removing a node from a cluster
The efm disallow-node command removes the IP address of a node from the Failover Manager Allowed Node host list. Assume the identity of efm or the OS superuser on any existing node that's currently part of the running cluster. Then invoke the efm disallow-node command, specifying the cluster name and the IP address of the node:
efm disallow-node <cluster_name> <address>
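For example, to remove a node at 10.0.1.11 from the acctg cluster (both values are illustrative):

efm disallow-node acctg 10.0.1.11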
The efm disallow-node command doesn't stop a running agent. The service continues to run on the node until you stop the agent. If the agent or cluster is later stopped, the node isn't allowed to rejoin the cluster and is removed from the failover priority list. It becomes ineligible for promotion.

After invoking the efm disallow-node command, you must use the efm allow-node command to add the node to the cluster again.
Running multiple agents on a single node
You can monitor multiple database clusters that reside on the same host by running multiple primary or standby agents on that Failover Manager node. You can also run multiple witness agents on a single node. To configure Failover Manager to monitor more than one database cluster, while ensuring that Failover Manager agents from different clusters don't interfere with each other:
- Create a cluster properties file for each member of each cluster that defines a unique set of properties and the role of the node within the cluster.
- Create a cluster members file for each member of each cluster that lists the members of the cluster.
- Customize the unit file (on a RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x system) for each cluster to specify the names of the cluster properties and the cluster members files.
- Start the services for each cluster.
These examples use two database clusters (acctg and sales) running on the same node:
- Data for acctg resides in /opt/pgdata1; its server is monitoring port 5444.
- Data for sales resides in /opt/pgdata2; its server is monitoring port 5445.
To run a Failover Manager agent for both of these database clusters, use the efm.properties.in template to create two properties files. Each cluster properties file must have a unique name. This example creates acctg.properties and sales.properties to match the acctg and sales database clusters.
The following parameters must be unique in each cluster properties file:
- admin.port
- bind.address
- db.port
- db.data.dir
- virtual.ip (if used)
- db.service.name (if used)
In each cluster properties file, the db.port parameter specifies a unique value for each cluster. The db.user and db.database parameters can have the same value or a unique value. For example, the acctg.properties file can specify:
db.user=efm_user
db.password.encrypted=7c801b32a05c0c5cb2ad4ffbda5e8f9a
db.port=5444
db.database=acctg_db
While the sales.properties file can specify:
db.user=efm_user
db.password.encrypted=e003fea651a8b4a80fb248a22b36f334
db.port=5445
db.database=sales_db
Some parameters require special attention when setting up more than one Failover Manager cluster agent on the same node. If multiple agents reside on the same node, each port must be unique. Any two ports can work, but it's easier to keep the information clear if using ports that aren't too close to each other.
When creating the cluster properties file for each cluster, the db.data.dir parameters must also specify values that are unique for each respective database cluster.
Use the following parameters when assigning the virtual IP address to a node. If your Failover Manager cluster doesn't use a virtual IP address, leave these parameters blank.
- virtual.ip
- virtual.ip.interface
- virtual.ip.prefix

These parameter values are determined by the virtual IP addresses being used and can be the same for both acctg.properties and sales.properties.
After creating the acctg.properties and sales.properties files, create a service script or unit file for each cluster that points to the respective property files. This step is platform specific. If you're using RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, see RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x.
Note
If you're using a unit file, manually update the file to reflect the new service name when you upgrade Failover Manager.
RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x
If you're using RHEL/CentOS 7.x or RHEL/Rocky Linux/AlmaLinux 8.x, copy the service file /usr/lib/systemd/system/edb-efm-4.<x>.service to /etc/systemd/system with a new name that's unique for each cluster.

For example, if you have two clusters named acctg and sales managed by Failover Manager 4.7, the unit file names might be efm-acctg.service and efm-sales.service. You can create them with:
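cp /usr/lib/systemd/system/edb-efm-4.7.service /etc/systemd/system/efm-acctg.service
cp /usr/lib/systemd/system/edb-efm-4.7.service /etc/systemd/system/efm-sales.service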
Then use systemctl edit to edit the CLUSTER variable in each unit file, changing the specified cluster name from efm to the new cluster name. Also update the value of the PIDFile parameter to match the new cluster name.
In this example, edit the acctg cluster by running systemctl edit efm-acctg.service and write:
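The following is a sketch of what the override might contain. It assumes the unit file sets the cluster name with an Environment= line and keeps its PID file under /run/efm-4.7; mirror the exact directive names and paths from your installed unit file:

[Service]
Environment=CLUSTER=acctg
PIDFile=/run/efm-4.7/acctg.pid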
Edit the sales cluster by running systemctl edit efm-sales.service and write:
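Again as a sketch, under the same assumptions about directive names and the PID file location:

[Service]
Environment=CLUSTER=sales
PIDFile=/run/efm-4.7/sales.pid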
Note
You can also edit the files in /etc/systemd/system directly, but then you have to run systemctl daemon-reload. This step is unnecessary when using systemctl edit to change the override files.
After saving the changes, enable the services:
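systemctl enable efm-acctg.service
systemctl enable efm-sales.service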
Then, use the new service scripts to start the agents. For example, to start the acctg agent:
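systemctl start efm-acctg.service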
For information about customizing a unit file, see Understanding and administering systemd.