9 Adding and Deleting Nodes and Instances

This chapter describes how to add and remove nodes and instances in Oracle Real Application Clusters (Oracle RAC) environments. You can use these methods when configuring a new Oracle RAC environment, or when resizing an existing Oracle RAC environment.

Note:

It is important that you perform each step in this chapter in the order shown.

Preparing the New Node

Before a node can be added to the cluster, you must perform the same preinstallation steps on the new node as you did for all the existing nodes in the cluster. This includes the following tasks:

  • Checking hardware compatibility

  • Configuring the operating system

  • Configuring SSH connectivity between the new node and the other cluster members

  • Configuring access to shared storage

  • Creating groups, users, and directories

Refer to Chapter 2, "Preparing Your Cluster" for details on these tasks.
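After completing these tasks, you can compare the new node's configuration against an existing cluster member before proceeding. A minimal sketch, assuming racnode1 is an existing member, racnode3 is the new node, and the oinstall and dba groups match your installation:

```shell
# Compare operating system, kernel, and user/group configuration of the
# new node against a reference node already in the cluster.
# Node and group names here are examples; substitute your own.
cluvfy comp peer -refnode racnode1 -n racnode3 \
    -orainv oinstall -osdba dba -verbose
```

Run this from the Grid_home/bin directory on the existing node; mismatches are reported per check in the verbose output.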

Verifying the New Node Meets the Prerequisites for Installation

When adding a node to an existing cluster, the new node must match the configuration of the other nodes in the cluster: it must run the same operating system and operating system version as the existing nodes, and it must use the same chip architecture (32-bit or 64-bit). However, computers of different speeds and sizes can coexist in the same cluster.

After you have configured the new node, use cluvfy to verify that all the requirements for installation have been met. To verify that the new node meets the hardware and software requirements, run the following command on an existing node (for example, either racnode1 or racnode2) from the Grid_home/bin directory:

cluvfy stage -pre crsinst -n racnode3 -verbose

Extending the Oracle Grid Infrastructure Home to the New Node

Now that the new node has been configured to support Oracle Clusterware, you use Oracle Universal Installer (OUI) to extend the Grid home to the node being added to your cluster.

This section assumes that you are adding a node named racnode3 and that you have successfully installed Oracle Clusterware on racnode1 in a nonshared home, where Grid_home represents the successfully installed Oracle Clusterware home.

To extend the Oracle Grid Infrastructure for a cluster home to include the new node:

  1. Verify that the new node has been properly prepared for an Oracle Clusterware installation by running the following cluvfy command on the racnode1 node:

    cluvfy stage -pre nodeadd -n racnode3 -verbose
    
  2. As the oracle user (owner of the Oracle Grid Infrastructure for a cluster software installation) on racnode1, go to Grid_home/oui/bin and run the addNode.sh script in silent mode:

    If you are using Grid Naming Service (GNS):

    ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"
    

    If you are not using Grid Naming Service (GNS):

    ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={racnode3-vip}"
    

    When running this command, the curly braces ({ }) are required; if you omit them, the command returns an error.

    Alternatively, you can use a response file instead of specifying all the arguments on the command line. See Oracle Clusterware Administration and Deployment Guide for more information about using response files.

  3. When the script finishes, run the root.sh script as the root user on the new node, racnode3, from the Grid home directory on that node.

  4. If you are not using Oracle Grid Naming Service (GNS), then you must add the name and address for racnode3 to DNS.
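Before continuing, you can confirm that the new node's names resolve correctly. A sketch, using the example host and VIP names from this chapter:

```shell
# Verify that the new node's public name and VIP name resolve in DNS
# (racnode3 and racnode3-vip are the example names used in this chapter).
nslookup racnode3
nslookup racnode3-vip
```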

You should now have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command on the newly configured node, racnode3:

$ cd /u01/app/11.2.0/grid/bin
$ ./cluvfy stage -post nodeadd -n racnode3 -verbose

Note:

Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications. Nodes with changed host names must be deleted from the cluster and added back with the new name.

Extending the Oracle RAC Home Directory

Now that you have extended the Grid home to the new node, you must extend the Oracle home on racnode1 to racnode3. The following steps assume that you have completed the tasks described in the previous sections, "Preparing the New Node" and "Extending the Oracle Grid Infrastructure Home to the New Node", and that racnode3 is a member node of the cluster to which racnode1 belongs.

The procedure for adding an Oracle home to the new node is very similar to the procedure you just completed for extending the Grid home to the new node.

To extend the Oracle RAC installation to include the new node:

  1. Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, replace Oracle_home with the location of your installed Oracle home directory.

  2. Go to the Oracle_home/oui/bin directory on racnode1 and run the addNode.sh script in silent mode as shown in the following example:

    $ cd /u01/app/oracle/product/11.2.0/dbhome_1/oui/bin
    $ ./addNode.sh -silent "CLUSTER_NEW_NODES={racnode3}"
    
  3. When the script finishes, run the root.sh script as the root user on the new node, racnode3, from the Oracle home directory on that node.

    For policy-managed databases with Oracle Managed Files (OMF) enabled, no further actions are needed.

    For a policy-managed database, when you add a new node to the cluster, the node is placed in the Free pool by default. If you increase the cardinality of the database's server pool, then an Oracle RAC instance is added to the new node, racnode3, and the node is moved into that server pool. No further action is necessary.

  4. Add shared storage for the undo tablespace and redo log files.

    If OMF is not enabled for your database, then you must manually add an undo tablespace and redo logs.

  5. If you have an administrator-managed database, then add a new instance on the new node as described in "Creating an Instance on the New Node".

    If you followed the installation instructions in this guide, then your cluster database is an administrator-managed database and stores the database files on Oracle Automatic Storage Management (Oracle ASM) with OMF enabled.
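If OMF is not enabled, the undo tablespace and redo log thread for the new instance can be created manually. The following is a sketch only: the thread number, group numbers, disk group name, and sizes are assumptions for a third instance storing its files in Oracle ASM.

```shell
# Run as a user that can connect as SYSDBA on an existing instance.
# Tablespace name, thread/group numbers, disk group, and sizes are
# placeholders; adjust them to match your database.
sqlplus / as sysdba <<'EOF'
CREATE UNDO TABLESPACE undotbs3 DATAFILE '+DATA' SIZE 500M AUTOEXTEND ON;
ALTER DATABASE ADD LOGFILE THREAD 3
  GROUP 5 ('+DATA') SIZE 50M,
  GROUP 6 ('+DATA') SIZE 50M;
ALTER DATABASE ENABLE PUBLIC THREAD 3;
EOF
```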

After completing these steps, you should have an installed Oracle home on the new node.

Adding the New Node to the Cluster using Enterprise Manager

If you followed the steps described in "Extending the Oracle RAC Home Directory", then the cluster node is added to the cluster by the addNode.sh script. After the software is started on the new node, it is detected by Oracle Enterprise Manager. If an Oracle Enterprise Manager agent was not installed on the new node, then an alert is issued for that host, with the message "Incomplete configuration".

Creating an Instance on the New Node

You can add an instance to the cluster using either the Instance Management option of Database Configuration Assistant (DBCA) or Enterprise Manager. Before using either of these options, you must first configure the new node to be a part of the cluster and install the software on the new node, as described in the previous sections.

There are two methods of adding an instance to the new node:

Note:

The steps described in this section require a license for the Enterprise Manager Provisioning Management pack. Refer to the Oracle Database Licensing Information for information about the availability of these features on your system.

Adding a New Instance for a Policy-Managed Database

To add an instance to a policy-managed database, you simply increase the cardinality of the server pool for the database. The database instance and Oracle ASM instance on the new node are created and configured automatically when a node is added to the server pool.

To add an instance to a policy-managed database using Enterprise Manager:

  1. From the Cluster Database Home page, click Server.

  2. Under the heading Change Database, click Add Instance.


    The Add Instance: Cluster Credentials page appears.

  3. Enter the credentials of the Oracle Clusterware software owner and the SYSASM user, then click Next.

    After the credentials have been validated, the Edit Server Pool page appears.

  4. Modify the server pool settings then click Submit.

  5. Click the Database tab to return to the Cluster Database Home page.

    Review the number of instances available for your cluster database.
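The same cardinality change can also be made from the command line with srvctl; the server pool name below is an assumption for illustration:

```shell
# Raise the maximum size of the database's server pool by one so that
# the new node receives an instance ("mypool" is a placeholder).
srvctl modify srvpool -g mypool -u 3
# Confirm the pool's new configuration and active servers.
srvctl status srvpool -g mypool -a
```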

Adding a New Instance for an Administrator-Managed Database

When adding an instance to an administrator-managed database, you must specify the name of the database instance and the node on which it should run.

To add an instance to an administrator-managed database using Enterprise Manager:

  1. From the Cluster Database Home page, click Server.

  2. Under the heading Change Database, click Add Instance.


    The Add Instance: Cluster Credentials page appears.

  3. Enter the credentials of the Oracle Clusterware software owner and the SYSASM user, then click Next.

    After the credentials have been validated, the Add Instance: Host page appears.

  4. In the Name of the database instance to be added field, either use the default instance name, or enter a different name for the database instance, such as racnode3.

  5. Select the node on which you want to create the new instance, then click Next.

    Note:

    This procedure assumes that the Oracle Database software is configured on the selected node and that there is no instance for the cluster database currently running on the selected node.

    After the selected host has been validated, the Add Instance: Review page appears.

  6. Review the information, then click Submit Job to proceed.

    A confirmation page appears.

  7. Click View Job to check on the status of the submitted job.

    The Job Run detail page appears.

  8. Click your browser's Refresh button until the job shows a status of Succeeded or Failed.

    If the job shows a status of Failed, then you can click the name of the step that failed to view the reason for the failure.

  9. Click the Database tab to return to the Cluster Database Home page.

    The number of instances available in the cluster database is increased by one.
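As an alternative to Enterprise Manager, DBCA can add the instance in silent mode. A sketch, where the database name, instance name, and credentials are assumptions:

```shell
# Add an instance for an administrator-managed database from the command
# line; gdbName, instanceName, and the password are placeholders.
dbca -silent -addInstance -nodeList racnode3 \
     -gdbName orcl.example.com -instanceName orcl3 \
     -sysDBAUserName sys -sysDBAPassword password
```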

Deleting an Instance From the Cluster Database

Sometimes, it might be necessary to remove a database instance from your cluster database. For example, to retire or repurpose a server, you first remove the database instance running on that server.

You can delete an instance from the cluster using either the Instance Management option of Database Configuration Assistant (DBCA) or Enterprise Manager. If you choose to use DBCA to delete an instance, then start the DBCA utility from a node that will remain part of the cluster.

There are two methods of deleting an instance from a cluster database:

Note:

The steps described in this section require a license for the Enterprise Manager Provisioning Management pack. Refer to the Oracle Database Licensing Information for information about the availability of these features on your system.

Deleting an Instance From a Policy-Managed Database

To delete an instance from a policy-managed database, you simply decrease the cardinality of the server pool for the database. The database instance and Oracle ASM instance on the deallocated node are removed from the cluster, and the node is either reassigned to another server pool or placed in the Free pool.

To delete a policy-managed database instance using Enterprise Manager:

  1. Log in to Oracle Enterprise Manager Database Control (Database Control) as a SYSDBA user.

  2. From the Cluster Database Home page, click Server.

  3. On the Server subpage, under the heading Change Database, click Delete Instance.


    The Delete Instance: Cluster Credentials page appears.

  4. Enter the credentials for the Oracle Database software owner and the SYSASM user, then click Next.

    The Edit Server Pool page appears.

  5. Modify the server pool settings then click Submit.

  6. Click the Database tab to return to the Cluster Database Home page.

    Review the number of instances available for your cluster database.
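The cardinality decrease can likewise be performed with srvctl; the pool and database names below are placeholders:

```shell
# Lower the maximum size of the server pool by one; the instance on the
# released node is stopped and removed automatically.
srvctl modify srvpool -g mypool -u 2
# Confirm which instances remain running ("orcl" is a placeholder).
srvctl status database -d orcl
```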

Deleting an Instance From an Administrator-Managed Database

To remove an instance from an administrator-managed database, you must specify the name of the database instance to delete and which node the instance is running on.

To delete an administrator-managed database instance using Enterprise Manager:

  1. Log in to Oracle Enterprise Manager Database Control (Database Control) as a SYSDBA user.

  2. From the Cluster Database Home page, click Server.

  3. On the Server subpage, under the heading Change Database, click Delete Instance.


    The Delete Instance: Cluster Credentials page appears.

  4. Enter the credentials for the Oracle Database software owner and the SYSASM user, then click Next.

    The Delete Instance: Database Instance page appears.

  5. Select the instance you want to delete, then click Next.


    After the host information has been validated, the Delete Instance: Review page appears.

  6. Review the information, and if correct, click Submit Job to continue. Otherwise, click Back and correct the information.

    A Confirmation page appears.

  7. Click View Job to view the status of the node deletion job.

    A Job Run detail page appears.

  8. Click your browser's Refresh button until the job shows a status of Succeeded or Failed.


    If the job shows a status of Failed, then you can click the name of the step that failed to view the reason for the failure.

  9. Click the Cluster tab to view the Cluster Home page.

    The number of instances in the cluster database is reduced by one.

Removing a Node From the Cluster

Removing a node from the cluster can be as easy as shutting down the server. If the node is not pinned and does not host any Oracle databases from Oracle Database 11g release 1 or earlier, then the node is automatically removed from the cluster when it is shut down. If the node was pinned, or if it hosts a database instance from an earlier release, then you must explicitly delete the node from the cluster.

To repurpose this server, you can restart the node with a different profile in place, or you can use other software to put a new software image on the server.
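You can check whether a node is pinned before relying on automatic removal:

```shell
# List cluster nodes with their status (-s) and pinned state (-t);
# only unpinned nodes leave the cluster automatically when shut down.
olsnodes -s -t
```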
