This chapter describes the system configuration tasks that you must complete before you start Oracle Universal Installer (OUI) to install Oracle Grid Infrastructure for a cluster, and that you may need to complete if you intend to install Oracle Real Application Clusters (Oracle RAC) on the cluster.
This chapter contains the following topics:
Creating Groups, Users and Paths for Oracle Grid Infrastructure
Running the Rootpre.sh Script on x86-64 with Oracle Solaris Cluster
Configuring Grid Infrastructure Software Owner User Environments
Requirements for Creating an Oracle Grid Infrastructure Home Directory
Caution:
Always create a backup of existing databases before starting any configuration change. If you have an existing Oracle installation, then record the version numbers, patches, and other configuration information, and review upgrade procedures for your existing installation. Review Oracle upgrade documentation before proceeding with installation to decide how you want to proceed.
You can upgrade Oracle Automatic Storage Management (Oracle ASM) 11g release 1 (11.1) without shutting down an Oracle RAC database by performing a rolling upgrade either of individual nodes, or of a set of nodes in the cluster. However, if you have a standalone database on a cluster that uses Oracle ASM, then you must shut down the standalone database before upgrading. If you are upgrading from Oracle ASM 10g, then you must shut down the entire Oracle ASM cluster to perform the upgrade.
If you have an existing Oracle ASM installation, then review Oracle upgrade documentation. The location of the Oracle ASM home changes in this release, and you may want to consider other configuration changes to simplify or customize storage administration. If you have an existing Oracle ASM home from a previous release, then it should be owned by the same user that you plan to use to upgrade Oracle Clusterware.
During rolling upgrades of the operating system, Oracle supports using different operating system binaries when both versions of the operating system are certified with the Oracle Database release you are using.
Note:
Using mixed operating system versions is only supported for the duration of an upgrade, over the period of a few hours. Oracle Clusterware does not support nodes that have processors with different instruction set architectures (ISAs) in the same cluster. Each node must be binary compatible with the other nodes in the cluster. For example, you cannot have one node using an Intel 64 processor and another node using an IA-64 (Itanium) processor in the same cluster. You could have one node using an Intel 64 processor and another node using an AMD64 processor in the same cluster because the processors use the same x86-64 ISA and run the same binary version of Oracle software. Your cluster can have nodes with CPUs of different speeds or sizes, but Oracle recommends that you use nodes with the same hardware configuration.
To find the most recent software updates, and to find best practices recommendations about preupgrade, postupgrade, compatibility, and interoperability, refer to "Oracle Upgrade Companion." "Oracle Upgrade Companion" is available through Note 785351.1 on My Oracle Support:
With Oracle Clusterware 11g release 2, Oracle Universal Installer (OUI) detects when the minimum requirements for an installation are not met, and creates shell scripts, called fixup scripts, to finish incomplete system configuration steps. If OUI detects an incomplete task, then it generates a fixup script (runfixup.sh). You can run the fixup script after you click the Fix and Check Again button.
You also can have CVU generate fixup scripts before installation.
See Also:
Oracle Clusterware Administration and Deployment Guide for information about using the cluvfy command

The fixup script does the following:
If necessary, sets kernel parameters to values required for successful installation, including:
Shared memory parameters.
Open file descriptor and UDP send/receive parameters.
Sets permissions on the Oracle Inventory (central inventory) directory.
Reconfigures primary and secondary group memberships for the installation owner, if necessary, for the Oracle Inventory directory and the operating system privileges groups.
Sets shell limits if necessary to required values.
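The fixup script makes these changes as root. Before or after running it, you can spot-check a couple of the affected settings from the installation owner's shell; the following is only an informal sketch of such checks, not part of the Oracle tooling:

```shell
# Informal spot-checks of settings the fixup script adjusts (run as the
# installation owner). These commands only display values; they change nothing.
ulimit -n    # open file descriptor limit for this shell
umask        # file-creation mask, relevant to inventory directory permissions
id -gn       # primary group of the current user (should be oinstall)
```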
If you have SSH configured between cluster member nodes for the user account that you will use for installation, then you can check your cluster configuration before installation and generate a fixup script to make operating system changes before starting the installation.
To do this, log in as the user account that will perform the installation, navigate to the staging area where the runcluvfy command is located, and use the following command syntax, where node is a comma-delimited list of nodes you want to make cluster members:
$ ./runcluvfy.sh stage -pre crsinst -n node -fixup -verbose
For example, if you intend to configure a two-node cluster with nodes node1 and node2, enter the following command:
$ ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose
During installation, you are required to perform tasks as root or as other users on remote terminals. Complete the following procedure for user accounts that you want to enable for remote display.
Note:
If you log in as another user (for example, oracle), then repeat this procedure for that user as well.

To enable remote display, complete one of the following procedures:
If you are installing the software from an X Window System workstation or X terminal, then:
Start a local terminal session, for example, an X terminal (xterm).
If you are installing the software on another system and using the system as an X11 display, then enter a command using the following syntax to enable remote hosts to display X applications on the local X server:
# xhost + RemoteHost
where RemoteHost is the fully qualified remote host name. For example:
# xhost + somehost.example.com
somehost.example.com being added to the access control list
If you are not installing the software on the local system, then use the ssh command to connect to the system where you want to install the software:
# ssh -Y RemoteHost
where RemoteHost is the fully qualified remote host name. The -Y flag ("yes") enables remote X11 clients to have full access to the original X11 display. For example:
# ssh -Y somehost.example.com
If you are not logged in as the root user, then enter the following command to switch the user to root:
$ su - root
password:
#
If you are installing the software from a PC or other system with X server software installed, then:
Note:
If necessary, refer to your X server documentation for more information about completing this procedure. Depending on the X server software that you are using, you may need to complete the tasks in a different order.

Start the X server software.
Configure the security settings of the X server software to permit remote hosts to display X applications on the local system.
Connect to the remote system where you want to install the software as the Oracle Grid Infrastructure for a cluster software owner (grid, oracle) and start a terminal session on that system, for example, an X terminal (xterm).
Open another terminal on the remote system, and log in as the root user on the remote system, so you can run scripts as root when prompted.
Log in as root, and use the following instructions to locate or create the Oracle Inventory group and a software owner for Oracle Grid Infrastructure.
Determining If the Oracle Inventory and Oracle Inventory Group Exists
Creating the Oracle Inventory Group If an Oracle Inventory Does Not Exist
Creating Job Role Separation Operating System Privileges Groups and Users
Note:
During an Oracle Grid Infrastructure installation, both Oracle Clusterware and Oracle Automatic Storage Management are installed. You no longer can have separate Oracle Clusterware installation owners and Oracle Automatic Storage Management installation owners.

When you install Oracle software on the system for the first time, OUI creates the oraInst.loc file. This file identifies the name of the Oracle Inventory group (by default, oinstall), and the path of the Oracle Central Inventory directory. An oraInst.loc file has contents similar to the following:
inventory_loc=central_inventory_location
inst_group=group
In the preceding example, central_inventory_location is the location of the Oracle central inventory, and group is the name of the group that has permissions to write to the central inventory (the OINSTALL group privilege).
If you have an existing Oracle central inventory, then ensure that you use the same Oracle Inventory for all Oracle software installations, and ensure that all Oracle software users you intend to use for installation have permissions to write to this directory.
To determine if you have an Oracle central inventory directory (oraInventory) on your system, enter the following command:
# more /var/opt/oracle/oraInst.loc
If the oraInst.loc file exists, then the output from this command is similar to the following:
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
In the previous output example:
The inventory_loc parameter shows the location of the Oracle Inventory.
The inst_group parameter shows the name of the Oracle Inventory group (in this example, oinstall).
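These two values can be pulled out of oraInst.loc with standard text tools. The sketch below works on a sample copy of the file so it can be tried anywhere; on a real Oracle Solaris system you would point it at /var/opt/oracle/oraInst.loc instead:

```shell
# Sketch: parse inventory_loc and inst_group from an oraInst.loc file.
# A sample file is created here; substitute /var/opt/oracle/oraInst.loc
# on a real system.
cat > /tmp/oraInst.loc.sample <<'EOF'
inventory_loc=/u01/app/oracle/oraInventory
inst_group=oinstall
EOF

inv_loc=$(awk -F= '$1 == "inventory_loc" {print $2}' /tmp/oraInst.loc.sample)
inv_grp=$(awk -F= '$1 == "inst_group" {print $2}' /tmp/oraInst.loc.sample)
echo "inventory: $inv_loc group: $inv_grp"
```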
Use the command grep groupname /etc/group to confirm that the group specified as the Oracle Inventory group still exists on the system. For example:
$ grep oinstall /etc/group
oinstall:x:1000:grid,oracle
If the oraInst.loc file does not exist, then create the Oracle Inventory group by entering a command similar to the following:
# /usr/sbin/groupadd -g 1000 oinstall
The preceding command creates the oraInventory group oinstall, with the group ID number 1000. Members of the oraInventory group are granted privileges to write to the Oracle central inventory (oraInventory).
By default, if an oraInventory group does not exist, then the installer lists the primary group of the installation owner for the Oracle Grid Infrastructure for a Cluster software as the oraInventory group. Ensure that this group is available as a primary group for all planned Oracle software installation owners.
Note:
Group and user IDs must be identical on all nodes in the cluster. Check to make sure that the group and user IDs you want to use are available on each cluster member node, and confirm that the primary group for each Oracle Grid Infrastructure for a Cluster installation owner has the same name and group ID.

You must create a software owner for Oracle Grid Infrastructure in the following circumstances:
If an Oracle software owner user does not exist; for example, if this is the first installation of Oracle software on the system
If an Oracle software owner user exists, but you want to use a different operating system user, with different group membership, to separate Oracle Grid Infrastructure administrative privileges from Oracle Database administrative privileges.
In Oracle documentation, a user created to own only Oracle Grid Infrastructure software installations is called the grid user. A user created to own either all Oracle installations, or only Oracle database installations, is called the oracle user.
If you intend to use multiple Oracle software owners for different Oracle Database homes, then Oracle recommends that you create a separate software owner for Oracle Grid Infrastructure software (Oracle Clusterware and Oracle ASM), and use that owner to run the Oracle Grid Infrastructure installation.
If you plan to install Oracle Database or Oracle RAC, then Oracle recommends that you create separate users for the Oracle Grid Infrastructure and the Oracle Database installations. If you use one installation owner, then when you want to perform administration tasks, you must change the value for $ORACLE_HOME to the instance you want to administer (Oracle ASM, in the Oracle Grid Infrastructure home, or the database in the Oracle home), using command syntax such as the following example, where /u01/app/11.2.0/grid is the Oracle Grid Infrastructure home:
$ ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
If you try to administer an instance using sqlplus, lsnrctl, or asmcmd commands while $ORACLE_HOME is set to a different binary path, then you encounter errors. When you start srvctl from a database home, $ORACLE_HOME should be set, or srvctl fails. However, if you are using srvctl in the Oracle Grid Infrastructure home, then $ORACLE_HOME is ignored, and the Oracle home path does not affect srvctl commands. You always have to change $ORACLE_HOME to the instance that you want to administer.
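The switching described above can be sketched as a pair of exports. The Grid home path comes from the example earlier in this section, while the database home path here is only a typical, assumed illustration:

```shell
# Sketch: switching $ORACLE_HOME between the Grid home and a database home
# before administering each instance. The database home path is an assumed
# example, not a value from this guide.
GRID_HOME=/u01/app/11.2.0/grid
DB_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

ORACLE_HOME=$GRID_HOME; export ORACLE_HOME   # now run asmcmd, sqlplus for ASM
ORACLE_HOME=$DB_HOME; export ORACLE_HOME     # now run sqlplus, lsnrctl for the DB
echo "$ORACLE_HOME"
```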
If you create separate Oracle software owners to create separate users and separate operating system privileges groups for different Oracle software installations, note that each of these users must have the Oracle central inventory group (the oraInventory group) as their primary group. Members of this group have write privileges to the Oracle central inventory (oraInventory) directory, and are also granted permissions for various Oracle Clusterware resources, OCR keys, directories in the Oracle Clusterware home to which DBAs need write access, and other necessary privileges. In Oracle documentation, this group is represented as oinstall in code examples.
Each Oracle software owner must be a member of the same central inventory group. Oracle recommends that you do not have more than one central inventory for Oracle installations. If an Oracle software owner has a different central inventory group, then you may corrupt the central inventory.
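One way to confirm that each planned owner resolves to the same primary group is to map the GID field in /etc/passwd back to a name in /etc/group. The sketch below runs against sample copies of those files so it is safe to try anywhere; on a real node you would read the system files instead, and the UID/GID values shown are only the example numbers used in this section:

```shell
# Sketch: confirm grid and oracle share the same primary (inventory) group.
# Sample files stand in for /etc/passwd and /etc/group; UIDs/GIDs are the
# example values from this section.
cat > /tmp/passwd.sample <<'EOF'
grid:x:1100:1000::/export/home/grid:/bin/bash
oracle:x:1101:1000::/export/home/oracle:/bin/bash
EOF
cat > /tmp/group.sample <<'EOF'
oinstall:x:1000:grid,oracle
dba:x:1031:oracle
EOF

for u in grid oracle; do
  gid=$(awk -F: -v u="$u" '$1 == u {print $4}' /tmp/passwd.sample)
  gname=$(awk -F: -v g="$gid" '$3 == g {print $1}' /tmp/group.sample)
  echo "$u primary group: $gname"
done
```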
Caution:
For Oracle Grid Infrastructure for a Cluster installations, note the following restrictions for the Oracle Grid Infrastructure binary home (Grid home):

It must not be placed under one of the Oracle base directories, including the Oracle base directory of the Oracle Grid Infrastructure installation owner.

It must not be placed in the home directory of an installation owner.

During installation, ownership of the path to the Grid home is changed to root. This change causes permission errors for other installations.
To determine whether an Oracle software owner user named oracle or grid exists, enter a command similar to the following (in this case, to determine if oracle exists):
# id -a oracle
If the user exists, then the output from this command is similar to the following:
uid=501(oracle) gid=501(oinstall) groups=502(dba),503(oper)
Determine whether you want to use the existing user, or create another user. The user and group ID numbers must be the same on each node you intend to make a cluster member node.
To use the existing user, ensure that the user's primary group is the Oracle Inventory group (oinstall). If this user account will be used for Oracle Database installations, then ensure that the Oracle account is also a member of the group you plan to designate as the OSDBA for Oracle ASM group (the group whose members are permitted to write to Oracle ASM storage).
If the Oracle software owner (oracle, grid) user does not exist, or if you require a new Oracle software owner user, then create it. If you want to use an existing user account, then modify it to ensure that the user ID and group IDs are the same on each cluster member node. The following procedures use grid as the name of the Oracle software owner, and dba as the OSASM group. To create separate system privilege groups to separate administration privileges, complete group creation before you create the user, as described in Section 2.4.5, "Creating Job Role Separation Operating System Privileges Groups and Users."
To create a grid installation owner account where you have an existing system privileges group (in this example, dba), whose members you want to have granted the SYSASM privilege to administer the Oracle ASM instance, enter a command similar to the following:
# /usr/sbin/useradd -u 1100 -g oinstall -G dba grid
In the preceding command:
The -u option specifies the user ID. Using this command flag is optional, as you can allow the system to provide you with an automatically generated user ID number. However, you must make note of the user ID number of the user you create for Oracle Grid Infrastructure, as you require it later during preinstallation, and you must have the same user ID number for this user on all nodes of the cluster.
The -g option specifies the primary group, which must be the Oracle Inventory group. For example: oinstall.
The -G option specifies the secondary group, which in this example is dba.
The secondary groups must include the OSASM group, whose members are granted the SYSASM privilege to administer the Oracle ASM instance. You can designate a unique group for the SYSASM system privilege, separate from database administrator groups, or you can designate one group as both the OSASM and OSDBA group, so that members of that group are granted the SYSASM and SYSDBA privileges to administer both the Oracle ASM instances and Oracle Database instances. In code examples, this group is asmadmin.
If you are creating this user to own both Oracle Grid Infrastructure and an Oracle Database installation, then this user must have the OSDBA for ASM group as a secondary group. In code examples, this group name is asmdba. Members of the OSDBA for ASM group are granted access to Oracle ASM storage. You must create an OSDBA for ASM group if you plan to have multiple databases accessing Oracle ASM storage, or you must use the same group as the OSDBA group for all databases and for the OSDBA for ASM group.
Use the usermod command to change existing user ID numbers and groups.
For example:
# id -a oracle
uid=501(oracle) gid=501(oracle) groups=501(oracle)
# /usr/sbin/usermod -u 1001 -g 1000 -G 1000,1001 oracle
# id -a oracle
uid=1001(oracle) gid=1000(oinstall) groups=1000(oinstall),1001(oracle)
Set the password of the user that will own Oracle Grid Infrastructure. For example:
# passwd grid
Repeat this procedure on all of the other nodes in the cluster.
Note:
If necessary, contact your system administrator before using or modifying an existing user.

Oracle recommends that you do not use the UID and GID defaults on each node, as group and user IDs likely will be different on each node. Instead, provide common assigned group and user IDs, and confirm that they are unused on any node before you create or modify groups and users.
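Because the IDs must match everywhere, a quick consistency check is to compare the id output for the owner gathered from each node. The sketch below compares two sample strings; on a live cluster you would capture them with something like ssh node2 id grid (the node names and values are illustrative assumptions):

```shell
# Sketch: verify the grid owner's UID and GID match across nodes.
# Sample strings stand in for the output of `id grid` on each node,
# e.g. as captured via: ssh node2 id grid
node1_id="uid=1100(grid) gid=1000(oinstall)"
node2_id="uid=1100(grid) gid=1000(oinstall)"

if [ "$node1_id" = "$node2_id" ]; then
  echo "grid UID/GID consistent across nodes"
else
  echo "MISMATCH: $node1_id vs $node2_id"
fi
```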
The Oracle base directory for the grid installation owner is the location where diagnostic and administrative logs, and other logs associated with Oracle ASM and Oracle Clusterware are stored.
If you have created a path for the Oracle Clusterware home that is compliant with Oracle Optimal Flexible Architecture (OFA) guidelines for Oracle software paths, then you do not need to create an Oracle base directory. When OUI finds an OFA-compliant path, it creates the Oracle base directory in that path.
For OUI to recognize the path as an Oracle software path, it must be in the form u[00-99]/app, and it must be writable by any member of the oraInventory (oinstall) group. The OFA path for the Oracle base is /u01/app/user, where user is the name of the software installation owner.
Oracle recommends that you create an Oracle Grid Infrastructure Grid home and Oracle base homes manually, particularly if you have separate Oracle Grid Infrastructure for a cluster and Oracle Database software owners, so that you can separate log files.
For example:
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# mkdir -p /u01/app/oracle
# chown grid:oinstall /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/grid
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
# chown -R grid:oinstall /u01
Note:
Placing Oracle Grid Infrastructure for a Cluster binaries on a cluster file system is not supported.

A job role separation privileges configuration of Oracle ASM is a configuration with groups and users that divide administrative access privileges to the Oracle ASM installation from the administrative users and groups associated with other Oracle installations. Administrative privileges access is granted by membership in separate operating system groups, and installation privileges are granted by using different installation owners for each Oracle installation.
Note:
This configuration is optional, to restrict user access to Oracle software by responsibility areas for different administrator users.

If you prefer, you can allocate operating system user privileges so that you can use one administrative user and one group for operating system authentication for all system privileges on the storage and database tiers.
For example, you can designate the oracle user to be the installation owner for all Oracle software, and designate oinstall to be the group whose members are granted all system privileges for Oracle Clusterware, Oracle ASM, and all Oracle Databases on the servers, and all privileges as installation owners. This group must also be the Oracle Inventory group.
Oracle recommends that you use at least two groups: a system privileges group whose members are granted administrative system privileges, and an installation owner group (the oraInventory group) to provide separate installation privileges (the OINSTALL privilege). To simplify using the defaults for Oracle tools such as Cluster Verification Utility, if you do choose to use a single operating system group to grant all system privileges and the right to write to the oraInventory, then that group name should be oinstall.
Note:
To use a directory service, such as Network Information Services (NIS), refer to your operating system documentation for further information.

This section provides an overview of how to create users and groups to use job role separation. Log in as root to create these groups and users.
Oracle recommends that you create the following operating system groups and users for all installations where you create separate software installation owners:
One software owner to own each Oracle software product (typically, oracle, for the database software owner user, and grid for Oracle Grid Infrastructure).
You must create at least one software owner the first time you install Oracle software on the system. This user owns the Oracle binaries of the Oracle Grid Infrastructure software, and you can also make this user the owner of the Oracle Database or Oracle RAC binaries.
Oracle software owners must have the Oracle Inventory group as their primary group, so that each Oracle software installation owner can write to the central inventory (oraInventory), and so that OCR and Oracle Clusterware resource permissions are set correctly. The database software owner must also have the OSDBA group and (if you create it) the OSOPER group as secondary groups. In Oracle documentation, when Oracle software owner users are referred to, they are called oracle users.
Oracle recommends that you create separate software owner users to own each Oracle software installation. Oracle particularly recommends that you do this if you intend to install multiple databases on the system.
In Oracle documentation, a user created to own the Oracle Grid Infrastructure binaries is called the grid user. This user owns both the Oracle Clusterware and Oracle Automatic Storage Management binaries.
See Also:
Oracle Clusterware Administration and Deployment Guide and Oracle Database Administrator's Guide for more information about the OSDBA, OSASM, and OSOPER groups and the SYSDBA, SYSASM, and SYSOPER privileges

The following operating system groups and user are required if you are installing Oracle Database:
The OSDBA group (typically, dba)
You must create this group the first time you install Oracle Database software on the system. This group identifies operating system user accounts that have database administrative privileges (the SYSDBA privilege). If you do not create separate OSDBA, OSOPER, and OSASM groups for the Oracle ASM instance, then operating system user accounts that have the SYSOPER and SYSASM privileges must be members of this group. The name used for this group in Oracle code examples is dba. If you do not designate a separate group as the OSASM group, then the OSDBA group you define is also by default the OSASM group.
To specify a group name other than the default dba group, you must choose the Advanced installation type to install the software, or start Oracle Universal Installer (OUI) as a user that is not a member of this group. In this case, OUI prompts you to specify the name of this group.
Members of the OSDBA group formerly were granted SYSASM privileges on Oracle ASM instances, including mounting and dismounting disk groups. This privilege grant is removed with Oracle Grid Infrastructure 11g release 2, if different operating system groups are designated as the OSDBA and OSASM groups. If the same group is used for both OSDBA and OSASM, then the privilege is retained.
The OSOPER group for Oracle Database (typically, oper)
This is an optional group. Create this group if you want a separate group of operating system users to have a limited set of database administrative privileges (the SYSOPER privilege). By default, members of the OSDBA group also have all privileges granted by the SYSOPER privilege.
To use the OSOPER group to create a database administrator group with fewer privileges than the default dba group, you must choose the Advanced installation type to install the software, or start OUI as a user that is not a member of the dba group. In this case, OUI prompts you to specify the name of this group. The usual name chosen for this group is oper.
SYSASM is a new system privilege that enables the separation of the Oracle ASM storage administration privilege from SYSDBA. With Oracle Automatic Storage Management 11g release 2 (11.2), members of the database OSDBA group are not granted SYSASM privileges, unless the operating system group designated as the OSASM group is the same group designated as the OSDBA group.
Select separate operating system groups as the operating system authentication groups for privileges on Oracle ASM. Before you start OUI, create the following groups and users for Oracle ASM:
The Oracle Automatic Storage Management Group (typically, asmadmin)
This is a required group. Create this group as a separate group if you want to have separate administration privilege groups for Oracle ASM and Oracle Database administrators. In Oracle documentation, the operating system group whose members are granted privileges is called the OSASM group, and in code examples, where there is a group specifically created to grant this privilege, it is referred to as asmadmin.
If you have multiple databases on your system, and use multiple OSDBA groups so that you can provide separate SYSDBA privileges for each database, then you should create a separate OSASM group, and use a separate user from the database users to own the Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle ASM). Oracle ASM can support multiple databases.
Members of the OSASM group can use SQL to connect to an Oracle ASM instance as SYSASM using operating system authentication. The SYSASM privileges permit mounting and dismounting disk groups, and other storage administration tasks. SYSASM privileges provide no access privileges on an RDBMS instance.
The ASM Database Administrator group (OSDBA for ASM, typically asmdba)
Members of the ASM Database Administrator group (OSDBA for ASM) are granted read and write access to files managed by Oracle ASM. The Oracle Grid Infrastructure installation owner and all Oracle Database software owners must be members of this group, and all users with OSDBA membership on databases that have access to the files managed by Oracle ASM must be members of the OSDBA group for ASM.
Members of the ASM Operator Group (OSOPER for ASM, typically asmoper)
This is an optional group. Create this group if you want a separate group of operating system users to have a limited set of Oracle ASM instance administrative privileges (the SYSOPER for ASM privilege), including starting up and stopping the Oracle ASM instance. By default, members of the OSASM group also have all privileges granted by the SYSOPER for ASM privilege.
To use the Oracle ASM Operator group to create an Oracle ASM administrator group with fewer privileges than the default asmadmin group, you must choose the Advanced installation type to install the software. In this case, OUI prompts you to specify the name of this group. In code examples, this group is asmoper.
The following sections describe how to create the required operating system users and groups:
Creating the OSDBA Group to Prepare for Database Installations
Creating the OSDBA for ASM Group for Database Access to Oracle ASM
Creating Identical Database Users and Groups on Other Cluster Nodes
If you intend to install Oracle Database to use with the Oracle Grid Infrastructure installation, then you must create an OSDBA group in the following circumstances:
An OSDBA group does not exist; for example, if this is the first installation of Oracle Database software on the system
An OSDBA group exists, but you want to give a different group of operating system users database administrative privileges for a new Oracle Database installation
If the OSDBA group does not exist, or if you require a new OSDBA group, then create it as follows. Use the group name dba unless a group with that name already exists:
# /usr/sbin/groupadd -g 1031 dba
Create an OSOPER group only if you want to identify a group of operating system users with a limited set of database administrative privileges (SYSOPER operator privileges). For most installations, it is sufficient to create only the OSDBA group. To use an OSOPER group, you must create it in the following circumstances:
If an OSOPER group does not exist; for example, if this is the first installation of Oracle Database software on the system
If an OSOPER group exists, but you want to give a different group of operating system users database operator privileges in a new Oracle installation
If you require a new OSOPER group, then create it as follows. Use the group name oper unless a group with that name already exists:
# /usr/sbin/groupadd -g 1032 oper
If the OSASM group does not exist, or if you require a new OSASM group, then create it as follows. Use the group name asmadmin unless a group with that name already exists:
# /usr/sbin/groupadd -g 1020 asmadmin
Create an OSOPER for ASM group if you want to identify a group of operating system users, such as database administrators, whom you want to grant a limited set of Oracle ASM storage tier administrative privileges, including the ability to start up and shut down the Oracle ASM storage. For most installations, it is sufficient to create only the OSASM group, and provide that group as the OSOPER for ASM group during the installation interview.
If you require a new OSOPER for ASM group, then create it as follows. Use the group name asmoper unless a group with that name already exists:
# /usr/sbin/groupadd -g 1022 asmoper
You must create an OSDBA for ASM group to provide access to the Oracle ASM instance. This is necessary if OSASM and OSDBA are different groups.
If the OSDBA for ASM group does not exist, or if you require a new OSDBA for ASM group, then create it as follows. Use the group name asmdba unless a group with that name already exists:
# /usr/sbin/groupadd -g 1021 asmdba
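After running the groupadd commands in the preceding sections, it can be worth confirming that every expected group name and GID landed in /etc/group before you go on to create users. The sketch below runs the check against a sample file using the example GIDs from this section; on a real node you would check /etc/group itself:

```shell
# Sketch: verify all job-role-separation groups exist with the expected GIDs.
# A sample file stands in for /etc/group; the GIDs are the example values
# used in this section.
cat > /tmp/group.check <<'EOF'
oinstall:x:1000:grid,oracle
dba:x:1031:oracle
oper:x:1032:oracle
asmadmin:x:1020:grid
asmdba:x:1021:grid,oracle
asmoper:x:1022:grid
EOF

missing=0
for entry in oinstall:1000 dba:1031 oper:1032 asmadmin:1020 asmdba:1021 asmoper:1022; do
  g=${entry%:*}; gid=${entry#*:}
  grep -q "^$g:x:$gid:" /tmp/group.check || { echo "missing: $g ($gid)"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all groups present"
```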
You must create an Oracle software owner user in the following circumstances:

If an Oracle software owner user does not exist; for example, if this is the first installation of Oracle software on the system.

If an Oracle software owner user exists, but you want to use a different operating system user, with different group membership, to give database administrative privileges to those groups in a new Oracle Database installation.
If you have created an Oracle software owner for Oracle Grid Infrastructure, such as grid, and you want to create a separate Oracle software owner for Oracle Database software, such as oracle.
To determine whether an Oracle software owner user named oracle or grid exists, enter a command similar to the following (in this case, to determine if oracle exists):
# id -a oracle
If the user exists, then the output from this command is similar to the following:
uid=501(oracle) gid=501(oinstall) groups=502(dba),503(oper)
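If you want to script this check across nodes, the primary group can be parsed out of the id output shown above. The following is a minimal sketch; check_primary_group is a hypothetical helper, not an Oracle-provided tool, and it assumes the id output format shown above.

```shell
# Hypothetical helper: parse the primary group name out of `id -a` output
# (format: uid=501(oracle) gid=501(oinstall) groups=...) and compare it
# against an expected group name.
check_primary_group() {
  # $1 = output of `id -a user`, $2 = expected primary group name
  primary=$(printf '%s\n' "$1" | sed 's/.*gid=[0-9]*(\([^)]*\)).*/\1/')
  [ "$primary" = "$2" ]
}
```

For example: `check_primary_group "$(id -a oracle)" oinstall && echo "primary group is oinstall"`.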
Determine whether you want to use the existing user, or create another user. To use the existing user, ensure that the user's primary group is the Oracle Inventory group and that it is a member of the appropriate OSDBA and OSOPER groups. Refer to one of the following sections for more information:
To modify an existing user, refer to the "Modifying an Existing Oracle Software Owner User" section.
To create a user, refer to the following section.
Note:
If necessary, contact your system administrator before using or modifying an existing user. Oracle recommends that you do not use the UID and GID defaults on each node, as group and user IDs likely will be different on each node. Instead, provide common assigned group and user IDs, and confirm that they are unused on any node before you create or modify groups and users.
If the Oracle software owner user does not exist, or if you require a new Oracle software owner user, then create it as follows. Use the user name oracle unless a user with that name already exists.
To create an oracle user, enter a command similar to the following:
# /usr/sbin/useradd -u 1101 -g oinstall -G dba,asmdba oracle
In the preceding command:
The -u option specifies the user ID. Using this command flag is optional, as you can allow the system to provide you with an automatically generated user ID number. However, you must make note of the oracle user ID number, as you require it later during preinstallation.
The -g option specifies the primary group, which must be the Oracle Inventory group; for example, oinstall.
The -G option specifies the secondary groups, which must include the OSDBA group, the OSDBA for ASM group, and, if required, the OSOPER for ASM group. For example: dba, asmdba, or dba, asmdba, asmoper.
Set the password of the oracle user:
# passwd oracle
If the oracle user exists, but its primary group is not oinstall, or it is not a member of the appropriate OSDBA or OSDBA for ASM groups, then enter a command similar to the following to modify it. Specify the primary group using the -g option and any required secondary group using the -G option:
# /usr/sbin/usermod -g oinstall -G dba,asmdba oracle
Repeat this procedure on all of the other nodes in the cluster.
Oracle software owner users and the Oracle Inventory, OSDBA, and OSOPER groups must exist and be identical on all cluster nodes. To create these identical users and groups, you must identify the user ID and group IDs assigned them on the node where you created them, and then create the user and groups with the same name and ID on the other cluster nodes.
Note:
You must complete the following procedures only if you are using local users and groups. If you are using users and groups defined in a directory service such as NIS, then they are already identical on each cluster node.

Identifying Existing User and Group IDs
To determine the user ID (uid) of the grid or oracle users, and the group IDs (gid) of the existing Oracle groups, follow these steps:
Enter a command similar to the following (in this case, to determine a user ID for the oracle user):
# id -a oracle
The output from this command is similar to the following:
uid=502(oracle) gid=501(oinstall) groups=502(dba),503(oper),506(asmdba)
From the output, identify the user ID (uid) for the user and the group identities (gid) for the groups to which it belongs. Ensure that these ID numbers are identical on each node of the cluster. The user's primary group is listed after gid. Secondary groups are listed after groups.
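To compare these IDs mechanically across nodes, you can extract the numeric uid and gid fields from the id output captured on each node. This is a sketch only; get_uid, get_gid, and ids_match are hypothetical helper names, and the parsing assumes the id output format shown above.

```shell
# Hypothetical helpers: pull the numeric user ID and primary group ID out
# of `id -a` output so they can be compared across cluster nodes.
get_uid() { printf '%s\n' "$1" | sed 's/^uid=\([0-9]*\).*/\1/'; }
get_gid() { printf '%s\n' "$1" | sed 's/.*gid=\([0-9]*\).*/\1/'; }

# Compare two captured id outputs: same uid and same primary gid.
ids_match() {
  [ "$(get_uid "$1")" = "$(get_uid "$2")" ] &&
  [ "$(get_gid "$1")" = "$(get_gid "$2")" ]
}
```

For example, capture `id -a oracle` on each node and pass the two strings to ids_match.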
Creating Users and Groups on the Other Cluster Nodes
To create users and groups on the other cluster nodes, repeat the following procedure on each node:
Log in to the next cluster node as root.
Enter commands similar to the following to create the oinstall, asmadmin, and asmdba groups, and if required, the asmoper, dba, and oper groups. Use the -g option to specify the correct gid for each group.
# /usr/sbin/groupadd -g 1000 oinstall
# /usr/sbin/groupadd -g 1020 asmadmin
# /usr/sbin/groupadd -g 1021 asmdba
# /usr/sbin/groupadd -g 1022 asmoper
# /usr/sbin/groupadd -g 1031 dba
# /usr/sbin/groupadd -g 1032 oper
Note:
If the group already exists, then use the groupmod command to modify it if necessary. If you cannot use the same group ID for a particular group on this node, then view the /etc/group file on all nodes to identify a group ID that is available on every node. You must then change the group ID on all nodes to the same group ID.

To create the oracle or Oracle Grid Infrastructure (grid) user, enter a command similar to the following (in this example, to create the oracle user):
# /usr/sbin/useradd -u 1101 -g oinstall -G asmdba,dba oracle
In the preceding command:
The -u option specifies the user ID, which must be the user ID that you identified in the previous subsection.
The -g option specifies the primary group, which must be the Oracle Inventory group; for example, oinstall.
The -G option specifies the secondary groups, which can include the OSASM, OSDBA, OSDBA for ASM, and OSOPER or OSOPER for ASM groups. For example:
A grid installation owner: OSASM (asmadmin), whose members are granted the SYSASM privilege.
An Oracle Database installation owner without SYSASM privileges: OSDBA (dba), OSDBA for ASM (asmdba), OSOPER for ASM (asmoper).
Note:
If the user already exists, then use the usermod command to modify it if necessary. If you cannot use the same user ID for the user on every node, then view the /etc/passwd file on all nodes to identify a user ID that is available on every node. You must then specify that ID for the user on all of the nodes.

Set the password of the user. For example:
# passwd oracle
Complete user environment configuration tasks for each user as described in the section Configuring Grid Infrastructure Software Owner User Environments.
The following is an example of how to create the Oracle Inventory group (oinstall), and a single group (dba) as the OSDBA, OSASM, and OSDBA for Oracle ASM groups. In addition, it shows how to create the Oracle Grid Infrastructure software owner (grid), and one Oracle Database owner (oracle) with correct group memberships. This example also shows how to configure an Oracle base path compliant with the OFA structure with correct permissions:
# groupadd -g 1000 oinstall
# groupadd -g 1031 dba
# useradd -u 1100 -g oinstall -G dba grid
# useradd -u 1101 -g oinstall -G dba oracle
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01
# mkdir /u01/app/oracle
# chown oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/
After running these commands, you have the following groups and users:
An Oracle central inventory group, or oraInventory group (oinstall). Members who have the central inventory group as their primary group are granted the OINSTALL permission to write to the oraInventory directory.
A single system privileges group that is used as the OSASM, OSDBA, OSDBA for ASM, and OSOPER for ASM group (dba), whose members are granted the SYSASM and SYSDBA privilege to administer Oracle Clusterware, Oracle ASM, and Oracle Database, and are granted SYSASM and OSOPER for ASM access to the Oracle ASM storage.
An Oracle grid installation for a cluster owner (grid), with the oraInventory group as its primary group, and with the OSASM group as the secondary group, with its Oracle base directory /u01/app/grid.
An Oracle Database owner (oracle) with the oraInventory group as its primary group, and the OSDBA group as its secondary group, with its Oracle base directory /u01/app/oracle.
/u01/app owned by grid:oinstall with 775 permissions before installation, and by root after the root.sh script is run during installation. This ownership and these permissions enable OUI to create the Oracle Inventory directory, in the path /u01/app/oraInventory.
/u01 owned by grid:oinstall before installation, and by root after the root.sh script is run during installation.
/u01/app/11.2.0/grid owned by grid:oinstall with 775 permissions. These permissions are required for installation, and are changed during the installation process.
/u01/app/grid owned by grid:oinstall with 775 permissions before installation, and 755 permissions after installation.
/u01/app/oracle owned by oracle:oinstall with 775 permissions.
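A quick way to confirm the resulting ownership is to parse the long-listing output of ls -ld, where the owner and group appear in the third and fourth fields. This is a sketch; check_owner_group is a hypothetical helper name.

```shell
# Hypothetical helper: verify owner:group of a directory by parsing one
# line of `ls -ld` output (fields: mode, links, owner, group, ...).
check_owner_group() {
  # $1 = one line of `ls -ld dir` output, $2 = expected owner:group
  [ "$(printf '%s\n' "$1" | awk '{print $3 ":" $4}')" = "$2" ]
}
```

For example: `check_owner_group "$(ls -ld /u01/app/oracle)" oracle:oinstall || echo "ownership is wrong"`.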
The following is an example of how to create role-allocated groups and users that is compliant with an Optimal Flexible Architecture (OFA) deployment:
# groupadd -g 1000 oinstall
# groupadd -g 1020 asmadmin
# groupadd -g 1021 asmdba
# groupadd -g 1031 dba1
# groupadd -g 1041 dba2
# groupadd -g 1022 asmoper
# useradd -u 1100 -g oinstall -G asmadmin,asmdba grid
# useradd -u 1101 -g oinstall -G dba1,asmdba oracle1
# useradd -u 1102 -g oinstall -G dba2,asmdba oracle2
# mkdir -p /u01/app/11.2.0/grid
# mkdir -p /u01/app/grid
# chown -R grid:oinstall /u01
# mkdir -p /u01/app/oracle1
# chown oracle1:oinstall /u01/app/oracle1
# mkdir -p /u01/app/oracle2
# chown oracle2:oinstall /u01/app/oracle2
# chmod -R 775 /u01
After running these commands, you have the following groups and users:
An Oracle central inventory group, or oraInventory group (oinstall), whose members that have this group as their primary group are granted permissions to write to the oraInventory directory.
A separate OSASM group (asmadmin), whose members are granted the SYSASM privilege to administer Oracle Clusterware and Oracle ASM.
A separate OSDBA for ASM group (asmdba), whose members include grid, oracle1, and oracle2, and who are granted access to Oracle ASM.
A separate OSOPER for ASM group (asmoper), whose members are granted limited Oracle ASM administrator privileges, including the permissions to start and stop the Oracle ASM instance.
An Oracle grid installation for a cluster owner (grid), with the oraInventory group as its primary group, and with the OSASM (asmadmin) and OSDBA for ASM (asmdba) groups as secondary groups.
Two separate OSDBA groups for two different databases (dba1 and dba2) to establish separate SYSDBA privileges for each database.
Two Oracle Database software owners (oracle1 and oracle2), to divide ownership of the Oracle database binaries, with the oraInventory group as their primary group, and the OSDBA group for their database (dba1 or dba2) and the OSDBA for ASM group (asmdba) as their secondary groups.
An OFA-compliant mount point /u01 owned by grid:oinstall before installation.
An Oracle base for the grid installation owner /u01/app/grid owned by grid:oinstall with 775 permissions, and changed during the installation process to 755 permissions.
An Oracle base /u01/app/oracle1 owned by oracle1:oinstall with 775 permissions.
An Oracle base /u01/app/oracle2 owned by oracle2:oinstall with 775 permissions.
A Grid home /u01/app/11.2.0/grid owned by grid:oinstall with 775 (drwxrwxr-x) permissions. These permissions are required for installation, and are changed during the installation process to root:oinstall with 755 permissions (drwxr-xr-x).
/u01/app/oraInventory. This path remains owned by grid:oinstall, to enable other Oracle software owners to write to the central inventory.
Ensure servers run the same operating system binary. Oracle Grid Infrastructure installations and Oracle Real Application Clusters (Oracle RAC) support servers with different hardware in the same cluster.
Each system must meet the following minimum hardware requirements:
At least 2.5 GB of RAM for Oracle Grid Infrastructure for a Cluster installations, including installations where you plan to install Oracle RAC.
At least 1024 x 768 display resolution, so that OUI displays correctly
Swap space equivalent to a multiple of the available RAM, as indicated in the following table:
Table 2-1 Swap Space Required as a Multiple of RAM
Available RAM | Swap Space Required
---|---
Between 2.5 GB and 16 GB | Equal to the size of RAM
More than 16 GB | 16 GB
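The sizing rule in Table 2-1 can be expressed as a small function. This is a sketch only, assuming RAM is given in whole gigabytes; required_swap_gb is a hypothetical name, not an Oracle utility.

```shell
# Hypothetical helper implementing the Table 2-1 rule: swap equal to RAM
# up to 16 GB, capped at 16 GB above that. RAM is assumed in whole GB.
required_swap_gb() {
  if [ "$1" -gt 16 ]; then
    echo 16
  else
    echo "$1"
  fi
}
```

For example, `required_swap_gb 8` reports that an 8 GB server needs 8 GB of swap.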
Note:
On Oracle Solaris, if you use non-swappable memory, such as ISM, then you should deduct the memory allocated to this space from the available RAM before calculating swap space. If you plan to install Oracle Database or Oracle RAC on systems using DISM, then available swap space must be at least equal to the sum of the SGA sizes of all instances running on the servers.

1 GB of space in the /tmp directory
6.6 GB of space for the Oracle Grid Infrastructure for a Cluster home (Grid home). This includes Oracle Clusterware, Oracle Automatic Storage Management (Oracle ASM), and Oracle ACFS files and log files, and includes the Cluster Health Monitor repository.
Up to 10 GB of additional space in the Oracle base directory of the Grid Infrastructure owner for diagnostic collections generated by Trace File Analyzer and Collector.
Note:
If you intend to install Oracle Databases or an Oracle RAC database on the cluster, be aware that the size of the /dev/shm mount area on each server must be greater than the system global area (SGA) and the program global area (PGA) of the databases on the servers. Review expected SGA and PGA sizes with database administrators, to ensure that you do not have to increase /dev/shm after databases are installed on the cluster.

If you are installing Oracle Database, then you require additional space, either on a file system or in an Oracle Automatic Storage Management disk group, for the Fast Recovery Area if you choose to configure automated database backups.
To ensure that each system meets these requirements:
To determine the available RAM and swap space, enter the following command to obtain the system activity report, where n is the sampling interval in seconds and i is the number of samples:
# sar -r n i
If the size of the physical RAM installed in the system is less than the required size, then you must install more memory before continuing.
To determine the size of the configured swap space, enter the following command:
# /usr/sbin/swap -s
Note:
Oracle recommends that you take multiple values for the available RAM and swap space before finalizing a value, because the available RAM and swap space change depending on user interaction with the computer.

To determine the amount of space available in the /tmp directory, enter the following command:
# df -k /tmp
This command displays disk space in 1 KB blocks. On most systems, you can use the df command with the -h flag (df -h) to display output in "human-readable" format, such as "24G" and "10M." If there is less than 1 GB of disk space available in the /tmp directory (less than 1048576 1 KB blocks), then complete one of the following steps:
Delete unnecessary files from the /tmp directory to make available the space required.
Set the TEMP and TMPDIR environment variables when setting the oracle user's environment (described later).
Extend the file system that contains the /tmp directory. If necessary, contact your system administrator for information about extending file systems.
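The 1 GB /tmp check above can also be automated by parsing the avail column of df -k output. This is a sketch, assuming the common Solaris df -k layout where available space is the fourth field of the last output line; tmp_has_space is a hypothetical helper.

```shell
# Hypothetical helper: check that the last line of `df -k /tmp` output
# reports at least 1048576 one-kilobyte blocks (1 GB) available.
tmp_has_space() {
  avail=$(printf '%s\n' "$1" | awk 'END {print $4}')
  [ "$avail" -ge 1048576 ]
}
```

For example: `tmp_has_space "$(df -k /tmp)" || echo "/tmp needs more space"`.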
To determine the amount of free disk space on the system, enter the following command:
# df -k
To determine if the system architecture can run the Oracle software, enter the following command:
# /bin/isainfo -kv
Note:
The following is the expected output of this command:

64-bit SPARC installation:
64-bit sparcv9 kernel modules

64-bit x86 installation:
64-bit amd64 kernel modules
Ensure that the Oracle software you have is the correct Oracle software for your processor type.
If the output of this command indicates that your system architecture does not match the system for which the Oracle software you have is written, then you cannot install the software. Obtain the correct software for your system architecture before proceeding further.
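A scripted version of this architecture check only needs to test the prefix of the isainfo -kv output shown in the note above. This is a sketch; is_64bit_kernel is a hypothetical name.

```shell
# Hypothetical helper: return success only if `isainfo -kv` output reports
# a 64-bit kernel (for example, "64-bit sparcv9 kernel modules").
is_64bit_kernel() {
  case $1 in
    64-bit*) return 0 ;;
    *)       return 1 ;;
  esac
}
```

For example: `is_64bit_kernel "$(/bin/isainfo -kv)" || echo "unsupported architecture"`.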
Review the following sections to check that you have the networking hardware and internet protocol (IP) addresses required for an Oracle Grid Infrastructure for a cluster installation:
Broadcast Requirements for Networks Used by Oracle Grid Infrastructure
Multicast Requirements for Networks Used by Oracle Grid Infrastructure
DNS Configuration for Domain Delegation to Grid Naming Service
Note:
For the most up-to-date information about supported network protocols and hardware for Oracle RAC installations, refer to the Certify pages on the My Oracle Support Web site at the following URL:
https://support.oracle.com
The following is a list of requirements for network configuration:
Each node must have at least two network adapters or network interface cards (NICs): one for the public network interface, and one for the private network interface (the interconnect). For Solaris 11 and higher, the network adapter is a logical device and not a physical device.
With Redundant Interconnect Usage, you can identify multiple interfaces to use for the cluster private network, without the need to use bonding or other technologies. This functionality is available starting with Oracle Database 11g Release 2 (11.2.0.2).
When you define multiple interfaces, Oracle Clusterware creates from one to four highly available IP (HAIP) addresses. Oracle RAC and Oracle ASM instances use these interface addresses to ensure highly available, load-balanced interface communication between nodes. The installer enables Redundant Interconnect Usage to provide a high availability private network.
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about using OIFCFG to modify interfaces

By default, Oracle Grid Infrastructure software uses all of the HAIP addresses for private network communication, providing load-balancing across the set of interfaces you identify for the private network. If a private interconnect interface fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.
Note:
If you define more than four interfaces as private network interfaces, be aware that Oracle Clusterware activates only four of the interfaces at a time. However, if one of the four active interfaces fails, then Oracle Clusterware transitions the HAIP addresses configured to the failed interface to one of the reserve interfaces in the defined set of private interfaces.

Note:
If you are installing Oracle Clusterware on Oracle Solaris Cluster, then you should select the Oracle Solaris Cluster virtual network interface clprivnet0 as the clusterware private network address.
On Oracle Solaris, if you use IP network multipathing (IPMP) to aggregate multiple interfaces for the public or the private networks, then during installation of Oracle Grid Infrastructure, ensure you identify all interface names aggregated into an IPMP group as interfaces that should be used for the public or private network.
When you upgrade a node to Oracle Grid Infrastructure 11g release 2 (11.2.0.2) and later, the upgraded system uses your existing network classifications. After you complete the upgrade, you can enable Redundant Interconnect Usage by selecting multiple interfaces for the private network with OIFCFG.
Oracle recommends that you use the Redundant Interconnect Usage feature to make use of multiple interfaces for the private network. However, you can also use third-party technologies to provide redundancy for the private network.
If you install Oracle Clusterware using OUI, then the public interface names associated with the network adapters for each network must be the same on all nodes, and the private interface names associated with the network adaptors should be the same on all nodes. This restriction does not apply if you use cloning, either to create a new cluster, or to add nodes to an existing cluster.
For example: With a two-node cluster, you cannot configure network adapters on node1 with eth0 as the public interface, but on node2 have eth1 as the public interface. Public interface names must be the same, so you must configure eth0 as public on both nodes. You should configure the private interfaces on the same network adapters as well. If eth1 is the private interface for node1, then eth1 should be the private interface for node2.
See Also:
Oracle Clusterware Administration and Deployment Guide for information about how to add nodes using cloning

For the public network, each network adapter must support TCP/IP.
For the private network, the interface must support the user datagram protocol (UDP) using high-speed network adapters and switches that support TCP/IP (minimum requirement 1 Gigabit Ethernet).
Note:
UDP is the default interface protocol for Oracle RAC, and TCP is the interconnect protocol for Oracle Clusterware. You must use a switch for the interconnect. Oracle recommends that you use a dedicated switch. Oracle does not support token-rings or crossover cables for the interconnect.
Each node's private interface for interconnects must be on the same subnet, and those subnets must connect to every node of the cluster. For example, if the private interfaces have a subnet mask of 255.255.255.0, then your private network is in the range 192.168.0.0 to 192.168.0.255, and your private addresses must be in the range of 192.168.0.[0-255]. If the private interfaces have a subnet mask of 255.255.0.0, then your private addresses can be in the range of 192.168.[0-255].[0-255].
For clusters using Redundant Interconnect Usage, each private interface should be on a different subnet. However, each cluster member node must have an interface on each private interconnect subnet, and these subnets must connect to every node of the cluster. For example, you can have private networks on subnets 192.168.0 and 10.0.0, but each cluster member node must have an interface connected to the 192.168.0 and 10.0.0 subnets.
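The subnet rules above reduce to a bitwise AND of each address with the netmask. The following sketch uses only POSIX shell arithmetic; both function names are hypothetical helpers for illustration.

```shell
# Hypothetical helpers: convert a dotted-quad IPv4 address to an integer,
# then test whether two addresses share a subnet under a given netmask.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

same_subnet() {
  # $1, $2 = addresses to compare; $3 = netmask, e.g. 255.255.255.0
  m=$(ip_to_int "$3")
  [ $(( $(ip_to_int "$1") & m )) -eq $(( $(ip_to_int "$2") & m )) ]
}
```

For example: `same_subnet 192.168.0.1 192.168.0.2 255.255.255.0 && echo "same subnet"`.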
For the private network, the endpoints of all designated interconnect interfaces must be completely reachable on the network. There should be no node that is not connected to every private network interface. You can test if an interconnect interface is reachable using ping.
Before starting the installation, you must have at least two interfaces configured on each node: One for the private IP address and one for the public IP address.
You can configure IP addresses with one of the following options:
Dynamic IP address assignment using Oracle Grid Naming Service (GNS). If you select this option, then network administrators assign static IP address for the physical host name and dynamically allocated IPs for the Oracle Clusterware managed VIP addresses. In this case, IP addresses for the VIPs are assigned by a DHCP and resolved using a multicast domain name server configured as part of Oracle Clusterware within the cluster. If you plan to use GNS, then you must have the following:
A DHCP service running on the public network for the cluster
Enough addresses on the DHCP server to provide one IP address for each node's virtual IP, and three IP addresses for the cluster used by the Single Client Access Name (SCAN) for the cluster
Static IP address assignment. If you select this option, then network administrators assign a fixed IP address for each physical host name in the cluster and for IPs for the Oracle Clusterware managed VIPs. In addition, domain name server (DNS) based static name resolution is used for each node. Selecting this option requires that you request network administration updates when you modify the cluster.
Note:
Oracle recommends that you use a static host name for all server node public hostnames.

Public IP addresses and virtual IP addresses must be in the same subnet.
Oracle only supports DHCP-assigned networks for the default network, not for any subsequent networks.
If you enable Grid Naming Service (GNS), then name resolution requests to the cluster are delegated to the GNS, which is listening on the GNS virtual IP address. You define this address in the DNS domain before installation. The DNS must be configured to delegate resolution requests for cluster names (any names in the subdomain delegated to the cluster) to the GNS. When a request comes to the domain, GNS processes the requests and responds with the appropriate addresses for the name requested.
To use GNS, before installation the DNS administrator must establish DNS Lookup to direct DNS resolution of a subdomain to the cluster. If you enable GNS, then you must have a DHCP service on the public network that allows the cluster to dynamically allocate the virtual IP addresses as required by the cluster.
Note:
The following restrictions apply to vendor configurations on your system:
If you have vendor clusterware installed, then you cannot choose to use GNS, because the vendor clusterware does not support it.
You cannot use GNS with another multicast DNS. If you want to use GNS, then disable any third party mDNS daemons on your system.
If you do not enable GNS, then the public and virtual IP addresses for each node must be static IP addresses, configured before installation for each node, but not currently in use. Public and virtual IP addresses must be on the same subnet.
Oracle Clusterware manages private IP addresses in the private subnet on interfaces you identify as private during the installation interview.
The cluster must have the following addresses configured:
A public IP address for each node, with the following characteristics:
Static IP address
Configured before installation for each node, and resolvable to that node before installation
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
A virtual IP address for each node, with the following characteristics:
Static IP address
Configured before installation for each node, but not currently in use
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
A Single Client Access Name (SCAN) for the cluster, with the following characteristics:
Three static IP addresses configured on the domain name server (DNS) before installation, so that the three IP addresses are associated with the name provided as the SCAN, and all three addresses are returned in random order by the DNS to the requestor
Configured before installation in the DNS to resolve to addresses that are not currently in use
Given a name that does not begin with a numeral
On the same subnet as all other public IP addresses, VIP addresses, and SCAN addresses
Conforms with the RFC 952 standard, which allows alphanumeric characters and hyphens ("-"), but does not allow underscores ("_").
A private IP address for each node, with the following characteristics:
Static IP address
Configured before installation, but on a separate, private network, with its own subnet, that is not resolvable except by other cluster member nodes
The SCAN is a name used to provide service access for clients to the cluster. Because the SCAN is associated with the cluster as a whole, rather than to a particular node, the SCAN makes it possible to add or remove nodes from the cluster without needing to reconfigure clients. It also adds location independence for the databases, so that client configuration does not have to depend on which nodes are running a particular database. Clients can continue to access the cluster in the same way as with previous releases, but Oracle recommends that clients accessing the cluster use the SCAN.
Note:
In a Typical installation, the SCAN you provide is also the name of the cluster. In an Advanced installation, the SCAN and cluster name are entered in separate fields during installation. Both the SCAN and the cluster name must be at least one character long and no more than 15 characters in length, must be alphanumeric, cannot begin with a numeral, and may contain hyphens (-).
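These naming rules can be checked mechanically before installation. The following is a sketch of the validation described in the note; valid_scan_name is a hypothetical name, and it checks only the 1-15 character, alphanumeric-plus-hyphen, no-leading-numeral rules.

```shell
# Hypothetical helper: validate a SCAN or cluster name against the rules
# above: 1-15 characters, alphanumeric or hyphen, no leading numeral.
valid_scan_name() {
  case $1 in
    ''|[0-9]*)       return 1 ;;  # empty, or begins with a numeral
    *[!A-Za-z0-9-]*) return 1 ;;  # contains an illegal character
  esac
  [ ${#1} -le 15 ]
}
```

For example: `valid_scan_name mycluster-scan && echo "name is acceptable"`.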
You can use the nslookup command to confirm that the DNS is correctly associating the SCAN with the addresses. For example:
[root@node1]$ nslookup mycluster-scan
Server: dns.example.com
Address: 192.0.2.001
Name: mycluster-scan.example.com
Address: 192.0.2.201
Name: mycluster-scan.example.com
Address: 192.0.2.202
Name: mycluster-scan.example.com
Address: 192.0.2.203
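You can also script this verification by counting the Address lines that follow Name lines in the nslookup output, which skips the DNS server banner at the top. This is a sketch; scan_address_count is a hypothetical helper.

```shell
# Hypothetical helper: count how many addresses nslookup returned for the
# SCAN, ignoring the server banner (the Address line under "Server:").
scan_address_count() {
  printf '%s\n' "$1" |
    awk '/^Name:/ { getline; if ($1 == "Address:") n++ } END { print n + 0 }'
}
```

For example: `[ "$(scan_address_count "$(nslookup mycluster-scan)")" -eq 3 ] && echo "SCAN resolves to 3 addresses"`.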
After installation, when a client sends a request to the cluster, the Oracle Clusterware SCAN listeners redirect client requests to servers in the cluster.
Note:
Oracle strongly recommends that you do not configure SCAN VIP addresses in the hosts file. Use DNS resolution for SCAN VIPs. If you use the hosts file to resolve SCANs, then you will only be able to resolve to one IP address and you will have only one SCAN address.

Configuring SCANs in a DNS or a hosts file is the only supported configuration. Configuring SCANs in a Network Information Service (NIS) is not supported.
See Also:
Appendix C, "Understanding Network Addresses" for more information about network addresses

Broadcast communications (ARP and UDP) must work properly across all the public and private interfaces configured for use by Oracle Grid Infrastructure release 2 patchset 1 (11.2.0.2) and later releases.
The broadcast must work across any configured VLANs as used by the public or private interfaces.
With Oracle Grid Infrastructure release 2 (11.2), on each cluster member node, the Oracle mDNS daemon uses multicasting on all interfaces to communicate with other nodes in the cluster.
With Oracle Grid Infrastructure release 2 patchset 1 (11.2.0.2) and later releases, multicasting is required on the private interconnect. For this reason, at a minimum, you must enable multicasting for the cluster:
Across the broadcast domain as defined for the private interconnect
On the IP address subnet ranges 224.0.0.0/24 and 230.0.1.0/24
You do not need to enable multicast communications across routers.
If you plan to use GNS, then before Oracle Grid Infrastructure installation, you must configure your domain name server (DNS) to send name resolution requests for the subdomain GNS serves (the cluster member nodes) to GNS. The following is an overview of what needs to be done for domain delegation. Your actual procedure may be different from this example.
Configure the DNS to send GNS name resolution requests using delegation:
In the DNS, create an entry for the GNS virtual IP address, where the address uses the form gns-server.CLUSTERNAME.DOMAINNAME. For example, where the cluster name is mycluster, the domain name is example.com, and the IP address is 192.0.2.1, create an entry similar to the following:
mycluster-gns.example.com A 192.0.2.1
The address you provide must be routable.
Set up forwarding of the GNS subdomain to the GNS virtual IP address, so that GNS resolves addresses to the GNS subdomain. To do this, create a BIND configuration entry similar to the following for the delegated domain, where cluster01.example.com is the subdomain you want to delegate:
cluster01.example.com NS mycluster-gns.example.com
When using GNS, you must configure resolv.conf on the nodes in the cluster (or the file on your system that provides resolution information) to contain name server entries that are resolvable to corporate DNS servers. The total timeout period configured (a combination of options attempts, for retries, and options timeout, for exponential backoff) should be less than 30 seconds. For example, where xxx.xxx.xxx.42 and xxx.xxx.xxx.15 are valid name server addresses in your network, provide an entry similar to the following in /etc/resolv.conf:
options attempts:2
options timeout:1
search cluster01.example.com example.com
nameserver xxx.xxx.xxx.42
nameserver xxx.xxx.xxx.15
/etc/nsswitch.conf controls name service lookup order. In some system configurations, the Network Information System (NIS) can cause problems with Oracle SCAN address resolution. Oracle recommends that you place the nis entry at the end of the search list in /etc/nsswitch.conf. For example:
hosts: files dns nis
Note:
Be aware that use of NIS is a frequent source of problems when doing cable pull tests, as host name and username resolution can fail.

If you use GNS, then you need to specify a static IP address for the GNS VIP address, and delegate a subdomain to that static GNS IP address.
As nodes are added to the cluster, your organization's DHCP server can provide addresses for these nodes dynamically. These addresses are then registered automatically in GNS, and GNS provides resolution within the subdomain to cluster node addresses registered with GNS.
Because allocation and configuration of addresses is performed automatically with GNS, no further configuration is required. Oracle Clusterware provides dynamic network configuration as nodes are added to or removed from the cluster. The following example is provided only for information.
After installation, where you have defined the GNS VIP, you might have a configuration similar to the following for a two-node cluster, where the cluster name is mycluster, the GNS parent domain is example.com, the subdomain is grid.example.com, 192.0.2 in the IP addresses represents the cluster public IP address network, and 192.168.0 represents the private IP address subnet:
Table 2-2 Grid Naming Service Example Network
Identity | Home Node | Host Node | Given Name | Type | Address | Address Assigned By | Resolved By |
---|---|---|---|---|---|---|---|
GNS VIP | None | Selected by Oracle Clusterware |  | virtual | 192.0.2.1 | Fixed by net administrator | DNS |
Node 1 Public | Node 1 |  |  | Public | 192.0.2.101 | Fixed | GNS |
Node 1 VIP | Node 1 | Selected by Oracle Clusterware |  | Virtual | 192.0.2.104 | DHCP | GNS |
Node 1 Private | Node 1 |  |  | Private | 192.168.0.1 | Fixed or DHCP | GNS |
Node 2 Public | Node 2 |  |  | Public | 192.0.2.102 | Fixed | GNS |
Node 2 VIP | Node 2 | Selected by Oracle Clusterware |  | Virtual | 192.0.2.105 | DHCP | GNS |
Node 2 Private | Node 2 |  |  | Private | 192.168.0.2 | Fixed or DHCP | GNS |
SCAN VIP 1 | none | Selected by Oracle Clusterware |  | virtual | 192.0.2.201 | DHCP | GNS |
SCAN VIP 2 | none | Selected by Oracle Clusterware |  | virtual | 192.0.2.202 | DHCP | GNS |
SCAN VIP 3 | none | Selected by Oracle Clusterware |  | virtual | 192.0.2.203 | DHCP | GNS |
Footnote 1 Node host names may resolve to multiple addresses, including VIP addresses currently running on that host.
If you choose not to use GNS, then before installation you must configure public, virtual, and private IP addresses. Also, check that the default gateway can be accessed by a ping command. To find the default gateway, use the route command, as described in your operating system's help utility.
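One way to locate and verify the default gateway is sketched below; the netstat output format is an assumption (it varies between platforms), so adjust the field handling for your system:

```shell
#!/bin/sh
# Hedged sketch: pull the default gateway out of the routing table and show
# the ping command to run against it.
default_gw() {
  netstat -rn 2>/dev/null | awk '$1 == "default" || $1 == "0.0.0.0" { print $2; exit }'
}
gw=$(default_gw)
if [ -n "$gw" ]; then
  echo "default gateway: $gw (verify with: ping $gw)"
else
  echo "no default route found"
fi
```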
For example, with a two node cluster where each node has one public and one private interface, and you have defined a SCAN domain address to resolve on your DNS to one of three IP addresses, you might have the configuration shown in the following table for your network interfaces:
Table 2-3 Manual Network Configuration Example
Identity | Home Node | Host Node | Given Name | Type | Address | Address Assigned By | Resolved By |
---|---|---|---|---|---|---|---|
Node 1 Public | Node 1 |  |  | Public | 192.0.2.101 | Fixed | DNS |
Node 1 VIP | Node 1 | Selected by Oracle Clusterware |  | Virtual | 192.0.2.104 | Fixed | DNS and hosts file |
Node 1 Private | Node 1 |  |  | Private | 192.168.0.1 | Fixed | DNS and hosts file, or none |
Node 2 Public | Node 2 |  |  | Public | 192.0.2.102 | Fixed | DNS |
Node 2 VIP | Node 2 | Selected by Oracle Clusterware |  | Virtual | 192.0.2.105 | Fixed | DNS and hosts file |
Node 2 Private | Node 2 |  |  | Private | 192.168.0.2 | Fixed | DNS and hosts file, or none |
SCAN VIP 1 | none | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.201 | Fixed | DNS |
SCAN VIP 2 | none | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.202 | Fixed | DNS |
SCAN VIP 3 | none | Selected by Oracle Clusterware | mycluster-scan | virtual | 192.0.2.203 | Fixed | DNS |
Footnote 1 Node host names may resolve to multiple addresses.
You do not need to provide a private name for the interconnect. If you want name resolution for the interconnect, then you can configure private IP names in the hosts file or the DNS. However, Oracle Clusterware assigns interconnect addresses on the interface defined during installation as the private interface (eth1, for example), on the subnet defined for the private network.
The addresses to which the SCAN resolves are assigned by Oracle Clusterware, so they are not fixed to a particular node. To enable VIP failover, the configuration shown in the preceding table defines the SCAN addresses and the public and VIP addresses of both nodes on the same subnet, 192.0.2.
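To confirm that a SCAN name returns all of its addresses, you can count the distinct answers. The following is a sketch only; getent is an assumption here (mycluster-scan is the example name from the table above), and counting nslookup answers works the same way:

```shell
#!/bin/sh
# Hedged sketch: count the distinct addresses a SCAN name resolves to;
# three are expected per the configuration above.
scan_address_count() {
  getent ahosts "$1" 2>/dev/null | awk '{ print $1 }' | sort -u | wc -l
}
echo "mycluster-scan resolves to $(scan_address_count mycluster-scan) distinct address(es)"
```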
Note:
All host names must conform to the RFC 952 standard, which permits alphanumeric characters. Host names using underscores ("_") are not allowed.
The precise configuration you choose for your network depends on the size and use of the cluster you want to configure, and the level of availability you require.
If certified Network-attached Storage (NAS) is used for Oracle RAC and this storage is connected through Ethernet-based networks, then you must have a third network interface for NAS I/O. Failing to provide three separate interfaces in this case can cause performance and stability problems under load.
To allow Oracle Clusterware to better tolerate network failures with NAS devices or NFS mounts, enable the Name Service Cache Daemon (nscd). The nscd provides a caching mechanism for the most common name service requests. It is started automatically when the system starts up in a multi-user state. Oracle software requires that the server is started with multiuser run level (3), which is the default for Oracle Solaris.
To check whether the server is set to run level 3, enter the command who -r. For example:
# who -r
   .       run-level 3  Jan  4 14:04     3      0  S
Refer to your operating system documentation if you need to change the run level.
To check whether the name service cache daemon is running, enter the following command:
# svcs svc:/system/name-service-cache
STATE          STIME    FMRI
online         Aug_28   svc:/system/name-service-cache:default
Alternatively, enter the command ps -aef | grep nscd.
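The svcs check above can be wrapped in a small helper that reports the service state and suggests the enable command when the daemon is down; a sketch (svcs -H suppresses the header line, and enabling requires root):

```shell
#!/bin/sh
# Hedged sketch: report whether an SMF service is online.
smf_online() {
  svcs -H "$1" 2>/dev/null | awk '{ print $1 }' | grep -q '^online$'
}
if smf_online svc:/system/name-service-cache; then
  echo "nscd is online"
else
  echo "nscd is not online; consider: svcadm enable name-service-cache"
fi
```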
Use svcs commands to check whether the Service Management Facility (SMF) is enabled on your system, and whether the multi-user and multi-user-server services are online:
# svcs svc:/milestone/multi-user
STATE          STIME    FMRI
online         Aug_28   svc:/milestone/multi-user:default
# svcs svc:/milestone/multi-user-server
STATE          STIME    FMRI
online         Aug_28   svc:/milestone/multi-user-server:default
Depending on the products that you intend to install, verify that the following operating system software is installed on the system. Note that patch requirements are minimum required patch versions, and that earlier patch numbers are rolled into later patch updates.
Requirements listed here are current as of the initial release date. To obtain the most current information about kernel requirements, refer to the online version on the Oracle Technology Network (OTN) at the following URL:
http://www.oracle.com/technetwork/indexes/documentation/index.html
To check software requirements, refer to Section 2.9, "Checking the Software Requirements."
OUI checks your system to verify that it meets the listed operating system package requirements. To ensure that these checks complete successfully, verify the requirements before you start OUI.
Note:
Oracle does not support running different operating system versions on cluster members, unless an operating system is being upgraded. You cannot run different operating system version binaries on members of the same cluster, even if each operating system is supported.
The following is the list of supported Oracle Solaris platforms and requirements at the time of release:
Software Requirements List for Oracle Solaris (SPARC 64-Bit) Platforms
Software Requirements List for Oracle Solaris (x86 64-Bit) Platforms
Table 2-4 System Requirements for Oracle Solaris (SPARC 64-Bit)
Item | Requirement |
---|---|
Operating System, Packages and patches for Oracle Solaris 11 |
Oracle Solaris 11 (11/2011 SPARC) or later, for Oracle Grid Infrastructure release 11.2.0.3 or later. pkg://solaris/developer/build/make pkg://solaris/developer/assembler No special kernel parameters or patches are required at this time. |
Operating System, Packages and Patches for Oracle Solaris 10 |
Oracle Solaris 10 U6 (5.10-2008.10) SUNWarc SUNWbtool SUNWcsl SUNWhea SUNWi1cs (ISO8859-1) SUNWi15cs (ISO8859-15) SUNWi1of SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWxwfnt 119963-14: Sun OS 5.10: Shared Library Patch for C++ 120753-06: SunOS 5.10: Microtasking libraries (libmtsk) patch 139574-03: SunOS 5.10 141414-02 141414-09 (11.2.0.2 or later) 146808-01 (for Solaris 10 U9 or earlier) Note: You may also require additional font packages for Java, depending on your locale. Refer to the following website for more information:
|
Database Smart Flash Cache (An Enterprise Edition only feature.) |
The following patches are required for Oracle Solaris (SPARC 64-Bit) if you are using the flash cache feature: 125555-03 139555-08 140796-01 140899-01 141016-01 141414-10 141736-05 |
IPMI |
The following patches are required only if you plan to configure Failure Isolation using IPMI on SPARC systems: 137585-05 or later (IPMItool patch) 137594-02 or later (BMC driver patch) In addition, there may be additional patches required for your firmware. Review Section 2.15, "Enabling Intelligent Platform Management Interface (IPMI)" for additional information. |
Oracle RAC |
Oracle Clusterware is required; Oracle Solaris Cluster is supported for use with Oracle RAC on SPARC. If you use Oracle Solaris Cluster 3.2, then you must install the following additional kernel packages and patches: SUNWscucm 3.2.0: 126106-40 VERSION=3.2.0,REV=2006.12.05.22.58 or later 125508-08 125514-05 125992-04 126047-11 126095-05 126106-33 Note: You do not require the additional packages if you are using Oracle Clusterware only, without Oracle Solaris Cluster. If you use a volume manager, then you may need to install additional kernel packages. |
Packages and patches for Oracle Solaris Cluster |
Note: You do not require Oracle Solaris Cluster to install Oracle Clusterware. For Oracle Solaris 11, Oracle Solaris Cluster 4.0 is the minimum supported Oracle Solaris Cluster version. For Oracle Solaris 10, Oracle Solaris Cluster 3.3 or later UDLM (optional):
CAUTION: If you install the For more information, refer to Section 2.8, "Oracle Solaris Cluster Configuration on SPARC Guidelines." For Oracle Solaris Cluster on SPARC, install UDLM onto each node in the cluster using the patch Oracle provides in the Grid_home |
Oracle Messaging Gateway |
Oracle Messaging Gateway supports the integration of Oracle Streams Advanced Queuing (AQ) with the following software: IBM MQSeries V6 (6.6.0), client and server Tibco Rendezvous 7.2 |
Pro*C/C++, |
Oracle Solaris Studio 12 (formerly Sun Studio) (C and C++ 5.9) 119963-14: SunOS 5.10: Shared library patch for C++ 124863-12 C++ SunOS 5.10 Compiler Common patch for Sun C C++ (optional) |
gcc 3.4.2 Open Database Connectivity (ODBC) packages are only needed if you plan on using ODBC. If you do not plan to use ODBC, then you do not need to install the ODBC RPMs for Oracle Clusterware, Oracle ASM, or Oracle RAC. |
|
Oracle Solaris Studio 12 (Fortran 95) Download at the following URL:
|
|
Oracle JDBC/OCI Drivers |
You can use the following optional JDK versions with the Oracle JDBC/OCI drivers, however they are not required for the installation:
Note: JDK 6 is the minimum level of JDK supported on Oracle Solaris 11. |
SSH |
Oracle Clusterware requires SSH. The required SSH software is the default SSH shipped with your operating system. |
Table 2-5 System Requirements for Oracle Solaris (x86 64-Bit)
Item | Requirement |
---|---|
Oracle Solaris 11 operating system and packages |
Oracle Solaris 11 (11/2011 X86) or later, for Oracle Grid Infrastructure release 11.2.0.3 or later. pkg://solaris/developer/build/make pkg://solaris/developer/assembler No special kernel parameters or patches are required at this time. |
Oracle Solaris 10 Packages and Patches |
Oracle Solaris 10 U6 (5.10-2008.10) or later SUNWarc SUNWbtool SUNWcsl SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo SUNWi1of SUNWi1cs (ISO8859-1) SUNWi15cs (ISO8859-15) SUNWxwfnt 119961-05: SunOS 5.10_x86: Assembler 119964-14: SunOS 5.10_x86 Shared library patch for C++_x86 120754-06: SunOS 5.10_x86 libmtsk 137104-02 139575-03 139556-08 141415-04 Note: You may also require additional font packages for Java, depending on your locale. Refer to the following Web site for more information: http://java.sun.com/j2se/1.4.2/font-requirements.html |
Database Smart Flash Cache (An Enterprise Edition only feature.) |
The following patches are required for Oracle Solaris (x86 64-Bit) if you are using the flash cache feature: 139556-08 140797-01 140900-01 141017-01 141415-10 141737-05 |
IPMI |
There may be additional patches required for your firmware. Review Section 2.15, "Enabling Intelligent Platform Management Interface (IPMI)" for additional information. |
Oracle Solaris Cluster |
Note: You do not require Oracle Solaris Cluster to install Oracle Clusterware. For Oracle Solaris 11, Oracle Solaris Cluster 4.0 is the minimum supported Oracle Solaris Cluster version. For Oracle Solaris 10, if you use Oracle Solaris Cluster, then you must install the following additional kernel packages and patches (or later updates): Oracle Solaris Cluster 3.2 Update 2 SUNWscucm 3.2.0: 126107-40 VERSION=3.2.0,REV=2006.12.05.21.06 125509-10 125515-05 125993-04 126048-11 126096-04 126096-05 126107-33 137104-02 If you use a volume manager, then you may need to install additional kernel packages. If you use Oracle Solaris Cluster 3.3 or 3.3.5/11, then refer to the Oracle Solaris Cluster documentation library. In particular, refer to Data Service for Oracle Real Application Clusters Guide. |
Oracle Messaging Gateway |
Oracle Messaging Gateway supports the integration of Oracle Streams Advanced Queuing (AQ) with the following software: IBM MQSeries V6, client and server |
Pro*C/C++, |
Oracle Solaris Studio 12 (formerly Sun Studio) September 2007 Release, with the following patches: Additional patches may be needed depending on applications you deploy. Download Oracle Solaris Studio from the following URL:
|
gcc 3.4.2 Open Database Connectivity (ODBC) packages are only needed if you plan on using ODBC. If you do not plan to use ODBC, then you do not need to install the ODBC RPMs for Oracle Clusterware, Oracle ASM, or Oracle RAC. |
|
|
|
Oracle JDBC/OCI Drivers |
You can use the following optional JDK versions with the Oracle JDBC/OCI drivers, however they are not required for the installation:
Note: JDK 6 is the minimum level of JDK supported on Oracle Solaris 11. |
SSH |
Oracle Clusterware requires SSH. The required SSH software is the default SSH shipped with your operating system. |
Review the following information if you are installing Oracle Grid Infrastructure on SPARC processor servers.
If you use Oracle Solaris Cluster 3.3 or 3.3.5/11, then refer to the Oracle Solaris Cluster Documentation library before starting Oracle Grid Infrastructure installation and Oracle RAC installation. In particular, refer to Oracle Solaris Cluster Data Service for Oracle Real Application Clusters Guide, which is available at the following URL:
http://download.oracle.com/docs/cd/E18728_01/html/821-2852/index.html
Review the following additional information for UDLM and native cluster membership interface:
With Oracle Solaris Cluster 3.3 and later, Oracle recommends that you do not use the UDLM. Instead, Oracle recommends that you use the native cluster membership interface functionality (native SKGXN), which is installed automatically with Oracle Solaris Cluster 3.3 if UDLM is not deployed. No additional packages are needed to use this interface.
If you choose to use the UDLM, then you must install the ORCLudlm package for the supported Oracle Solaris Cluster version for this release.
The native Oracle Solaris Cluster and the UDLM interfaces cannot co-exist in the same Oracle RAC cluster: Every node of the Oracle RAC cluster must either have ORCLudlm installed, or none of the nodes of the Oracle RAC cluster may have ORCLudlm installed.
With Oracle Solaris Containers, called zones, it is possible for one physical server to host multiple Oracle RAC clusters, each in an isolated container cluster. Those container clusters must each be self-consistent in terms of the membership model being used. However, because each container cluster is an isolated environment, you can use zone clusters to create a mix of ORCLudlm and native cluster membership interface Oracle RAC clusters on one physical system.
To ensure that the system meets these requirements, follow these steps:
To determine which version of Oracle Solaris is installed, enter the following command:
# uname -r
5.11
In this example, the version shown is Oracle Solaris 11 (5.11). If necessary, refer to your operating system documentation for information about upgrading the operating system.
To determine if the required packages are installed, enter a command similar to the following:
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibC SUNWlibm SUNWlibms SUNWsprot \
SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt SUNWcsl
If a package that is required for your system architecture is not installed, then install it. Refer to your operating system or software documentation for information about installing packages.
Note:
There may be more recent versions of the listed packages installed on the system. If a listed patch is not installed, then determine whether a more recent version is installed before installing the version listed.
Use NDD to ensure that the Oracle Solaris kernel TCP/IP ephemeral port range is broad enough to provide enough ephemeral ports for the anticipated server workload. Ensure that the lower range is set to at least 9000 or higher, to avoid Well Known ports, and to avoid ports in the Registered Ports range commonly used by Oracle and other server ports. Set the port range high enough to avoid reserved ports for any applications you may intend to use. If the lower value of the range you have is greater than 9000, and the range is large enough for your anticipated workload, then you can ignore OUI warnings regarding the ephemeral port range.
Use the following command to check your current range for ephemeral ports:
# /usr/sbin/ndd /dev/tcp tcp_smallest_anon_port tcp_largest_anon_port
32768
65535
In the preceding example, the ephemeral port range is set to the default (32768 to 65535).
If necessary for your anticipated workload or number of servers, update the UDP and TCP ephemeral port range to a broader range. For example:
# /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
# /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
# /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
# /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
Oracle recommends that you make these settings permanent. Refer to your Oracle Solaris system administration documentation for information about how to automate this ephemeral port range alteration on system restarts.
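One common way to persist the ndd settings above is a boot-time script; the following is a sketch only, and the file name and rc link (for example /etc/init.d/nddconfig linked from /etc/rc2.d/S99nddconfig) are assumptions:

```shell
#!/bin/sh
# Hedged sketch: a boot-time function that reapplies the ephemeral port
# range from the example above; wire it into an init script on Solaris 10.
set_ephemeral_ports() {
  # Skip quietly on systems without ndd (e.g. when testing elsewhere).
  [ -x /usr/sbin/ndd ] || { echo "ndd not found; nothing to do"; return 0; }
  /usr/sbin/ndd -set /dev/tcp tcp_smallest_anon_port 9000
  /usr/sbin/ndd -set /dev/tcp tcp_largest_anon_port 65500
  /usr/sbin/ndd -set /dev/udp udp_smallest_anon_port 9000
  /usr/sbin/ndd -set /dev/udp udp_largest_anon_port 65500
}
set_ephemeral_ports
```

On Oracle Solaris 11, setting the equivalent properties persistently with ipadm is an alternative; consult your Oracle Solaris administration documentation for the supported mechanism on your release.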
On Oracle Solaris platforms, the /etc/pam.conf file controls and limits resources for users on the system. On login, resource controls and limits should be set for users so that they cannot perform denial-of-service attacks.
By default, PAM resource limits are not set for Solaris operating systems. To ensure that resource limits are honored, add the following line to the login service section of /etc/pam.conf:
login auth required pam_dial_auth.so.1
For example:
# login service (explicit because of pam_dial_auth)
#
login   auth requisite          pam_authtok_get.so.1
login   auth required           pam_dhkeys.so.1
login   auth required           pam_unix_cred.so.1
login   auth required           pam_unix_auth.so.1
login   auth required           pam_dial_auth.so.1
#
Note:
Your system may have more recent versions of the listed patches installed on it. If a listed patch is not installed, then determine whether a more recent version is installed before installing the version listed.
Select the table for your system architecture and verify that you have the required patches.
To ensure that the system meets these requirements:
To determine whether an operating system patch is installed, and whether it is the correct version of the patch, enter a command similar to the following:
# /usr/sbin/patchadd -p | grep patch_number
For example, to determine if any version of the 119963 patch is installed, use the following command:
# /usr/sbin/patchadd -p | grep 119963
If an operating system patch is not installed, then download it from the following Web site and install it:
http://support.oracle.com
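The per-patch check above can be wrapped in a loop that reports every missing patch in one pass. The following is a sketch only; the patch numbers are examples from the tables above, and PATCHADD is parameterized so the loop can be exercised outside a Solaris host:

```shell
#!/bin/sh
# Hedged sketch: check several required patches in one pass with patchadd -p.
PATCHADD=${PATCHADD:-/usr/sbin/patchadd}
check_patches() {
  missing=""
  for p in "$@"; do
    "$PATCHADD" -p 2>/dev/null | grep -q "$p" || missing="$missing $p"
  done
  if [ -z "$missing" ]; then
    echo "all listed patches found"
  else
    echo "missing:$missing"
  fi
}
check_patches 119963 120753 141414
```

Remember that a more recent patch may supersede a listed one, so confirm any "missing" result against the patch README before installing the older version.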
On x86-64 platforms running Oracle Solaris, if you install Oracle Solaris Cluster in addition to Oracle Clusterware, then complete the following task:
Switch user to root:
$ su - root
Complete one of the following steps, depending on the location of the installation files:
If the installation files are on a DVD, then enter a command similar to the following, where mountpoint
is the disk mount point directory or the path of the database directory on the DVD:
# mountpoint/clusterware/rootpre.sh
If the installation files are on the hard disk, then change directory to the directory /Disk1 and enter the following command:
# ./rootpre.sh
Exit from the root account:
# exit
Repeat steps 1 through 3 on all nodes of the cluster.
Oracle Clusterware requires the same time zone setting on all cluster nodes. During installation, the installation process picks up the time zone setting of the Grid installation owner on the node where OUI runs, and uses that on all nodes as the default TZ setting for all processes managed by Oracle Clusterware. This default is used for databases, Oracle ASM, and any other managed processes.
You have two options for time synchronization: an operating system configured network time protocol (NTP), or Oracle Cluster Time Synchronization Service. Oracle Cluster Time Synchronization Service is designed for organizations whose cluster servers are unable to access NTP services. If you use NTP, then the Oracle Cluster Time Synchronization daemon (ctssd) starts up in observer mode. If you do not have NTP daemons, then ctssd starts up in active mode and synchronizes time among cluster members without contacting an external time server.
On Oracle Solaris Cluster systems, Oracle Solaris Cluster software supplies a template file called ntp.cluster (see /etc/inet/ntp.cluster on an installed cluster host) that establishes a peer relationship between all cluster hosts. One host is designated as the preferred host. Hosts are identified by their private host names. Time synchronization occurs across the cluster interconnect. If Oracle Clusterware detects that either the Oracle Solaris Cluster NTP or an outside NTP server is set as the default NTP server in the system, in the /etc/inet/ntp.conf or the /etc/inet/ntp.conf.cluster file, then CTSS is set to observer mode.
Note:
Before starting the installation of the Oracle Grid Infrastructure, Oracle recommends that you ensure the clocks on all nodes are set to the same time.
If you have NTP daemons on your server but you cannot configure them to synchronize time with a time server, and you want to use Cluster Time Synchronization Service to provide synchronization service in the cluster, then deactivate and deinstall the NTP.
To disable the NTP service, run the following command as the root user:
# /usr/sbin/svcadm disable ntp
When the installer finds that the NTP protocol is not active, the Cluster Time Synchronization Service is installed in active mode and synchronizes the time across the nodes. If NTP is found configured, then the Cluster Time Synchronization Service is started in observer mode, and no active time synchronization is performed by Oracle Clusterware within the cluster.
To confirm that ctssd is active after installation, enter the following command as the Grid installation owner:
$ crsctl check ctss
If you are using NTP, and you prefer to continue using it instead of Cluster Time Synchronization Service, then you need to modify the NTP initialization file to enable slewing, which prevents time from being adjusted backward. Restart the network time protocol daemon after you complete this task.
To do this on Oracle Solaris without Oracle Solaris Cluster, edit the /etc/inet/ntp.conf file to add "slewalways yes" and "disable pll" to the file. After you make these changes, restart ntpd (on Oracle Solaris 11) or xntpd (on Oracle Solaris 10) using the command /usr/sbin/svcadm restart ntp.
To do this on Oracle Solaris 11 with Oracle Solaris Cluster 4.0, edit the /etc/inet/ntp.conf.sc file to add "slewalways yes" and "disable pll" to the file. After you make these changes, restart ntpd or xntpd using the command /usr/sbin/svcadm restart ntp.
To do this on Oracle Solaris 10 with Oracle Solaris Cluster 3.2, edit the /etc/inet/ntp.conf.cluster file.
To enable NTP after it has been disabled, enter the following command:
# /usr/sbin/svcadm enable ntp
Intelligent Platform Management Interface (IPMI) provides a set of common interfaces to computer hardware and firmware that system administrators can use to monitor system health and manage the system. With Oracle 11g release 2, Oracle Clusterware can integrate IPMI to provide failure isolation support and to ensure cluster integrity.
Oracle Clusterware does not currently support the native IPMI driver on Oracle Solaris, so OUI does not collect the administrator credentials, and CSS is unable to obtain the IP address. You must configure failure isolation manually by configuring the BMC with a static IP address before installation, and then using crsctl to store the IP address and IPMI credentials after installation.
This section contains the following topics:
See Also:
Oracle Clusterware Administration and Deployment Guide for information about how to configure IPMI after installation
You must have the following hardware and software configured to enable cluster nodes to be managed with IPMI:
Each cluster member node requires a Baseboard Management Controller (BMC) running firmware compatible with IPMI version 1.5 or greater, which supports IPMI over LANs, and configured for remote control using LAN.
The cluster requires a management network for IPMI. This can be a shared network, but Oracle recommends that you configure a dedicated network.
Each cluster member node's port used by BMC must be connected to the IPMI management network.
Each cluster member must be connected to the management network.
Some server platforms put their network interfaces into a power saving mode when they are powered off. In this case, they may operate only at a lower link speed (for example, 100 MB, instead of 1 GB). For these platforms, the network switch port to which the BMC is connected must be able to auto-negotiate down to the lower speed, or IPMI will not function properly.
Install and configure IPMI firmware patches as described in Section 2.15.1.1, "IPMI Firmware Patches."
Oracle has provided patch-level information for IPMI firmware on Sun systems. Obtain the patch version needed for your firmware from the following URL:
http://www.oracle.com/technetwork/systems/patches/firmware/index.html
Install on each cluster member node:
Sun Blade T6340 Server Module Sun System Firmware with LDOMS support
139448-03
SPARC Enterprise T5440 Sun System Firmware with LDOMS support
139446-03
Netra T5440 Sun System Firmware with LDOMS support
139445-04
SPARC Enterprise T5140 & T5240 Sun System Firmware LDOMS
139444-03
Netra T5220 Sun System Firmware with LDOMS support
139442-06
Sun Blade T6320 + T6320-G2 Server Module Sun System Firmware with LDOMS support
139440-04
SPARC Enterprise T5120 & T5220 Sun System Firmware with LDOMS support
139439-04
On Oracle Solaris platforms, the BMC shares configuration information with the Integrated Lights Out Manager service processor (ILOM). For Oracle Clusterware, you must configure the ILOM/BMC for static IP addresses. Configuring the BMC with dynamic addresses (DHCP) is not supported on Oracle Solaris.
Note:
If you configure IPMI, and you use Grid Naming Service (GNS), you still must configure separate addresses for the IPMI interfaces. Because the IPMI adapter is not seen directly by the host, the IPMI adapter is not visible to GNS as an address on the host.
On each node, complete the following steps to configure the BMC to support IPMI-based node fencing:
Enable IPMI over LAN, so that the BMC can be controlled over the management network.
Configure a static IP address for the BMC.
Establish an administrator user account and password for the BMC.
Configure the BMC for VLAN tags, if you will use the BMC on a tagged VLAN.
The configuration tool you use does not matter, but these conditions must be met for the BMC to function properly.
Refer to the documentation for the configuration option you select for details about configuring the BMC.
Note:
Problems in the initial revisions of Oracle Solaris software and firmware prevented IPMI support from working properly. Ensure you have the latest firmware for your platform and the following Oracle Solaris patches (or later versions), available from the following URL:http://www.oracle.com/technetwork/systems/patches/firmware/index.html
137585-05 IPMItool patch
137594-02 BMC driver patch
When you log in to the ILOM web interface, configure parameters to enable IPMI using the following procedures:
Click Configuration, then System Management Access, then IPMI. Click Enabled to enable IPMI over LAN.
Click Configuration, then Network. Enter information for the IP address, the netmask, and the default gateway.
Click User Management, then User Account Settings. Add the IPMI administrator account username and password, and set the role to Administrator.
The utility ipmitool is provided as part of the Oracle Solaris distribution. You can use ipmitool to configure IPMI parameters, but be aware that setting parameters using ipmitool also sets the corresponding parameters for the service processor.
The following is an example of configuring BMC using ipmitool (version 1.8.6).
Log in as root.
Verify that ipmitool can communicate with the BMC using the IPMI driver by using the command bmc info, and looking for a device ID in the output. For example:
# ipmitool bmc info
Device ID                 : 32
. . .
If ipmitool is not communicating with the BMC, then review the section "Configuring the BMC" and ensure that the IPMI driver is running.
Enable IPMI over LAN using the following procedure:
Determine the channel number for the channel used for IPMI over LAN. Beginning with channel 1, run the following command until you find the channel that displays LAN attributes (for example, the IP address):
# ipmitool lan print 1
. . .
IP Address Source       : 0x01
IP Address              : 140.87.155.89
. . .
Turn on LAN access for the channel found. For example, where the channel is 1:
# ipmitool -I bmc lan set 1 access on
Configure IP address settings for IPMI using the static IP addressing procedure:
Using static IP Addressing
If the BMC shares a network connection with ILOM, then the IP address must be on the same subnet. You must set not only the IP address, but also the proper values for netmask, and the default gateway. For example, assuming the channel is 1:
# ipmitool -I bmc lan set 1 ipaddr 192.168.0.55
# ipmitool -I bmc lan set 1 netmask 255.255.255.0
# ipmitool -I bmc lan set 1 defgw ipaddr 192.168.0.1
Note that the specified address (192.168.0.55) will be associated only with the BMC, and will not respond to normal pings.
Establish an administration account with a username and password, using the following procedure (assuming the channel is 1):
Set BMC to require password authentication for ADMIN access over LAN. For example:
# ipmitool -I bmc lan set 1 auth ADMIN MD5,PASSWORD
List the account slots on the BMC, and identify an unused slot (a User ID with an empty user name field). For example:
# ipmitool channel getaccess 1
. . .
User ID               : 4
User Name             :
Fixed Name            : No
Access Available      : call-in / callback
Link Authentication   : disabled
IPMI Messaging        : disabled
Privilege Level       : NO ACCESS
. . .
Assign the desired administrator user name and password and enable messaging for the identified slot. (Note that for IPMI v1.5 the user name and password can be at most 16 characters.) Also, set the privilege level for that slot when accessed over LAN (channel 1) to ADMIN (level 4). For example, where username is the administrative user name, and password is the password:
# ipmitool user set name 4 username
# ipmitool user set password 4 password
# ipmitool user enable 4
# ipmitool channel setaccess 1 4 privilege=4
# ipmitool channel setaccess 1 4 link=on
# ipmitool channel setaccess 1 4 ipmi=on
Verify the setup using the command lan print 1. The output should appear similar to the following. The settings made in the preceding configuration steps appear in the output, and comments or alternative options are indicated within brackets []:
# ipmitool lan print 1
Set in Progress         : Set Complete
Auth Type Support       : NONE MD2 MD5 PASSWORD
Auth Type Enable        : Callback : MD2 MD5
                        : User     : MD2 MD5
                        : Operator : MD2 MD5
                        : Admin    : MD5 PASSWORD
                        : OEM      : MD2 MD5
IP Address Source       : DHCP Address [or Static Address]
IP Address              : 192.168.0.55
Subnet Mask             : 255.255.255.0
MAC Address             : 00:14:22:23:fa:f9
SNMP Community String   : public
IP Header               : TTL=0x40 Flags=0x40 Precedence=…
Default Gateway IP      : 192.168.0.1
Default Gateway MAC     : 00:00:00:00:00:00
. . .
# ipmitool channel getaccess 1 4
Maximum User IDs        : 10
Enabled User IDs        : 2
User ID                 : 4
User Name               : username    [This is the administration user]
Fixed Name              : No
Access Available        : call-in / callback
Link Authentication     : enabled
IPMI Messaging          : enabled
Privilege Level         : ADMINISTRATOR
Verify that the BMC is accessible and controllable from a remote node in your cluster using the bmc info command. For example, if node2-ipmi is the network host name assigned to the IP address of node2's BMC, then to verify the BMC on node node2 from node1, with the administrator account username, enter the following command on node1:
$ ipmitool -H node2-ipmi -U username lan print 1
You are prompted for a password. Provide the IPMI password.
If the BMC is correctly configured, then you should see information about the BMC on the remote node. If you see an error message, such as Error: Unable to establish LAN session, then you must check the BMC configuration on the remote node.
Repeat this process for each cluster member node.
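The per-node verification can be wrapped in a loop. The following sketch assumes BMC host names of the form nodeN-ipmi and uses the -P option to supply the password non-interactively; note that a password given on the command line is visible to other users through ps, so the interactive prompt shown above is safer:

```shell
# verify_bmcs: check that each named BMC answers over the LAN.
# Usage: verify_bmcs admin_user admin_password bmc_host...
verify_bmcs() {
  user=$1 pass=$2
  shift 2
  for bmc in "$@"; do
    if ipmitool -H "$bmc" -U "$user" -P "$pass" lan print 1 >/dev/null 2>&1; then
      echo "$bmc: BMC reachable"
    else
      echo "$bmc: check BMC configuration" >&2
    fi
  done
}
```

For example, `verify_bmcs username password node1-ipmi node2-ipmi` reports each BMC that fails to answer.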
After installation, configure IPMI as described in Section 5.2.2, "Configuring IPMI-based Failure Isolation Using Crsctl."
To install Oracle software, Secure Shell (SSH) connectivity should be set up between all cluster member nodes. OUI uses the ssh and scp commands during installation to run remote commands on, and copy files to, the other cluster nodes. You must configure SSH so that these commands do not prompt for a password.
Note:
SSH is used by Oracle configuration assistants for configuration operations from local to remote nodes. It is also used by Oracle Enterprise Manager.
You can configure SSH from the OUI interface during installation for the user account running the installation. The automatic configuration creates passwordless SSH connectivity between all cluster member nodes. Oracle recommends that you use the automatic procedure if possible.
To enable the script to run, you must remove stty commands from the profiles of any Oracle software installation owners, and remove other security measures that are triggered during a login and that generate messages to the terminal. These messages, mail checks, and other displays prevent Oracle software installation owners from using the SSH configuration script that is built into Oracle Universal Installer. If they are not disabled, then SSH must be configured manually before an installation can be run.
See Also:
"Preventing Installation Errors Caused by Terminal Output Commands" for information about how to remove stty commands in user profilesBy default, OUI searches for SSH public keys in the directory /usr/local/etc/
, and ssh-keygen binaries in /usr/local/bin
. However, on Oracle Solaris, SSH public keys typically are located in the path /etc/ssh, and
ssh-keygen binaries are located in the path /usr/bin
. To ensure that OUI can set up SSH, use the following command to create soft links:
# ln -s /etc/ssh /usr/local/etc
# ln -s /usr/bin /usr/local/bin
In rare cases, Oracle Clusterware installation may fail during the "AttachHome" operation when the remote node closes the SSH connection. To avoid this problem, set the following parameter in the SSH daemon configuration file /etc/ssh/sshd_config on all cluster nodes to set the timeout wait to unlimited:
LoginGraceTime 0
You run the installer software with the Oracle Grid Infrastructure installation owner user account (oracle or grid). However, before you start the installer, you must configure the environment of the installation owner user account. Also, create other required Oracle software owners, if needed.
This section contains the following topics:
Environment Requirements for Oracle Grid Infrastructure Software Owner
Procedure for Configuring Oracle Software Owner Environments
Preventing Installation Errors Caused by Terminal Output Commands
You must make the following changes to configure the Oracle Grid Infrastructure software owner environment:
Set the installation software owner user (grid, oracle) default file mode creation mask (umask) to 022 in the shell startup file. Setting the mask to 022 ensures that the user performing the software installation creates files with 644 permissions.
Set ulimit settings for file descriptors and processes for the installation software owner (grid, oracle).
Set the software owner's DISPLAY environment variable in preparation for the Oracle Grid Infrastructure installation.
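The effect of the 022 mask can be checked quickly. The following sketch (the /tmp scratch file name is arbitrary) shows that a newly created file receives 644 permissions:

```shell
# With umask 022, a new file gets mode 666 & ~022 = 644 (rw-r--r--)
umask 022
demofile=/tmp/umask_demo.$$          # arbitrary scratch file name
touch "$demofile"
ls -l "$demofile" | cut -c1-10       # expect: -rw-r--r--
rm -f "$demofile"
```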
To set the Oracle software owners' environments, follow these steps, for each software owner (grid, oracle):
Start a new terminal session; for example, start an X terminal (xterm).
Enter the following command to ensure that X Window applications can display on this system:
$ xhost + hostname
The hostname is the name of the local host.
If you are not already logged in to the system where you want to install the software, then log in to that system as the software owner user.
If you are not logged in as the user, then switch to the software owner user you are configuring. For example, for the grid user:
$ su - grid
To determine the default shell for the user, enter the following command:
$ echo $SHELL
Caution:
Use shell programs supported by your operating system vendor. If you use a shell program that is not supported by your operating system, then you can encounter errors during installation.
Open the user's shell startup file in any text editor:
Bash shell (bash):
$ vi .bash_profile
Bourne shell (sh) or Korn shell (ksh):
$ vi .profile
C shell (csh or tcsh):
% vi .login
Enter or edit the following line, specifying a value of 022 for the default file mode creation mask:
umask 022
If the ORACLE_SID, ORACLE_HOME, or ORACLE_BASE environment variables are set in the file, then remove these lines from the file.
Save the file, and exit from the text editor.
To run the shell startup script, enter one of the following commands:
Bash shell:
$ . ./.bash_profile
Bourne, Bash, or Korn shell:
$ . ./.profile
C shell:
% source ./.login
If you are not installing the software on the local system, then enter a command similar to the following to direct X applications to display on the local system:
Bourne, Bash, or Korn shell:
$ DISPLAY=local_host:0.0 ; export DISPLAY
C shell:
% setenv DISPLAY local_host:0.0
In this example, local_host is the host name or IP address of the system (your workstation, or another client) on which you want to display the installer.
If you determined that the /tmp directory has less than 1 GB of free space, then identify a file system with at least 1 GB of free space and set the TEMP and TMPDIR environment variables to specify a temporary directory on this file system:
Note:
You cannot use a shared file system as the location of the temporary file directory (typically /tmp) for Oracle RAC installation. If you place /tmp on a shared file system, then the installation fails.
Use the df -h command to identify a suitable file system with sufficient free space.
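If you prefer to test the free space programmatically, the following sketch parses df -k output (1 GB = 1048576 one-kilobyte blocks; the standard single-line df output format is assumed, so a long device name that wraps onto a second line would break the parse):

```shell
# Report whether /tmp has at least 1 GB of free space
free_kb=$(df -k /tmp | awk 'NR==2 {print $4}')
if [ "$free_kb" -ge 1048576 ]; then
  echo "/tmp has at least 1 GB free"
else
  echo "set TEMP and TMPDIR to a file system with at least 1 GB free"
fi
```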
If necessary, enter commands similar to the following to create a temporary directory on the file system that you identified, and set the appropriate permissions on the directory:
$ su - root
# mkdir /mount_point/tmp
# chmod 775 /mount_point/tmp
# exit
Enter commands similar to the following to set the TEMP and TMPDIR environment variables:
Bourne, Bash, or Korn shell:
$ TEMP=/mount_point/tmp
$ TMPDIR=/mount_point/tmp
$ export TEMP TMPDIR
C shell:
% setenv TEMP /mount_point/tmp
% setenv TMPDIR /mount_point/tmp
To verify that the environment has been set correctly, enter the following commands:
$ umask
$ env | more
Verify that the umask command displays a value of 22, 022, or 0022, and that the environment variables you set in this section have the correct values.
Oracle recommends that you set shell limits and system configuration parameters as described in this section.
Note:
The shell limit values in this section are minimum values only. For production database systems, Oracle recommends that you tune these values to optimize the performance of the system. See your operating system documentation for more information about configuring shell limits.
The ulimit settings determine process memory-related resource limits. Verify that the shell limits displayed in the following table are set to the values shown:
Shell Limit | Description | Soft Limit | Hard Limit
---|---|---|---
STACK | Size of the stack segment of the process | at most 10240 KB | at most 32768 KB
NOFILES | Open file descriptors | at least 1024 | at least 65536
MAXUPRC or MAXPROC | Maximum user processes | at least 2047 | at least 16384
To display the current value specified for these shell limits, enter the following commands:
ulimit -s
ulimit -n
You can also use the following command:
ulimit -a
In the preceding command, the -a option lists all current resource limits.
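A short sketch that compares the current soft limits against the table (the -S, -n, and -s option letters are the common sh/bash ones; the exact letters and output can differ on Solaris shells):

```shell
# Warn when the NOFILES soft limit falls below the minimum in the table above
nofiles=$(ulimit -S -n)
stack=$(ulimit -S -s)
if [ "$nofiles" != unlimited ] && [ "$nofiles" -lt 1024 ]; then
  echo "NOFILES soft limit $nofiles is below the 1024 minimum"
fi
echo "current stack soft limit: $stack KB (table expects at most 10240)"
```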
If you are on a remote terminal, and the local node has only one visual (which is typical), then use the following syntax to set the DISPLAY environment variable:
Bourne, Korn, and Bash shells:
$ export DISPLAY=hostname:0
C shell:
% setenv DISPLAY hostname:0
For example, if you are using the Bash shell and your host name is node1, then enter the following command:
$ export DISPLAY=node1:0
To ensure that X11 forwarding will not cause the installation to fail, create a user-level SSH client configuration file for the Oracle software owner user, as follows:
Using any text editor, edit or create the software installation owner's ~/.ssh/config file.
Make sure that the ForwardX11 attribute is set to no. For example:
Host *
  ForwardX11 no
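The edit can also be made from a script. This sketch appends the setting only when no ForwardX11 line is already present, and tightens the file permissions (ssh rejects a config file that is writable by group or others):

```shell
# Ensure ~/.ssh/config exists and disables X11 forwarding
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
if ! grep -q 'ForwardX11' "$HOME/.ssh/config" 2>/dev/null; then
  printf 'Host *\n  ForwardX11 no\n' >> "$HOME/.ssh/config"
fi
chmod 600 "$HOME/.ssh/config"
```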
During an Oracle Grid Infrastructure installation, OUI uses SSH to run commands and copy files to the other nodes. During the installation, hidden files on the system (for example, .bashrc or .cshrc) will cause makefile and other installation errors if they contain stty commands.
To avoid this problem, you must modify these files in each Oracle installation owner user home directory to suppress all output on STDOUT or STDERR (for example, stty, xtitle, and other such commands) as in the following examples:
Bourne, Bash, or Korn shell:
if [ -t 0 ]; then
  stty intr ^C
fi
C shell:
test -t 0
if ($status == 0) then
  stty intr ^C
endif
Note:
When SSH is not available, the Installer uses the rsh and rcp commands instead of ssh and scp.
If there are hidden files that contain stty commands that are loaded by the remote shell, then OUI indicates an error and stops the installation.
During installation, you are prompted to provide a path to a home directory to store Oracle Grid Infrastructure software. Ensure that the directory path you provide meets the following requirements:
It should be created in a path outside existing Oracle homes, including Oracle Clusterware homes.
It should not be located in a user home directory.
It should be created either as a subdirectory in a path where all files can be owned by root, or in a unique path.
If you create the path before installation, then it should be owned by the installation owner of Oracle Grid Infrastructure (typically oracle for a single installation owner for all Oracle software, or grid for role-based Oracle installation owners), and set to 775 permissions.
Oracle recommends that you install Oracle Grid Infrastructure on local homes, rather than using a shared home on shared storage.
For installations with Oracle Grid Infrastructure only, Oracle recommends that you create a path compliant with Oracle Optimal Flexible Architecture (OFA) guidelines, so that Oracle Universal Installer (OUI) can select that directory during installation. For OUI to recognize the path as an Oracle software path, it must be in the form u0[1-9]/app.
When OUI finds an OFA-compliant path, it creates the Oracle Grid Infrastructure and Oracle Inventory (oraInventory) directories for you.
To create an Oracle Grid Infrastructure path manually, ensure that it is in a separate path, not under an existing Oracle base path. For example:
# mkdir -p /u01/app/11.2.0/grid
# chown grid:oinstall /u01/app/11.2.0/grid
# chmod -R 775 /u01/app/11.2.0/grid
With this path, if the installation owner is named grid, then by default OUI creates the following path for the Grid home:
/u01/app/11.2.0/grid
Create an Oracle base path for database installations, owned by the Oracle Database installation owner account. The OFA path for an Oracle base is /u01/app/user, where user is the name of the Oracle software installation owner account. For example, use the following commands to create an Oracle base for the database installation owner account oracle:
# mkdir -p /u01/app/oracle
# chown -R oracle:oinstall /u01/app/oracle
# chmod -R 775 /u01/app/oracle
Note:
If you choose to create an Oracle Grid Infrastructure home manually, then do not create the Oracle Grid Infrastructure home for a cluster under either the grid installation owner Oracle base or the Oracle Database installation owner Oracle base. Creating an Oracle Clusterware installation in an Oracle base directory causes subsequent Oracle installations to fail.
Oracle Grid Infrastructure homes can be placed in a local home on servers, even if your existing Oracle Clusterware home from a prior release is in a shared location.
Homes for Oracle Grid Infrastructure for a standalone server (Oracle Restart) can be under Oracle base. Refer to Oracle Database Installation Guide for your platform for more information about Oracle Restart.
The cluster name must be at least one character long and no more than 15 characters in length, must be alphanumeric, cannot begin with a numeral, and may contain hyphens (-).
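These rules can be expressed as a single pattern. The following sketch assumes the name must begin with a letter (the text forbids a leading numeral, and a leading hyphen would not be alphanumeric):

```shell
# valid_cluster_name: 1-15 characters; letters, digits, and hyphens;
# beginning with a letter
valid_cluster_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z][A-Za-z0-9-]{0,14}$'
}

valid_cluster_name crm-cluster1 && echo valid     # prints: valid
valid_cluster_name 1cluster     || echo invalid   # prints: invalid
```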
In a Typical installation, the SCAN you provide is also the name of the cluster, so the SCAN name must meet the requirements for a cluster name. In an Advanced installation, the SCAN and cluster name are entered in separate fields during installation, so cluster name requirements do not apply to the SCAN name.