This chapter describes storage topics, such as Oracle Automatic Storage Management (Oracle ASM), in Oracle Real Application Clusters (Oracle RAC) environments.
This chapter includes the following topics:
All data files (including an undo tablespace for each instance) and redo log files (at least two for each instance) must reside in an Oracle ASM disk group, on a cluster file system, or on shared raw devices. In addition, Oracle recommends that you use one shared server parameter file (SPFILE) with instance-specific entries. Alternatively, you can use a local file system to store instance-specific parameter files (PFILEs).
Notes:
Database Configuration Assistant (DBCA) does not support shared raw devices for this release, nor does it support PFILEs. However, you can use SQL statements to configure data files on shared raw devices, as shown in the sketch after these notes.
If you are using the IBM General Parallel File System (GPFS), then you can use the same file system for all purposes, including using it for the Oracle home directory and for storing data files and logs. For optimal performance, you should use a large GPFS block size (typically, at least 512 KB). GPFS is designed for scalability, and there is no requirement to create multiple GPFS file systems as long as the amount of data fits in a single GPFS file system.
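The following is a minimal sketch of adding a data file on a shared raw device with SQL; the tablespace name and raw device path are hypothetical and depend on your platform's device naming:

ALTER TABLESPACE users ADD DATAFILE '/dev/raw/raw5' SIZE 500M;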
Unless otherwise noted, Oracle Database storage features such as Oracle ASM, Oracle Managed Files, automatic segment-space management, and so on, function the same in Oracle RAC environments as they do in noncluster Oracle database environments.
See Also:
For additional information about these storage features:

If you do not use Oracle ASM, if your platform does not support a cluster file system, or if you do not want to use a cluster file system for database file storage, then create additional raw devices as described in your platform-specific Oracle RAC installation and configuration guide. However, Oracle recommends that you use Oracle ASM for database file storage, as described in "Oracle Automatic Storage Management with Oracle RAC".
Notes:
If you use raw devices, then you cannot use DBCA.
To create an Oracle RAC database using Oracle Database Standard Edition, you must use Oracle ASM for your database storage.
Optimal Flexible Architecture (OFA) ensures reliable installations and improves software manageability. OFA streamlines the way Oracle software installations are organized, which simplifies ongoing management and makes default Oracle Database installations more compliant with OFA specifications.
During installation, you are prompted to specify an Oracle base (ORACLE_BASE) location, which is owned by the user performing the installation. You can choose an existing ORACLE_BASE directory, or choose another directory location that does not have the structure of an ORACLE_BASE directory.
Using an Oracle base directory helps to organize Oracle installations and helps to ensure that installations of multiple databases maintain an OFA configuration. During the installation, ORACLE_BASE is the only required input, because the Oracle home uses a default value based on the value chosen for ORACLE_BASE. In addition, Oracle recommends that you set the ORACLE_BASE and ORACLE_HOME environment variables when starting databases. Note that ORACLE_BASE may become a required environment variable for database startup in a future release.
See Also:
Oracle Real Application Clusters Installation Guide for more information about specifying an ORACLE_BASE directory

All Oracle RAC instances must be able to access all data files. If a data file must be recovered when the database is opened, then the first Oracle RAC instance to start is the instance that performs the recovery and verifies access to the file. As other instances start, they also verify their access to the data files. Similarly, when you add a tablespace or data file or bring a tablespace or data file online, all instances verify access to the file or files.
If you add a data file to a disk that other instances cannot access, then verification fails. Verification also fails if instances access different copies of the same data file. If verification fails for any instance, then diagnose and fix the problem. Then run the ALTER SYSTEM CHECK DATAFILES statement on each instance to verify data file access.
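For example, after resolving the access problem, you can run the following statement from SQL*Plus while connected to each instance in turn (the statement takes no arguments):

ALTER SYSTEM CHECK DATAFILES;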
If you increase the cardinality of a server pool in a policy-managed database, and a new server is allocated to the server pool, then Oracle starts an instance on the new server if you have Oracle Managed Files enabled. If the instance starts and there is no thread or redo log file available, then Oracle automatically enables a thread of redo and allocates the associated redo log files if the database uses Oracle ASM or any cluster file system.
You should create redo log groups only if you are using administrator-managed databases. For policy-managed databases, increase the cardinality of the server pool; when the new instance starts, if you are using Oracle Managed Files and Oracle ASM, then Oracle automatically allocates the redo thread, redo log files, and undo tablespace.
For administrator-managed databases, each instance has its own online redo log groups. Create these redo log groups and establish group members. To add a redo log group to a specific instance, specify the INSTANCE clause in the ALTER DATABASE ADD LOGFILE statement. If you do not specify the instance when adding the redo log group, then the redo log group is added to the instance to which you are currently connected.
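For example, the following is a sketch of adding a redo log group to a specific instance, assuming Oracle Managed Files so that no file name is required; the instance name, group number, and size are hypothetical:

ALTER DATABASE ADD LOGFILE INSTANCE 'racdb2' GROUP 5 SIZE 512M;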
See Also:
"About Designing and Deploying Oracle RAC Environments" for more information about administrator and policy management for databases
Oracle Database Administrator's Guide for information about creating redo log groups and establishing group members
Oracle Database SQL Language Reference for information about the ALTER DATABASE ADD LOGFILE SQL statement
Each instance must have at least two groups of redo log files. You must allocate the redo log groups before enabling a new instance with the ALTER DATABASE ENABLE INSTANCE instance_name command. When the current group fills, an instance begins writing to the next log file group. If your database is in ARCHIVELOG mode, then each instance must save filled online log groups as archived redo log files that are tracked in the control file.
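For example, a sketch of allocating redo log groups for a new instance and then enabling it, again assuming Oracle Managed Files; the instance name and group numbers are hypothetical:

ALTER DATABASE ADD LOGFILE INSTANCE 'racdb3' GROUP 7 SIZE 512M;
ALTER DATABASE ADD LOGFILE INSTANCE 'racdb3' GROUP 8 SIZE 512M;
ALTER DATABASE ENABLE INSTANCE 'racdb3';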
During database recovery, all enabled instances are checked to see if recovery is needed. If you remove an instance from your Oracle RAC database, then you should disable the instance's thread of redo so that Oracle does not have to check the thread during database recovery.
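For example, a sketch of disabling the thread of redo that belonged to a removed instance; the thread number is hypothetical and must match the removed instance's thread:

ALTER DATABASE DISABLE THREAD 2;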
Oracle Database automatically manages undo segments within a specific undo tablespace that is assigned to an instance. Only the instance assigned to the undo tablespace can modify the contents of that tablespace. However, instances can always read all undo blocks throughout the cluster environment for consistent read purposes. Also, any instance can update any undo tablespace during transaction recovery, if that undo tablespace is not currently used by another instance for undo generation or transaction recovery.
You assign undo tablespaces in your Oracle RAC administrator-managed database by specifying a different value for the UNDO_TABLESPACE parameter for each instance in your SPFILE or individual PFILEs. For policy-managed databases, Oracle automatically allocates the undo tablespace when the instance starts if you have Oracle Managed Files enabled. You cannot simultaneously use automatic undo management and manual undo management in an Oracle RAC database. In other words, all instances of an Oracle RAC database must operate in the same undo mode.
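For example, a sketch of assigning instance-specific undo tablespaces in a shared SPFILE; the instance SIDs and tablespace names are hypothetical:

ALTER SYSTEM SET UNDO_TABLESPACE=undotbs1 SID='racdb1';
ALTER SYSTEM SET UNDO_TABLESPACE=undotbs2 SID='racdb2';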
See Also:
"Setting SPFILE Parameter Values for Oracle RAC" for information about modifying SPFILE parameters
Oracle Database Administrator's Guide for detailed information about creating and managing undo tablespaces
Oracle ASM automatically maximizes I/O performance by managing the storage configuration across the disks that Oracle ASM manages. Oracle ASM does this by evenly distributing the database files across all of the available storage assigned to the disk groups within Oracle ASM in your cluster database environment. Oracle ASM partitions your total disk space requirements into uniformly sized units across all disks in a disk group. Oracle ASM can also automatically mirror data to prevent data loss. Because of these features, Oracle ASM also significantly reduces your administrative overhead.
Oracle ASM instances are created on each node where you install Oracle Clusterware. Each Oracle ASM instance has either an SPFILE or PFILE type parameter file. Oracle recommends that you back up the parameter files and the TNS entries for nondefault Oracle Net listeners.
To use Oracle ASM with Oracle RAC, select Oracle ASM as your storage option when you create your database with the Database Configuration Assistant (DBCA). As in noncluster Oracle databases, using Oracle ASM with Oracle RAC does not require I/O tuning.
The following topics describe Oracle ASM and Oracle ASM administration:
Configuring Preferred Mirror Read Disks in Extended Distance Clusters
Administering Oracle ASM Instances with SRVCTL in Oracle RAC
See Also:
Oracle Automatic Storage Management Administrator's Guide for complete information about managing Oracle ASM

You can create Oracle ASM disk groups and configure mirroring for Oracle ASM disk groups using the Oracle ASM configuration assistant (ASMCA). After your Oracle RAC database is operational, you can administer Oracle ASM disk groups with Oracle Enterprise Manager.
The Oracle tools that you use to manage Oracle ASM, including ASMCA, Oracle Enterprise Manager, and the silent mode install and upgrade commands, include options to manage Oracle ASM instances and disk groups.
You can use the Cluster Verification Utility (CVU) to verify the integrity of Oracle ASM across the cluster. Typically, this check ensures that the Oracle ASM instances on all nodes run from the same Oracle home and, if asmlib exists, that it is a valid version and has valid ownership. Run the following command to perform this check:
cluvfy comp asm [-n node_list] [-verbose]
Replace node_list with a comma-delimited list of node names on which the check is to be performed. Specify all to check all nodes in the cluster.
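For example, a hypothetical run against two nodes named node1 and node2:

cluvfy comp asm -n node1,node2 -verbose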
Use the cluvfy comp ssa command to locate shared storage.
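For example, a sketch that checks shared storage accessibility on all nodes; the options follow the same conventions as the previous command:

cluvfy comp ssa -n all -verbose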
See Also:
Oracle Clusterware Administration and Deployment Guide for more information about CVU

When you create a disk group for a cluster or add new disks to an existing clustered disk group, prepare the underlying physical storage on shared disks and give the Oracle user permission to read and write to the disk. The shared disk requirement is the only substantial difference between using Oracle ASM with an Oracle RAC database and using it with a noncluster Oracle database. Oracle ASM automatically rebalances the storage load after you add or delete a disk or disk group.
In a cluster, each Oracle ASM instance manages its node's metadata updates to the disk groups. In addition, each Oracle ASM instance coordinates disk group metadata with other nodes in the cluster. As with noncluster Oracle databases, you can use Oracle Enterprise Manager, ASMCA, SQL*Plus, and the Server Control Utility (SRVCTL) to administer disk groups for Oracle ASM that are used by Oracle RAC. The Oracle Automatic Storage Management Administrator's Guide explains how to use SQL*Plus to administer Oracle ASM instances. Subsequent sections describe how to use the other tools.
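For example, a sketch of adding a disk to an existing disk group from SQL*Plus while connected to an Oracle ASM instance; the disk group name, disk path, and rebalance power are hypothetical:

ALTER DISKGROUP data ADD DISK '/dev/sdd1' REBALANCE POWER 4;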
Note:
When you start ASMCA, if there is not an Oracle ASM instance, then the utility prompts you to create one.

To use Oracle ASM, you must first create disk groups with ASMCA before creating a database with DBCA. You can also use the Oracle ASM disk group management feature to create and manage an Oracle ASM instance and its associated disk groups independently of creating a database. You can use Oracle Enterprise Manager or DBCA to add disks to a disk group, to mount a disk group or to mount all of the disk groups, or to create Oracle ASM instances. Additionally, you can use Oracle Enterprise Manager to dismount and drop disk groups or to delete Oracle ASM instances.
Oracle ASM instances are created when you install Oracle Clusterware. To create an Oracle ASM disk group, run ASMCA from the Grid_home/bin directory. You can also use the Oracle ASM Disk Groups page in ASMCA for Oracle ASM management. That is, you can configure Oracle ASM storage separately from database creation. For example, from the ASM Disk Groups page, you can create disk groups, add disks to existing disk groups, or mount disk groups that are not currently mounted.
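For example, a hypothetical invocation assuming a Grid home of /u01/app/11.2.0/grid (substitute your own Grid_home path):

/u01/app/11.2.0/grid/bin/asmca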
See Also:
Oracle Automatic Storage Management Administrator's Guide for information about managing Oracle ASM

When you start ASMCA, if the Oracle ASM instance has not been created, then ASMCA prompts you to create the instance. ASMCA prompts you for the sysasm password and the ASMSNMP password.
When you configure Oracle ASM failure groups, it may be more efficient for a node to read from an extent that is closest to the node, even if that extent is a secondary extent. You can configure Oracle ASM to read from a secondary extent when that extent is closer to the node, instead of reading from the primary copy, which might be farther from the node. Using preferred read failure groups is most beneficial in an extended distance cluster.
To configure this feature, set the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter to specify a list of failure group names as preferred read disks. Oracle recommends that you configure at least one mirrored extent copy from a disk that is local to a node in an extended cluster. However, a failure group that is preferred for one instance might be remote to another instance in the same Oracle RAC database. The parameter setting for preferred read failure groups is instance specific.
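For example, a sketch of instance-specific settings for two Oracle ASM instances at different sites; the disk group name, failure group names, and SIDs (DATA.SITEA, DATA.SITEB, +ASM1, +ASM2) are hypothetical:

ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITEA' SID='+ASM1';
ALTER SYSTEM SET ASM_PREFERRED_READ_FAILURE_GROUPS = 'DATA.SITEB' SID='+ASM2';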
See Also:
Oracle Automatic Storage Management Administrator's Guide for complete information about configuring preferred mirror read disks in extended distance clusters
Oracle Database Reference for information about the ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter
When installing Oracle Grid Infrastructure, any nonclustered Oracle ASM instances are automatically converted to clustered Oracle ASM.
See Also:
Oracle Database 2 Day + Real Application Clusters Guide for information about using Oracle Enterprise Manager Grid Control to convert nonclustered Oracle ASM to clustered Oracle ASM
Oracle Real Application Clusters Installation Guide for your platform for detailed information about converting Oracle ASM using the rconfig command
You can use the Server Control Utility (SRVCTL) to add or remove an Oracle ASM instance. To issue SRVCTL commands to manage Oracle ASM, log in as the operating system user that owns the Oracle Grid Infrastructure home and issue the SRVCTL commands from the bin directory of the Oracle Grid Infrastructure home.
Use the following syntax to add an Oracle ASM instance:
srvctl add asm
Use the following syntax to remove an Oracle ASM instance:
srvctl remove asm [-f]
You can also use SRVCTL to start, stop, and obtain the status of an Oracle ASM instance as in the following examples.
Use the following syntax to start an Oracle ASM instance:
srvctl start asm [-n node_name] [-o start_options]
Use the following syntax to stop an Oracle ASM instance:
srvctl stop asm [-n node_name] [-o stop_options]
Use the following syntax to show the configuration of an Oracle ASM instance:
srvctl config asm -n node_name
Use the following syntax to display the state of an Oracle ASM instance:
srvctl status asm [-n node_name]
See Also:
Appendix A, "Server Control Utility Reference" for more SRVCTL commands you can use to administer Oracle ASM
Oracle Automatic Storage Management Administrator's Guide for more information about administering Oracle ASM instances