This chapter describes the storage configuration tasks that you must complete before you start the installer to install Oracle Clusterware and Oracle Automatic Storage Management (Oracle ASM), and that you must complete before adding an Oracle Real Application Clusters (Oracle RAC) installation to the cluster.
This chapter contains the following topics:
This section describes the supported storage options for Oracle Grid Infrastructure for a cluster. It contains the following sections:
Overview of Oracle Clusterware and Oracle RAC Storage Options
General Storage Considerations for Oracle Grid Infrastructure and Oracle RAC
Using Logical Volume Managers with Oracle Grid Infrastructure and Oracle RAC
See Also:
The Oracle Certification site on My Oracle Support for the most current information about certified storage options: https://support.oracle.com
There are two ways of storing Oracle Clusterware files:
Oracle Automatic Storage Management (Oracle ASM): You can install Oracle Clusterware files (Oracle Cluster Registry and voting disks) in Oracle ASM disk groups.
Oracle ASM is the required database storage option for Typical installations, and for Standard Edition Oracle RAC installations. It is an integrated, high-performance database file system and disk manager for Oracle Clusterware and Oracle Database files. It performs striping and mirroring of database files automatically.
Only one Oracle ASM instance is permitted for each node regardless of the number of database instances on the node.
A supported shared file system: Supported file systems include the following:
A supported cluster file system, such as a Sun QFS shared file system. Note that if you intend to use a cluster file system for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Clusterware.
See Also:
The Certify page on My Oracle Support for supported cluster file systems
Network File System (NFS): Note that if you intend to use NFS for your data files, then you should create partitions large enough for the database files when you create partitions for Oracle Grid Infrastructure. NFS mounts differ for software binaries, Oracle Clusterware files, and database files.
Note:
You can no longer use OUI to install Oracle Clusterware or Oracle Database files on block or raw devices.
See Also:
My Oracle Support for supported file systems and NFS or NAS filers
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) provides a general purpose file system. You can place Oracle Database binaries on this file system, but you cannot place Oracle data files or Oracle Clusterware files on Oracle ACFS.
Note the following about Oracle ACFS:
Oracle Restart does not support root-based Oracle Clusterware resources. For this reason, the following restrictions apply if you run Oracle ACFS on an Oracle Restart configuration:
You must manually load and unload Oracle ACFS drivers.
You must manually mount and unmount Oracle ACFS file systems after the Oracle ASM instance is running.
You can place Oracle ACFS database home file systems into the Oracle ACFS mount registry, along with other registered Oracle ACFS file systems.
You cannot put Oracle Clusterware binaries and files on Oracle ACFS.
Oracle ACFS provides a general purpose file system for other files.
For all installations, you must choose the storage option to use for Oracle Grid Infrastructure (Oracle Clusterware and Oracle ASM), and Oracle RAC databases. To enable automated backups during the installation, you must also choose the storage option to use for recovery files (the Fast Recovery Area). You do not have to use the same storage option for each file type.
Oracle Clusterware voting disks are used to monitor cluster node status, and Oracle Cluster Registry (OCR) files contain configuration information about the cluster. You can place voting disks and OCR files either in an Oracle ASM disk group, or on a cluster file system or shared network file system. Storage must be shared; any node that does not have access to an absolute majority of voting disks (more than half) will be restarted.
Use the following guidelines when choosing the storage options to use for each file type:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
Oracle recommends that you choose Oracle ASM as the storage option for database and recovery files.
For Standard Edition Oracle RAC installations, Oracle ASM is the only supported storage option for database or recovery files.
If you intend to use Oracle ASM with Oracle RAC, and you are configuring a new Oracle ASM instance, then your system must meet the following conditions:
All nodes on the cluster have Oracle Clusterware and Oracle ASM 11g release 2 (11.2) installed as part of an Oracle Grid Infrastructure for a cluster installation.
Any existing Oracle ASM instance on any node in the cluster is shut down.
Raw or block devices are supported only when upgrading an existing installation using the partitions already configured. On new installations, using raw or block device partitions is not supported by Oracle Automatic Storage Management Configuration Assistant (ASMCA) or Oracle Universal Installer (OUI), but is supported by the software if you perform manual configuration.
See Also:
Oracle Database Upgrade Guide for information about how to prepare for upgrading an existing database
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk areas to provide voting disk redundancy.
During Oracle Grid Infrastructure installation, you can create one disk group. After the Oracle Grid Infrastructure installation, you can create additional disk groups using ASMCA, SQL*Plus, or ASMCMD. Note that with Oracle Database 11g release 2 (11.2) and later releases, Oracle Database Configuration Assistant (DBCA) does not have the functionality to create disk groups for Oracle ASM.
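For example, a minimal sketch of creating an additional disk group from SQL*Plus after installation, assuming your environment is set to the Oracle ASM instance; the disk paths /devices/diska1 and /devices/diskb1 are hypothetical placeholders that must be visible to the Oracle ASM disk discovery string:
$ sqlplus / as sysasm
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  2    DISK '/devices/diska1', '/devices/diskb1';
You can achieve the same result with ASMCA or ASMCMD.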
If you install Oracle Database or Oracle RAC after you install Oracle Grid Infrastructure, then you can either use the same disk group for database files, OCR, and voting disk files, or you can use different disk groups. If you create multiple disk groups before installing Oracle RAC or before creating a database, then you can decide to do one of the following:
Place the data files in the same disk group as the Oracle Clusterware files.
Use the same Oracle ASM disk group for data files and recovery files.
Use different disk groups for each file type.
If you create only one disk group for storage, then the OCR and voting disk files, database files, and recovery files are contained in the one disk group. If you create multiple disk groups for storage, then you can choose to place files in different disk groups.
Note:
The Oracle ASM instance that manages the existing disk group should be running in the Grid home.
See Also:
Oracle Database Storage Administrator's Guide for information about creating disk groups
Oracle Grid Infrastructure and Oracle RAC support only cluster-aware volume managers; that is, the volume manager that you want to use must be part of a vendor cluster solution. To confirm that a volume manager is supported, check the Certifications tab on My Oracle Support to verify that the associated cluster solution is certified for Oracle RAC. My Oracle Support is available at the following URL:
https://support.oracle.com
The following table shows the storage options supported for storing Oracle Clusterware and Oracle RAC files.
Table 3-1 Supported Storage Options for Oracle Clusterware and Oracle RAC
Storage Option | OCR and Voting Disks | Oracle Clusterware binaries | Oracle RAC binaries | Oracle Database Files | Oracle Recovery Files |
---|---|---|---|---|---|
Oracle Automatic Storage Management (Oracle ASM) Note: Loopback devices are not supported for use with Oracle ASM | Yes | No | No | Yes | Yes |
Oracle Automatic Storage Management Cluster File System (Oracle ACFS) | No | No | Yes | No | No |
Local file system | No | Yes | Yes | No | No |
NFS file system on a certified NAS filer Note: Direct NFS Client does not support Oracle Clusterware files. | Yes | Yes | Yes | Yes | Yes |
Shared disk partitions (block devices or raw devices) | Not supported by OUI or ASMCA, but supported by the software. They can be added or removed after installation. | No | No | Not supported by OUI or ASMCA, but supported by the software. They can be added or removed after installation. | No |
Use the following guidelines when choosing storage options:
You can choose any combination of the supported storage options for each file type provided that you satisfy all requirements listed for the chosen storage options.
You can use Oracle ASM 11g release 2 (11.2) and later to store Oracle Clusterware files. You cannot use prior Oracle ASM releases to do this.
If you do not have a storage option that provides external file redundancy, then you must configure at least three voting disk locations and at least three Oracle Cluster Registry locations to provide redundancy.
When you have determined your disk storage options, configure shared storage:
To use a file system, refer to "Shared File System Storage Configuration".
To use Oracle Automatic Storage Management, refer to "Oracle Automatic Storage Management Storage Configuration".
The installer does not suggest a default location for the Oracle Cluster Registry (OCR) or the Oracle Clusterware voting disk. If you choose to create these files on a file system, then review the following sections to complete storage requirements for Oracle Clusterware files:
Deciding to Use a Cluster File System for Oracle Clusterware Files
Checking NFS Mount and Buffer Size Parameters for Oracle Clusterware
Checking NFS Mount and Buffer Size Parameters for Oracle RAC
Enabling Direct NFS Client Oracle Disk Manager Control of NFS
Creating Directories for Oracle Clusterware Files on Shared File Systems
Creating Directories for Oracle Database Files on Shared File Systems
Disabling Direct NFS Client Oracle Disk Management Control of NFS
To use a shared file system for Oracle Clusterware, Oracle ASM, and Oracle RAC, the file system must comply with the following requirements:
To use a cluster file system, it must be a supported cluster file system. Refer to My Oracle Support (https://support.oracle.com) for a list of supported cluster file systems.
To use an NFS file system, it must be on a supported NAS device. Log in to My Oracle Support at the following URL, and click the Certification tab to find the most current information about supported NAS devices: https://support.oracle.com
If you choose to place your Oracle Cluster Registry (OCR) files on a shared file system, then Oracle recommends that one of the following is true:
The disks used for the file system are on a highly available storage device (for example, a RAID device).
At least two file systems are mounted, and use the features of Oracle Clusterware 11g release 2 (11.2) to provide redundancy for the OCR.
If you choose to place your database files on a shared file system, then one of the following should be true:
The user account with which you perform the installation (oracle or grid) must have write permissions to create the files in the path that you specify.
Note:
Upgrading from Oracle9i release 2 using the raw device or shared file for the OCR that you used for the SRVM configuration repository is not supported.
If you are upgrading Oracle Clusterware, and your existing cluster uses 100 MB OCR and 20 MB voting disk partitions, then you must extend these partitions to at least 300 MB. Oracle recommends that you do not use partitions, but instead place OCR and voting disks in disk groups marked as QUORUM disk groups.
All storage products must be supported by both your server and storage vendors.
Use Table 3-2 and Table 3-3 to determine the minimum size for shared file systems:
Table 3-2 Oracle Clusterware Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size |
---|---|---|
Voting disks with external redundancy | 1 | At least 300 MB for each voting disk volume |
Oracle Cluster Registry (OCR) with external redundancy | 1 | At least 300 MB for each OCR volume |
Oracle Clusterware files (OCR and voting disks) with redundancy provided by Oracle software | 3 | At least 300 MB for each OCR volume; at least 300 MB for each voting disk volume |
Table 3-3 Oracle RAC Shared File System Volume Size Requirements
File Types Stored | Number of Volumes | Volume Size |
---|---|---|
Oracle Database files | 1 | At least 1.5 GB for each volume |
Recovery files Note: Recovery files must be on a different volume than database files | 1 | At least 2 GB for each volume |
In Table 3-2 and Table 3-3, the total required volume size is cumulative. For example, to store all Oracle Clusterware files on the shared file system with normal redundancy, you should have at least 2 GB of storage available over a minimum of three volumes (three separate volume locations for the OCR and two OCR mirrors, and one voting disk on each volume). You should have a minimum of three physical disks, each at least 500 MB, to ensure that voting disks and OCR files are on separate physical disks. If you add Oracle RAC using one volume for database files and one volume for recovery files, then you should have at least 3.5 GB available storage over two volumes, and at least 5.5 GB available total for all volumes.
The User Datagram Protocol (UDP) parameter settings define the amount of send and receive buffer space for sending and receiving datagrams over an IP network. These settings affect cluster interconnect transmissions. If the buffers set by these parameters are too small, then incoming UDP datagrams can be dropped due to insufficient space, which requires send-side retransmission. This can result in poor cluster performance.
On Oracle Solaris 10, the UDP parameters are udp_recv_hiwat and udp_xmit_hiwat. The default values for these parameters are 57344 bytes. Oracle recommends that you set these parameters to at least 65536 bytes.
To check current settings for udp_recv_hiwat and udp_xmit_hiwat, enter the following commands:
# ndd /dev/udp udp_xmit_hiwat
# ndd /dev/udp udp_recv_hiwat
To set the values of these parameters to 65536 bytes in current memory, enter the following commands:
# ndd -set /dev/udp udp_xmit_hiwat 65536
# ndd -set /dev/udp udp_recv_hiwat 65536
To set the UDP values persistently across system restarts, include the ndd commands in a system startup script. For example, the following script in /etc/rc2.d/S99ndd sets the parameters:
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
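For example, a minimal sketch (run as root) that creates this startup script and makes it executable; adjust the file name and permissions to your site standards:
cat > /etc/rc2.d/S99ndd <<'EOF'
ndd -set /dev/udp udp_xmit_hiwat 65536
ndd -set /dev/udp udp_recv_hiwat 65536
EOF
chmod 744 /etc/rc2.d/S99ndd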
On Oracle Solaris 11, the UDP parameters are send_buf and recv_buf.
To check current settings for send_buf and recv_buf, enter the following commands:
# ipadm show-prop -p send_buf udp
# ipadm show-prop -p recv_buf udp
To set the values of these parameters to 65536 bytes in current memory, enter the following commands:
# ipadm set-prop -p send_buf=65536 udp
# ipadm set-prop -p recv_buf=65536 udp
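To confirm the new values, you can display just the current value of each property. This is a sketch; the parsable output options shown here may vary slightly between Oracle Solaris 11 updates:
# ipadm show-prop -c -o current -p send_buf udp
# ipadm show-prop -c -o current -p recv_buf udp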
See Also:
"Overview of Tuning IP Suite Parameters" in Oracle Solaris Tunable Parameters Reference Manual, available at the following URL: http://download.oracle.com/docs/cd/E19082-01/819-2724/chapter4-2/index.html
For new installations, Oracle recommends that you use Oracle Automatic Storage Management (Oracle ASM) to store voting disk and OCR files.
See Also:
The Certification page on My Oracle Support
Direct NFS Client is an alternative to using kernel-managed NFS. This section contains the following information about Direct NFS Client:
With Oracle Database 11g release 2 (11.2), instead of using the operating system kernel NFS client, you can configure Oracle Database to access NFS V3 servers directly using an Oracle internal Direct NFS Client.
To enable Oracle Database to use Direct NFS Client, the NFS file systems must be mounted and available over regular NFS mounts before you start installation. Direct NFS Client manages settings after installation. You should still set the kernel mount options as a backup, but for normal operation, Direct NFS Client will manage NFS mounts.
Refer to your vendor documentation to complete NFS configuration and mounting.
Some NFS file servers require NFS clients to connect using reserved ports. If your filer is running with reserved port checking, then you must disable it for Direct NFS Client to operate. To disable reserved port checking, consult your NFS file server documentation.
For NFS servers that restrict port range, you can use the insecure option to enable clients other than root to connect to the NFS server. Alternatively, you can disable Direct NFS Client as described in Section 3.2.12, "Disabling Direct NFS Client Oracle Disk Management Control of NFS".
Note:
Use NFS servers certified for Oracle RAC. Refer to the following URL for certification information: https://support.oracle.com
If you use Direct NFS Client, then you can choose to use a file specific to Oracle data file management, oranfstab, to specify additional options for Direct NFS Client. For example, you can use oranfstab to specify additional paths for a mount point. You can add the oranfstab file either to /etc or to $ORACLE_HOME/dbs.
With shared Oracle homes, when the oranfstab file is placed in $ORACLE_HOME/dbs, the entries in the file are specific to a single database. In this case, all nodes running an Oracle RAC database use the same $ORACLE_HOME/dbs/oranfstab file. In non-shared Oracle RAC installs, oranfstab must be replicated on all nodes.
When the oranfstab file is placed in /etc, it is globally available to all Oracle databases, and can contain mount points used by all Oracle databases running on nodes in the cluster, including standalone databases. However, on Oracle RAC systems, if the oranfstab file is placed in /etc, then you must replicate the /etc/oranfstab file on all nodes, and keep each /etc/oranfstab file synchronized on all nodes, just as you must with the /etc/fstab file.
See Also:
Section 3.2.4.3, "Mounting NFS Storage Devices with Direct NFS Client" for information about configuring /etc/fstab
In all cases, mount points must be mounted by the kernel NFS system, even when they are being served using Direct NFS Client.
Direct NFS Client determines mount point settings for NFS storage devices based on the configuration in /etc/mnttab, which changes when you configure the /etc/fstab file.
Direct NFS Client searches for mount entries in the following order:
$ORACLE_HOME/dbs/oranfstab
/var/opt/oracle/oranfstab
/etc/mnttab
Direct NFS Client uses the first matching entry found.
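As an illustration of this search order, the following sketch (assuming ORACLE_HOME is set) simply reports the first of these configuration sources that exists on a node; it is a convenience check only and is not part of Oracle tooling:
for f in $ORACLE_HOME/dbs/oranfstab /var/opt/oracle/oranfstab /etc/mnttab
do
  if [ -f "$f" ]; then
    echo "First configuration source present: $f"
    break
  fi
done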
Oracle Database is not shipped with Direct NFS Client enabled by default. To enable Direct NFS Client, complete the following steps:
Change directory to $ORACLE_HOME/rdbms/lib.
Enter the following command:
make -f ins_rdbms.mk dnfs_on
Note:
You can have only one active Direct NFS Client implementation for each instance. Using Direct NFS Client on an instance will prevent another Direct NFS Client implementation.
If Oracle Database uses Direct NFS Client mount points configured using oranfstab, then it first verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS Client logs an informational message, and does not operate.
If Oracle Database cannot open an NFS server using Direct NFS Client, then Oracle Database uses the platform operating system kernel NFS client. In this case, the kernel NFS mount options must be set up as defined in Section 3.2.8, "Checking NFS Mount and Buffer Size Parameters for Oracle RAC." Additionally, an informational message is logged into the Oracle alert and trace files indicating that Direct NFS Client could not connect to an NFS server.
Section 3.1.6, "Supported Storage Options" lists the file types that are supported by Direct NFS Client.
The Oracle files resident on the NFS server that are served by the Direct NFS Client are also accessible through the operating system kernel NFS client.
See Also:
Oracle Database Administrator's Guide for guidelines to follow regarding managing Oracle Database data files created with Direct NFS Client or kernel NFS
Direct NFS Client can use up to four network paths defined in the oranfstab file for an NFS server. Direct NFS Client performs load balancing across all specified paths. If a specified path fails, then Direct NFS Client reissues I/O commands over any remaining paths.
Use the following SQL*Plus views for managing Direct NFS Client in a cluster environment:
gv$dnfs_servers: Shows a table of servers accessed using Direct NFS Client.
gv$dnfs_files: Shows a table of files currently open using Direct NFS Client.
gv$dnfs_channels: Shows a table of open network paths (or channels) to servers for which Direct NFS Client is providing files.
gv$dnfs_stats: Shows a table of performance statistics for Direct NFS Client.
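For example, a quick check from SQL*Plus (connected with SYSDBA privileges) that lists the NFS servers and files currently visible to Direct NFS Client across all instances; the columns selected here are illustrative:
$ sqlplus / as sysdba
SQL> SELECT inst_id, svrname, dirname FROM gv$dnfs_servers;
SQL> SELECT inst_id, filename FROM gv$dnfs_files;
If these views return no rows while the database files are on NFS storage, then the database is accessing them through the kernel NFS client rather than Direct NFS Client.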
Note:
Use v$ views for single instances, and gv$ views for Oracle Clusterware and Oracle RAC storage.
Network-attached storage (NAS) systems use NFS to access data. You can store data files on a supported NFS system.
NFS file systems must be mounted and available over NFS mounts before you start installation. Refer to your vendor documentation to complete NFS configuration and mounting.
Be aware that the performance of Oracle software and databases stored on NAS devices depends on the performance of the network connection between the Oracle server and the NAS device.
For this reason, Oracle recommends that you connect the server to the NAS device using a private dedicated network connection, which should be Gigabit Ethernet or better.
If you are using NFS for the Grid home or Oracle RAC home, then you must set up the NFS mounts on the storage so that they allow root on the clients mounting to the storage to be considered root instead of being mapped to an anonymous user, and allow root on the client server to create files on the NFS filesystem that are owned by root.
On NFS, you can obtain root access for clients writing to the storage by enabling no_root_squash on the server side. For example, to set up Oracle Clusterware file storage in the path /vol/grid, with nodes node1, node2, and node3 in the domain mycluster.example.com, add a line similar to the following to the /etc/exports file:
/vol/grid/ node1.mycluster.example.com(rw,no_root_squash) node2.mycluster.example.com(rw,no_root_squash) node3.mycluster.example.com(rw,no_root_squash)
If the domain or DNS is secure so that no unauthorized system can obtain an IP address on it, then you can grant root access by domain, rather than specifying particular cluster member nodes:
For example:
/vol/grid/ *.mycluster.example.com(rw,no_root_squash)
Oracle recommends that you use a secure DNS or domain, and grant root access to cluster member nodes using the domain, as using this syntax allows you to add or remove nodes without the need to reconfigure the NFS server.
If you use Grid Naming Service (GNS), then the subdomain allocated for resolution by GNS within the cluster is a secure domain. Any server without a correctly signed Grid Plug and Play (GPnP) profile cannot join the cluster, so an unauthorized system cannot obtain or use names inside the GNS subdomain.
Caution:
Granting root access by domain can be used to obtain unauthorized access to systems. System administrators should refer to their operating system documentation for the risks associated with using no_root_squash.
After changing /etc/exports, reload the file system mount using the following command:
# /usr/sbin/exportfs -avr
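You can then verify from a cluster member node that the export is visible, where nfs_server is a placeholder for the host name of your NFS server, as in the examples in this section:
# /usr/sbin/showmount -e nfs_server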
On the cluster member nodes, you must set the values for the NFS buffer size parameters rsize and wsize to 32768.
The NFS client-side mount options for Oracle Grid Infrastructure binaries are:
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid 0 0
If you have Oracle Grid Infrastructure binaries on an NFS mount, then you must include the suid option.
The NFS client-side mount options for Oracle Clusterware files (OCR and voting disk files) are:
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
Update the /etc/vfstab file on each node with an entry containing the NFS mount options for your platform. For example, if your platform is x86-64, and you are creating a mount point for Oracle Clusterware files, then update the /etc/vfstab files with an entry similar to the following:
nfs_server:/vol/grid - /u02/oracle/cwfiles nfs \
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,noac,forcedirectio
Note that mount point options are different for Oracle software binaries, Oracle Clusterware files (OCR and voting disks), and data files.
To create a mount point for binaries only, provide an entry similar to the following for a binaries mount point:
nfs_server:/vol/bin /u02/oracle/grid nfs \
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,vers=3,suid 0 0
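After updating /etc/vfstab, a short sketch to mount the file system on a node and confirm the NFS mount options in effect (nfsstat -m lists the options for each NFS mount); the mount point follows the binaries example above:
# mount /u02/oracle/grid
# nfsstat -m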
See Also:
My Oracle Support bulletin 359515.1, "Mount Options for Oracle Files When Used with NAS Devices" for the most current information about mount options, available from the following URL: https://support.oracle.com
Note:
Refer to your storage vendor documentation for additional information about mount options.
If you use NFS mounts, then you must mount NFS volumes used for storing database files with special mount options on each node that has an Oracle RAC instance. When mounting an NFS file system, Oracle recommends that you use the same mount point options that your NAS vendor used when certifying the device. Refer to your device documentation or contact your vendor for information about recommended mount-point options.
Update the /etc/vfstab file on each node with an entry similar to the following:
nfs_server:/vol/DATA/oradata /u02/oradata nfs \
rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,noac,forcedirectio,vers=3,suid 0 0
The mandatory mount options comprise the minimum set of mount options that you must use while mounting the NFS volumes. These mount options are essential to protect the integrity of the data and to prevent any database corruption. Failure to use these mount options may result in the generation of file access errors. Refer to your operating system or NAS device documentation for more information about the specific options supported on your platform.
See Also:
My Oracle Support note 359515.1 for updated NAS mount option information, available at the following URL: https://support.oracle.com
Complete the following procedure to enable Direct NFS Client:
Create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS Client:
Server: The NFS server name.
Local: Up to four paths on the database host, specified by IP address or by name, as displayed using the ifconfig command run on the database host.
Path: Up to four network paths to the NFS server, specified either by IP address, or by name, as displayed using the ifconfig command on the NFS server.
Export: The exported path from the NFS server.
Mount: The corresponding local mount point for the exported volume.
Mnt_timeout: Specifies (in seconds) the time Direct NFS Client should wait for a successful mount before timing out. This parameter is optional. The default timeout is 10 minutes (600).
Dontroute: Specifies that outgoing messages should not be routed by the operating system, but instead sent using the IP address to which they are bound.
The examples that follow show three possible NFS server entries in oranfstab. A single oranfstab file can have multiple NFS server entries.
Example 3-1 Using Local and Path NFS Server Entries
The following example uses both local and path. Since they are in different subnets, we do not have to specify dontroute.
server: MyDataServer1
local: 192.0.2.0
path: 192.0.2.1
local: 192.0.100.0
path: 192.0.100.1
export: /vol/oradata1 mount: /mnt/oradata1
Example 3-2 Using Local and Path in the Same Subnet, with dontroute
The following example shows local and path in the same subnet. dontroute is specified in this case:
server: MyDataServer2
local: 192.0.2.0
path: 192.0.2.128
local: 192.0.2.1
path: 192.0.2.129
dontroute
export: /vol/oradata2 mount: /mnt/oradata2
Example 3-3 Using Names in Place of IP Addresses, with Multiple Exports
server: MyDataServer3
local: LocalPath1
path: NfsPath1
local: LocalPath2
path: NfsPath2
local: LocalPath3
path: NfsPath3
local: LocalPath4
path: NfsPath4
dontroute
export: /vol/oradata3 mount: /mnt/oradata3
export: /vol/oradata4 mount: /mnt/oradata4
export: /vol/oradata5 mount: /mnt/oradata5
export: /vol/oradata6 mount: /mnt/oradata6
By default, Direct NFS Client is installed in a disabled state. To enable Direct NFS Client, complete the following steps on each node. If you use a shared Grid home for the cluster, then complete the following steps in the shared Grid home:
Log in as the Oracle Grid Infrastructure installation owner.
Change directory to Grid_home/rdbms/lib.
Enter the following command:
$ make -f ins_rdbms.mk dnfs_on
Use the following instructions to create directories for Oracle Clusterware files. You can also configure shared file systems for the Oracle Database and recovery files.
Note:
For NFS storage, you must complete this procedure only if you want to place the Oracle Clusterware files on a separate file system from the Oracle base directory.
To create directories for the Oracle Clusterware files on separate file systems from the Oracle base directory, follow these steps:
If necessary, configure the shared file systems to use and mount them on each node.
Note:
The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
Use the df command to determine the free disk space on each mounted file system.
From the display, identify the file systems to use. Choose a file system with a minimum of 600 MB of free disk space (one OCR and one voting disk, with external redundancy).
If you are using the same file system for multiple file types, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (typically, grid or oracle) has permissions to create directories on the storage location where you plan to install Oracle Clusterware files, then OUI creates the Oracle Clusterware file directory.
If the user performing installation does not have write access, then you must create these directories manually, using commands similar to the following to create the recommended subdirectories in each of the mount point directories and to set the appropriate owner, group, and permissions on them. For example, where the user is oracle, and the Oracle Clusterware file storage area is cluster:
# mkdir /mount_point/cluster
# chown oracle:oinstall /mount_point/cluster
# chmod 775 /mount_point/cluster
Note:
After installation, directories in the installation path for the Oracle Cluster Registry (OCR) files should be owned by root, and not writable by any account other than root.
When you have completed creating a subdirectory in the mount point directory, and set the appropriate owner, group, and permissions, you have completed NFS configuration for Oracle Grid Infrastructure.
Use the following instructions to create directories for shared file systems for Oracle Database and recovery files (for example, for an Oracle RAC database).
If necessary, configure the shared file systems and mount them on each node.
Note:
The mount point that you use for the file system must be identical on each node. Ensure that the file systems are configured to mount automatically when a node restarts.
Use the df -h command to determine the free disk space on each mounted file system.
From the display, identify the file systems:
File Type | File System Requirements |
---|---|
Database files | Choose either a single file system with at least 1.5 GB of free disk space, or two or more file systems with at least 1.5 GB of free disk space in total. |
Recovery files | Choose a file system with at least 2 GB of free disk space. |
If you are using the same file system for multiple file types, then add the disk space requirements for each type to determine the total disk space requirement.
Note the names of the mount point directories for the file systems that you identified.
If the user performing installation (typically, oracle) has permissions to create directories on the disks where you plan to install Oracle Database, then DBCA creates the Oracle Database file directory, and the Recovery file directory.
If the user performing installation does not have write access, then you must create these directories manually using commands similar to the following to create the recommended subdirectories in each of the mount point directories and set the appropriate owner, group, and permissions on them:
Database file directory:
# mkdir /mount_point/oradata
# chown oracle:oinstall /mount_point/oradata
# chmod 775 /mount_point/oradata
Recovery file directory (Fast Recovery Area):
# mkdir /mount_point/fast_recovery_area
# chown oracle:oinstall /mount_point/fast_recovery_area
# chmod 775 /mount_point/fast_recovery_area
Making members of the oinstall group owners of these directories permits them to be read by multiple Oracle homes, including those with different OSDBA groups.
When you have completed creating subdirectories in each of the mount point directories, and set the appropriate owner, group, and permissions, you have completed NFS configuration for Oracle Database shared storage.
Complete the following steps to disable Direct NFS Client:
Log in as the Oracle Grid Infrastructure installation owner, and disable Direct NFS Client using the following commands, where Grid_home is the path to the Oracle Grid Infrastructure home:
$ cd Grid_home/rdbms/lib
$ make -f ins_rdbms.mk dnfs_off
Enter these commands on each node in the cluster, or on the shared Grid home if you are using a shared home for the Oracle Grid Infrastructure installation.
Remove the oranfstab file.
Note:
If you remove an NFS path that Oracle Database is using, then you must restart the database for the change to be effective.
Review the following sections to configure storage for Oracle Automatic Storage Management:
This section describes how to configure storage for use with Oracle Automatic Storage Management (Oracle ASM).
To identify the storage requirements for using Oracle ASM, you must determine how many devices and the amount of free disk space that you require. To complete this task, follow these steps:
Determine whether you want to use Oracle ASM for Oracle Clusterware files (OCR and voting disks), Oracle Database files, recovery files, or all files except for Oracle Clusterware or Oracle Database binaries. Oracle Database files include data files, control files, redo log files, the server parameter file, and the password file.
Note:
You do not have to use the same storage mechanism for Oracle Clusterware files, Oracle Database files, and recovery files. You can use a shared file system for one file type and Oracle ASM for the other.
If you choose to enable automated backups and you do not have a shared file system available, then you must choose Oracle ASM for recovery file storage.
If you enable automated backups during the installation, then you can select Oracle ASM as the storage mechanism for recovery files by specifying an Oracle Automatic Storage Management disk group for the Fast Recovery Area. If you select a noninteractive installation mode, then by default it creates one disk group and stores the OCR and voting disk files there. If you want to have any other disk groups for use in a subsequent database install, then you can choose interactive mode, or run ASMCA (or a command line tool) to create the appropriate disk groups before starting the database install.
Choose the Oracle ASM redundancy level to use for the Oracle ASM disk group.
The redundancy level that you choose for the Oracle ASM disk group determines how Oracle ASM mirrors files in the disk group and determines the number of disks and amount of free disk space that you require, as follows:
External redundancy
An external redundancy disk group requires a minimum of one disk device. The effective disk space in an external redundancy disk group is the sum of the disk space in all of its devices.
For Oracle Clusterware files, External redundancy disk groups provide 1 voting disk file, and 1 OCR, with no copies. You must use an external technology to provide mirroring for high availability.
Because Oracle ASM does not mirror data in an external redundancy disk group, Oracle recommends that you use external redundancy with storage devices such as RAID, or other similar devices that provide their own data protection mechanisms.
Normal redundancy
In a normal redundancy disk group, to increase performance and reliability, Oracle ASM by default uses two-way mirroring. A normal redundancy disk group requires a minimum of two disk devices (or two failure groups). The effective disk space in a normal redundancy disk group is half the sum of the disk space in all of its devices.
For Oracle Clusterware files, Normal redundancy disk groups provide 3 voting disk files, 1 OCR and 2 copies (one primary and one secondary mirror). With normal redundancy, the cluster can survive the loss of one failure group.
For most installations, Oracle recommends that you select normal redundancy.
High redundancy
In a high redundancy disk group, Oracle ASM uses three-way mirroring to increase performance and provide the highest level of reliability. A high redundancy disk group requires a minimum of three disk devices (or three failure groups). The effective disk space in a high redundancy disk group is one-third the sum of the disk space in all of its devices.
For Oracle Clusterware files, High redundancy disk groups provide 5 voting disk files, 1 OCR and 3 copies (one primary and two secondary mirrors). With high redundancy, the cluster can survive the loss of two failure groups.
While high redundancy disk groups do provide a high level of data protection, you should consider the greater cost of additional storage devices before deciding to select high redundancy disk groups.
Determine the total amount of disk space that you require for Oracle Clusterware files, and for the database files and recovery files.
Use Table 3-4 and Table 3-5 to determine the minimum number of disks and the minimum disk space requirements for installing Oracle Clusterware files, and installing the starter database, where you have voting disks in a separate disk group:
Table 3-4 Total Oracle Clusterware Storage Space Required by Redundancy Type
Redundancy Level | Minimum Number of Disks | Oracle Cluster Registry (OCR) Files | Voting Disk Files | Both File Types |
---|---|---|---|---|
External | 1 | 300 MB | 300 MB | 600 MB |
Normal | 3 | 600 MB | 900 MB | 1.5 GB (Footnote 1) |
High | 5 | 900 MB | 1.5 GB | 2.4 GB |
Footnote 1 If you create a disk group during installation, then it must be at least 2 GB.
Note:
If the voting disk files are in a disk group, be aware that disk groups with Oracle Clusterware files (OCR and voting disks) have a higher minimum number of failure groups than other disk groups.
If you create a disk group as part of the installation in order to install the OCR and voting disk files, then the installer requires that you create these files on a disk group with at least 2 GB of available space.
A quorum failure group is a special type of failure group, and disks in these failure groups do not contain user data. A quorum failure group is not considered when determining redundancy requirements with respect to storing user data. However, a quorum failure group counts when mounting a disk group.
Determine an allocation unit size. Every Oracle ASM disk is divided into allocation units (AU). An allocation unit is the fundamental unit of allocation within a disk group. You can select the AU Size value from 1, 2, 4, 8, 16, 32 or 64 MB, depending on the specific disk group compatibility level. The default value is set to 1 MB.
For Oracle Clusterware installations, you must also add additional disk space for the Oracle ASM metadata. You can use the following formula to calculate the disk space requirements (in MB) for OCR and voting disk files, and the Oracle ASM metadata:
total = [2 * ausize * disks] + [redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)]
Where:
redundancy = Number of mirrors: external = 1, normal = 2, high = 3.
ausize = Metadata AU size in megabytes.
nodes = Number of nodes in cluster.
clients = Number of database instances for each node.
disks = Number of disks in disk group.
For example, for a four-node Oracle RAC installation, using three disks in a normal redundancy disk group, you require an additional 1684 MB of space:
[2 * 1 * 3] + [2 * (1 * (4 * (4 + 1)+ 30)+ (64 * 4)+ 533)] = 1684 MB
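If you want to script this calculation, the following sketch reproduces the formula in POSIX shell arithmetic (ksh or bash); the variable values correspond to the four-node example above and are placeholders for your own configuration:
redundancy=2   # normal redundancy
ausize=1       # AU size in MB
nodes=4
clients=4
disks=3
total=$(( (2 * ausize * disks) + (redundancy * (ausize * (nodes * (clients + 1) + 30) + (64 * nodes) + 533)) ))
echo "Additional Oracle ASM metadata space required: ${total} MB"
Running this with the values shown prints 1684 MB.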
To ensure high availability of Oracle Clusterware files on Oracle ASM, for a normal redundancy disk group, as a general rule for most installations, you must have at least 2 GB of disk space for Oracle Clusterware files in three separate failure groups, with at least three physical disks. To ensure that the effective disk space to create Oracle Clusterware files is 2 GB, best practice suggests that you ensure at least 2.1 GB of capacity for each disk, with a total capacity of at least 6.3 GB for three disks.
Optionally, identify failure groups for the Oracle ASM disk group devices.
If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.
Note:
Define custom failure groups after installation, using the GUI tool ASMCA, the command line tool asmcmd, or SQL commands.
If you define custom failure groups, then for failure groups containing database files only, you must specify a minimum of two failure groups for normal redundancy disk groups and three failure groups for high redundancy disk groups.
For failure groups containing database files and clusterware files, including voting disks, you must specify a minimum of three failure groups for normal redundancy disk groups, and five failure groups for high redundancy disk groups.
Disk groups containing voting files must have at least 3 failure groups for normal redundancy or at least 5 failure groups for high redundancy. Otherwise, the minimum is 2 and 3 respectively. The minimum number of failure groups applies whether or not they are custom failure groups.
If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.
Do not specify multiple partitions on a single physical disk as a disk group device. Each disk group device should be on a separate physical disk.
Although you can specify a logical volume as a device in an Oracle ASM disk group, Oracle does not recommend their use because it adds a layer of complexity that is unnecessary with Oracle ASM. In addition, Oracle RAC requires a cluster logical volume manager in case you decide to use a logical volume with Oracle ASM and Oracle RAC.
Oracle recommends that if you choose to use a logical volume manager, then use the logical volume manager to represent a single LUN without striping or mirroring, so that you can minimize the impact of the additional storage layer.
If you have a certified NAS storage device, then you can create zero-padded files in an NFS mounted directory and use those files as disk devices in an Oracle ASM disk group.
To create these files, follow these steps:
If necessary, create an exported directory for the disk group files on the NAS device.
Refer to the NAS device documentation for more information about completing this step.
Switch user to root.
Create a mount point directory on the local system. For example:
# mkdir -p /mnt/oracleasm
To ensure that the NFS file system is mounted when the system restarts, add an entry for the file system in the mount file /etc/vfstab.
See Also:
My Oracle Support note 359515.1 for updated NAS mount option information, available at the following URL: https://support.oracle.com
For more information about editing the mount file for the operating system, refer to the man pages. For more information about recommended mount options, refer to Section 3.2.8, "Checking NFS Mount and Buffer Size Parameters for Oracle RAC".
Enter a command similar to the following to mount the NFS file system on the local system:
# mount /mnt/oracleasm
Choose a name for the disk group to create. For example: sales1.
Create a directory for the files on the NFS file system, using the disk group name as the directory name. For example:
# mkdir /mnt/oracleasm/sales1
Use commands similar to the following to create the required number of zero-padded files in this directory:
# dd if=/dev/zero of=/mnt/oracleasm/sales1/disk1 bs=1024k count=1000
This example creates 1 GB files on the NFS file system. You must create one, two, or three files respectively to create an external, normal, or high redundancy disk group.
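For example, to create three such files for a high redundancy disk group (reduce the count for external or normal redundancy), you could use a loop similar to the following, with the sales1 directory created in the previous step:
for i in 1 2 3
do
  dd if=/dev/zero of=/mnt/oracleasm/sales1/disk$i bs=1024k count=1000
done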
Enter commands similar to the following to change the owner, group, and permissions on the directory and files that you created, where the installation owner is grid, and the OSASM group is asmadmin:
# chown -R grid:asmadmin /mnt/oracleasm
# chmod -R 660 /mnt/oracleasm
If you plan to install Oracle RAC or a standalone Oracle Database, then during installation, edit the Oracle ASM disk discovery string to specify a regular expression that matches the file names you created. For example:
/mnt/oracleasm/sales1/
Select from the following choices to store either database or recovery files in an existing Oracle ASM disk group, depending on installation method:
If you select an installation method that runs Database Configuration Assistant in interactive mode, then you can decide whether you want to create a disk group, or to use an existing one.
The same choice is available to you if you use Database Configuration Assistant after the installation to create a database.
If you select an installation method that runs Database Configuration Assistant in noninteractive mode, then you must choose an existing disk group for the new database; you cannot create a disk group. However, you can add disk devices to an existing disk group if it has insufficient free space for your requirements.
Note:
The Oracle ASM instance that manages the existing disk group can be running in a different Oracle home directory.
To determine if an Oracle ASM disk group already exists, or to determine if there is sufficient disk space in a disk group, you can use the ASM command line tool (asmcmd), Oracle Enterprise Manager Grid Control, or Database Control. Alternatively, you can use the following procedure:
View the contents of the oratab file to determine if an Oracle ASM instance is configured on the system:
$ more /var/opt/oracle/oratab
If an Oracle ASM instance is configured on the system, then the oratab file should contain a line similar to the following:
+ASM2:oracle_home_path
In this example, +ASM2 is the system identifier (SID) of the Oracle ASM instance, with the node number appended, and oracle_home_path is the Oracle home directory where it is installed. By convention, the SID for an Oracle ASM instance begins with a plus sign.
Set the ORACLE_SID and ORACLE_HOME environment variables to specify the appropriate values for the Oracle ASM instance.
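For example, in a Bourne-type shell, using the SID from the oratab entry shown above and a hypothetical Grid home path:
$ ORACLE_SID=+ASM2; export ORACLE_SID
$ ORACLE_HOME=/u01/app/11.2.0/grid; export ORACLE_HOME
$ PATH=$ORACLE_HOME/bin:$PATH; export PATH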
Connect to the Oracle ASM instance and start the instance if necessary:
$ $ORACLE_HOME/bin/asmcmd
ASMCMD> startup
Enter one of the following commands to view the existing disk groups, their redundancy level, and the amount of free disk space in each one:
ASMCMD> lsdg
or:
$ORACLE_HOME/bin/asmcmd -p lsdg
From the output, identify a disk group with the appropriate redundancy level and note the free space that it contains.
If necessary, install or identify the additional disk devices required to meet the storage requirements listed in the previous section.
Note:
If you are adding devices to an existing disk group, then Oracle recommends that you use devices that have the same size and performance characteristics as the existing devices in that disk group.
You can configure raw partitions for use as Oracle ASM disk groups. To use ASM with raw partitions, you must create sufficient partitions for your data files, and then bind the partitions to raw devices. Make a list of the raw device names you create for the data files, and have the list available during database installation.
Use the following procedure to configure disks:
If necessary, install the disks that you intend to use for the disk group and restart the system.
Identify or create the disk slices (partitions) that you want to include in the Oracle ASM disk group:
To ensure that the disks are available, enter the following command:
# /usr/sbin/format
The output from this command is similar to the following:
AVAILABLE DISK SELECTIONS:
    0. c0t0d0 <ST34321A cyl 8892 alt 2 hd 15 sec 63>
       /pci@1f,0/pci@1,1/ide@3/dad@0,0
    1. c1t5d0 <SUN9.0G cyl 4924 alt 2 hd 27 sec 133>
       /pci@1f,0/pci@1/scsi@1/sd@5,0
This command displays information about each disk attached to the system, including the device name (cxtydz).
Enter the number corresponding to the disk that you want to use.
Use the fdisk command to create an Oracle Solaris partition on the disk if one does not already exist.
Oracle Solaris fdisk partitions must start at cylinder 1, not cylinder 0. If you create an fdisk partition, then you must label the disk before continuing.
Enter the partition command, followed by the print command, to display the partition table for the disk that you want to use.
If necessary, create a single whole-disk slice, starting at cylinder 1.
Note:
To prevent Oracle ASM from overwriting the partition table, you cannot use slices that start at cylinder 0 (for example, slice 2).
Make a note of the number of the slice that you want to use.
If you modified a partition table or created a new one, then enter the label command to write the partition table and label to the disk.
Enter q to return to the format menu.
If you have finished creating slices, then enter q to quit from the format utility. Otherwise, enter the disk command to select a new disk and repeat steps b to g to create or identify the slices on that disk.
If you plan to use existing slices, then enter the following command to verify that they are not mounted as file systems:
# df -h
This command displays information about the slices on disk devices that are mounted as file systems. The device name for a slice includes the disk device name followed by the slice number. For example: cxtydzsn, where sn is the slice number.
Enter commands similar to the following on every node to change the owner, group, and permissions on the character raw device file for each disk slice that you want to add to a disk group, where grid is the Oracle Grid Infrastructure installation owner, and asmadmin is the OSASM group:
# chown grid:asmadmin /dev/rdsk/cxtydzs6
# chmod 660 /dev/rdsk/cxtydzs6
In this example, the device name specifies slice 6.
Note:
If you are using a multi-pathing disk driver with Oracle Automatic Storage Management, then ensure that you set the permissions only on the correct logical device name for the disk.
Review the following sections to configure Oracle Automatic Storage Management (Oracle ASM) storage for Oracle Clusterware and Oracle Database Files:
The following section describes how to identify existing disk groups and determine the free disk space that they contain.
Optionally, identify failure groups for the Oracle ASM disk group devices.
If you intend to use a normal or high redundancy disk group, then you can further protect your database against hardware failure by associating a set of disk devices in a custom failure group. By default, each device comprises its own failure group. However, if two disk devices in a normal redundancy disk group are attached to the same SCSI controller, then the disk group becomes unavailable if the controller fails. The controller in this example is a single point of failure.
To protect against failures of this type, you could use two SCSI controllers, each with two disks, and define a failure group for the disks attached to each controller. This configuration would enable the disk group to tolerate the failure of one SCSI controller.
Note:
If you define custom failure groups, then you must specify a minimum of two failure groups for normal redundancy and three failure groups for high redundancy.
If you are sure that a suitable disk group does not exist on the system, then install or identify appropriate disk devices to add to a new disk group. Use the following guidelines when identifying appropriate disk devices:
All of the devices in an Oracle ASM disk group should be the same size and have the same performance characteristics.
Do not specify multiple partitions on a single physical disk as a disk group device. Oracle ASM expects each disk group device to be on a separate physical disk.
Although you can specify logical volumes as devices in an Oracle ASM disk group, Oracle does not recommend their use. Non-shared logical volumes are not supported with Oracle RAC. If you want to use logical volumes for your Oracle RAC database, then you must use shared logical volumes created by a cluster-aware logical volume manager.
Oracle ACFS is installed as part of an Oracle Grid Infrastructure installation (Oracle Clusterware and Oracle Automatic Storage Management) for 11g release 2 (11.2). You can configure Oracle ACFS for a database home, or use ASMCA to configure ACFS as a general purpose file system.
Note:
Oracle ACFS is supported on Oracle Solaris 11 and Oracle Solaris 10 Update 6 and later updates. All other Oracle Solaris releases supported with Oracle Grid Infrastructure for a Cluster 11g release 2 (11.2) are not supported for Oracle ACFS.
See Also:
For current information on platforms and releases that support Oracle ACFS, refer to My Oracle Support Note 1369107.1 at the following URL: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=1369107.1
To configure Oracle ACFS for an Oracle Database home for an Oracle RAC database:
Install Oracle Grid Infrastructure for a cluster (Oracle Clusterware and Oracle Automatic Storage Management)
Change directory to the Oracle Grid Infrastructure home. For example:
$ cd /u01/app/11.2.0/grid
Ensure that the Oracle Grid Infrastructure installation owner has read and write permissions on the storage mount point you want to use. For example, if you want to use the mount point /u02/acfsmounts/:
$ ls -l /u02/acfsmounts
Start Oracle ASM Configuration Assistant as the grid installation owner. For example:
./asmca
The Configure ASM: ASM Disk Groups page shows you the Oracle ASM disk group you created during installation. Click the ASM Cluster File Systems tab.
On the ASM Cluster File Systems page, right-click the Data disk, then select Create ACFS for Database Home.
In the Create ACFS Hosted Database Home window, enter the following information:
Database Home ADVM Volume Device Name: Enter the name of the database home. The name must be unique in your enterprise. For example: dbase_01
Database Home Mount Point: Enter the directory path for the mount point. For example: /u02/acfsmounts/dbase_01
Make a note of this mount point for future reference.
Database Home Size (GB): Enter in gigabytes the size you want the database home to be.
Database Home Owner Name: Enter the name of the Oracle Database installation owner you plan to use to install the database. For example: oracle1
Database Home Owner Group: Enter the OSDBA group whose members you plan to provide when you install the database. Members of this group are given operating system authentication for the SYSDBA privileges on the database. For example: dba1
Click OK when you have completed your entries.
Run the script generated by Oracle ASM Configuration Assistant as a privileged user (root). In an Oracle Clusterware environment, the script registers the ACFS as a resource managed by Oracle Clusterware. Registering ACFS as a resource helps Oracle Clusterware to mount the ACFS automatically in the proper order when ACFS is used for an Oracle RAC database home.
During Oracle RAC installation, ensure that you or the DBA who installs Oracle RAC selects for the Oracle home the mount point you provided in the Database Home Mount Point field (in the preceding example, /u02/acfsmounts/dbase_01).
See Also:
Oracle Database Storage Administrator's Guide for more information about configuring and managing your storage with Oracle ACFS
If you have an Oracle ASM installation from a prior release installed on your server, or in an existing Oracle Clusterware installation, then you can use Oracle Automatic Storage Management Configuration Assistant (ASMCA, located in the path Grid_home/bin) to upgrade the existing Oracle ASM instance to 11g release 2 (11.2), and subsequently configure failure groups, Oracle ASM volumes, and Oracle Automatic Storage Management Cluster File System (Oracle ACFS).
Note:
You must first shut down all database instances and applications on the node with the existing Oracle ASM instance before upgrading it.
During installation, if you are upgrading from an Oracle ASM release prior to 11.2, you chose to use Oracle ASM, and ASMCA detects that there is a prior Oracle ASM version installed in another Oracle ASM home, then after installing the Oracle ASM 11g release 2 (11.2) binaries, you can start ASMCA to upgrade the existing Oracle ASM instance. You can then choose to configure an Oracle ACFS deployment by creating Oracle ASM volumes and using the upgraded Oracle ASM to create the Oracle ACFS.
If you are upgrading from Oracle ASM 11g release 2 (11.2.0.1) or later, then Oracle ASM is always upgraded with Oracle Grid Infrastructure as part of the rolling upgrade, and ASMCA is started by the root scripts during upgrade. ASMCA cannot perform a separate upgrade of Oracle ASM from release 11.2.0.1 to 11.2.0.2.
On an existing Oracle Clusterware or Oracle RAC installation, if the prior version of Oracle ASM instances on all nodes is 11g release 1, then you are provided with the option to perform a rolling upgrade of Oracle ASM instances. If the prior version of Oracle ASM instances on an Oracle RAC installation are from a release prior to 11g release 1, then rolling upgrades cannot be performed. Oracle ASM on all nodes will be upgraded to 11g release 2 (11.2).
With the release of Oracle Database 11g release 2 (11.2) and Oracle RAC 11g release 2 (11.2), using Database Configuration Assistant or the installer to store Oracle Clusterware or Oracle Database files directly on block or raw devices is not supported.
If you intend to upgrade an existing Oracle RAC database, or an Oracle RAC database with Oracle ASM instances, then you can use an existing raw or block device partition, and perform a rolling upgrade of your existing installation. Performing a new installation using block or raw devices is not allowed.